Crowded Scene Analysis: A Survey <s> A. Models in Crowd Dynamics <s> It is suggested that the motion of pedestrians can be described as if they would be subject to ``social forces.'' These ``forces'' are not directly exerted by the pedestrians' personal environment, but they are a measure for the internal motivations of the individuals to perform certain actions (movements). The corresponding force concept is discussed in more detail and can also be applied to the description of other behaviors. In the presented model of pedestrian behavior several force terms are essential: first, a term describing the acceleration towards the desired velocity of motion; second, terms reflecting that a pedestrian keeps a certain distance from other pedestrians and borders; and third, a term modeling attractive effects. The resulting equations of motion are nonlinearly coupled Langevin equations. Computer simulations of crowds of interacting pedestrians show that the social force model is capable of describing the self-organization of several observed collective effects of pedestrian behavior very realistically. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> A. Models in Crowd Dynamics <s> We present a real-time crowd model based on continuum dynamics. In our model, a dynamic potential field simultaneously integrates global navigation with moving obstacles such as other people, efficiently solving for the motion of large crowds without the need for explicit collision avoidance. Simulations created with our system run at interactive rates, demonstrate smooth flow under a variety of conditions, and naturally exhibit emergent phenomena that have been observed in real crowds. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> A. Models in Crowd Dynamics <s> In this paper we introduce a novel method to detect and localize abnormal behaviors in crowd videos using the Social Force model. For this purpose, a grid of particles is placed over the image and it is advected with the space-time average of optical flow. By treating the moving particles as individuals, their interaction forces are estimated using the social force model. The interaction force is then mapped into the image plane to obtain Force Flow for every pixel in every frame. Randomly selected spatio-temporal volumes of Force Flow are used to model the normal behavior of the crowd. We classify frames as normal and abnormal by using a bag of words approach. The regions of anomalies in the abnormal frames are localized using interaction forces. The experiments are conducted on a publicly available dataset from the University of Minnesota for escape panic scenarios and a challenging dataset of crowd videos taken from the web. The experiments show that the proposed method captures the dynamics of the crowd behavior successfully. In addition, we have shown that the social force approach outperforms similar approaches based on pure optical flow. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> A. Models in Crowd Dynamics <s> We propose a new scheme for detecting and localizing the abnormal crowd behavior in video sequences. The proposed method starts from the assumption that the interaction force, as estimated by the Social Force Model (SFM), is a significant feature to analyze crowd behavior. We step forward this hypothesis by optimizing this force using Particle Swarm Optimization (PSO) to perform the advection of a particle population spread randomly over the image frames.
The population of particles is drifted towards the areas of the main image motion, driven by the PSO fitness function aimed at minimizing the interaction force, so as to model the most diffused, normal, behavior of the crowd. In this way, anomalies can be detected by checking if some particles (forces) do not fit the estimated distribution, and this is done by a RANSAC-like method followed by a segmentation algorithm to finely localize the abnormal areas. A large set of experiments is carried out on publicly available datasets, and results show the consistently higher performance of the proposed method as compared to other state-of-the-art algorithms, proving the goodness of the proposed approach. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> A. Models in Crowd Dynamics <s> This paper proposes a novel method to locate crowd behavior instability spatio-temporally using a velocity-field based social force model. Considering the impacts of the velocity field on the interaction force between individuals, we establish an improved social force model by introducing collision probability in view of velocity distribution. As compared with the commonly-used social force model, which defines interaction force as a dependent variable of the relative geometric (physical) position of the individuals, this improved model can provide a better prediction of interactions using the collision probability in a dynamic crowd. With spatio-temporal instability analysis, we can extract video clips with potential abnormality and also locate regions of interest where abnormality is likely to happen. The experimental results demonstrate that the proposed method can be applied to the detection of abnormal events with high accuracy of instability estimation due to the velocity-field based social force model. <s> BIB005
Crowd dynamics has been studied intensively for more than 40 years. It can be considered as the study of how and where crowds form and move above the critical density level, and how individuals in the crowd interact with each other to influence the crowd status. In the past, studies were mainly conducted to support the planning of urban infrastructure, e.g., building entrances and corridors. There are two major approaches to the computational modeling of crowd behavior: the continuum-based approach and the agent-based approach. In crowd simulation, both have been frequently used to reproduce crowd phenomena. The continuum-based approach works better at the macroscopic level for medium- and high-density crowds, while the agent-based approach is more suitable for low-density crowds at the microscopic level, where the movement of each individual pedestrian is of concern. In the continuum-based approach, the crowd is treated as a physical fluid of particles, so many analytical methods from statistical mechanics and thermodynamics can be introduced, BIB002. Hughes et al. developed a model representing pedestrians as a continuous density field, and presented a pair of elegant partial differential equations describing the crowd dynamics. Moreover, a real-time crowd model based on continuum dynamics has been presented in BIB002. It yields a set of dynamic potentials and velocity fields to guide the individuals' motions. In the agent-based approach, individuals in the crowd are considered autonomous agents which actively sense the environment and make decisions according to some predefined rules, BIB001. Following this style, the social force model (SFM), first proposed by Helbing et al. BIB001, has been proven capable of reproducing specific crowd phenomena. The assumption is that the interaction force between pedestrians is a significant feature for analyzing crowd behaviors. The SFM can be formulated as

$$m_i \frac{dv_i}{dt} = m_i \frac{v_i^p - v_i}{\tau_i} + F_{int},$$

where $m_i$ denotes the mass of the individual, $v_i$ indicates its actual velocity, which varies given the presence of obstacles in the scene, $\tau_i$ is a relaxation parameter, $F_{int}$ indicates the interaction force encountered by the individual, defined as the sum of attractive and repulsive forces, and $v_i^p$ is the desired velocity of the individual. Fig. 3 visualizes the forces and velocities in the SFM. The generalized SFM has been adopted as the basic model in many studies of crowd behavior analysis BIB003, BIB004, BIB005. Furthermore, the calibrated agent-based framework proposed in could also accurately model a number of observed crowd phenomena.
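To make the formulation concrete, the following is a minimal NumPy sketch of one SFM update step. The exponential repulsion kernel, its parameter values, and the forward Euler integration are illustrative assumptions of this sketch, not the calibrated model of BIB001.

```python
import numpy as np

def social_force_step(pos, vel, desired_vel, mass, tau, dt=0.1,
                      rep_strength=2.0, rep_range=0.5):
    """One Euler step of a simplified social force model.

    pos, vel, desired_vel: (N, 2) arrays; mass, tau: (N,) arrays.
    The exponential repulsion kernel and its parameters are
    illustrative assumptions, not Helbing's calibrated values.
    """
    n = len(pos)
    f_int = np.zeros_like(pos)
    for i in range(n):
        diff = pos[i] - pos                    # vectors from others to i
        dist = np.linalg.norm(diff, axis=1)
        mask = dist > 0                        # exclude the agent itself
        # Repulsion decays exponentially with inter-pedestrian distance.
        f_int[i] = np.sum(rep_strength * np.exp(-dist[mask, None] / rep_range)
                          * diff[mask] / dist[mask, None], axis=0)
    # Driving term: relax the actual velocity toward the desired velocity.
    f_drive = mass[:, None] * (desired_vel - vel) / tau[:, None]
    acc = (f_drive + f_int) / mass[:, None]    # m_i dv_i/dt = f_drive + f_int
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel
```

Iterating this step for a handful of agents can already exhibit simple collective effects such as mutual avoidance, which is the sense in which the SFM reproduces specific crowd phenomena.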
Crowded Scene Analysis: A Survey <s> B. Crowd Model in Video Analysis <s> This text develops and applies the techniques used to solve problems in fluid mechanics on computers and describes in detail those most often used in practice. It includes advanced techniques in computational fluid dynamics, such as direct and large-eddy simulation of turbulence, multigrid methods, parallel computing, moving grids, structured, block-structured and unstructured boundary-fitted grids, and free surface flows. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> B. Crowd Model in Video Analysis <s> This work presents an approach for generating video evidence of dangerous situations in crowded scenes. The scenarios of interest are those with high safety risk such as blocked exit, collapse of a person in the crowd, and escape panic. Real visual evidence for these scenarios is rare or unsafe to reproduce in a controllable way. Thus there is a need for simulation to allow training and validation of computer vision systems applied to crowd monitoring. The results shown here demonstrate how to simulate the most important aspects of crowds for performance analysis of computer based video surveillance systems. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> B. Crowd Model in Video Analysis <s> Computer vision algorithms have played a pivotal role in commercial video surveillance systems for a number of years. However, a common weakness among these systems is their inability to handle crowded scenes. In this thesis, we have developed algorithms that overcome some of the challenges encountered in videos of crowded environments such as sporting events, religious festivals, parades, concerts, train stations, airports, and malls. We adopt a top-down approach by first performing a global-level analysis that locates dynamically distinct crowd regions within the video. This knowledge is then employed in the detection of abnormal behaviors and tracking of individual targets within crowds. In addition, the thesis explores the utility of contextual information necessary for persistent tracking and re-acquisition of objects in crowded scenes. ::: For the global-level analysis, a framework based on Lagrangian Particle Dynamics is proposed to segment the scene into dynamically distinct crowd regions or groupings. For this purpose, the spatial extent of the video is treated as a phase space of a time-dependent dynamical system in which transport from one region of the phase space to another is controlled by the optical flow. Next, a grid of particles is advected forward in time through the phase space using a numerical integration to generate a "flow map". The flow map relates the initial positions of particles to their final positions. The spatial gradients of the flow map are used to compute a Cauchy Green Deformation tensor that quantifies the amount by which the neighboring particles diverge over the length of the integration. The maximum eigenvalue of the tensor is used to construct a forward Finite Time Lyapunov Exponent (FTLE) field that reveals the Attracting Lagrangian Coherent Structures (LCS). The same process is repeated by advecting the particles backward in time to obtain a backward FTLE field that reveals the repelling LCS. The attracting and repelling LCS are the time dependent invariant manifolds of the phase space and correspond to the boundaries between dynamically distinct crowd flows. 
The forward and backward FTLE fields are combined to obtain one scalar field that is segmented using a watershed segmentation algorithm to obtain the labeling of distinct crowd-flow segments. Next, abnormal behaviors within the crowd are localized by detecting changes in the number of crowd-flow segments over time. ::: Next, the global-level knowledge of the scene generated by the crowd-flow segmentation is used as an auxiliary source of information for tracking an individual target within a crowd. This is achieved by developing a scene structure-based force model. This force model captures the notion that an individual, when moving in a particular scene, is subjected to global and local forces that are functions of the layout of that scene and the locomotive behavior of other individuals in his or her vicinity. The key ingredients of the force model are three floor fields that are inspired by research in the field of evacuation dynamics; namely, Static Floor Field (SFF), Dynamic Floor Field (DFF), and Boundary Floor Field (BFF). These fields determine the probability of moving from one location to the next by converting the long-range forces into local forces. The SFF specifies regions of the scene that are attractive in nature, such as an exit location. The DFF, which is based on the idea of active walker models, corresponds to the virtual traces created by the movements of nearby individuals in the scene. The BFF specifies influences exhibited by the barriers within the scene, such as walls and no-entry areas. By combining influence from all three fields with the available appearance information, we are able to track individuals in high-density crowds. The results are reported on real-world sequences of marathons and railway stations that contain thousands of people. A comparative analysis with respect to an appearance-based mean shift tracker is also conducted by generating the ground truth. The result of this analysis demonstrates the benefit of using floor fields in crowded scenes. ::: The occurrence of occlusion is very frequent in crowded scenes due to a high number of interacting objects. To overcome this challenge, we propose an algorithm that has been developed to augment a generic tracking algorithm to perform persistent tracking in crowded environments. The algorithm exploits the contextual knowledge, which is divided into two categories consisting of motion context (MC) and appearance context (AC). The MC is a collection of trajectories that are representative of the motion of the occluded or unobserved object. These trajectories belong to other moving individuals in a given environment. The MC is constructed using a clustering scheme based on the Lyapunov Characteristic Exponent (LCE), which measures the mean exponential rate of convergence or divergence of the nearby trajectories in a given state space. Next, the MC is used to predict the location of the occluded or unobserved object in a regression framework. It is important to note that the LCE is used for measuring divergence between a pair of particles while the FTLE field is obtained by computing the LCE for a grid of particles. The appearance context (AC) of a target object consists of its own appearance history and appearance information of the other objects that are occluded. The intent is to make the appearance descriptor of the target object more discriminative with respect to other unobserved objects, thereby reducing the possible confusion between the unobserved objects upon re-acquisition. 
This is achieved by learning the distribution of the intra-class variation of each occluded object using all of its previous observations. In addition, a distribution of inter-class variation for each target-unobservable object pair is constructed. Finally, the re-acquisition decision is made using both the MC and the AC. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> B. Crowd Model in Video Analysis <s> This paper presents an algorithm for tracking individual targets in high density crowd scenes containing hundreds of people. Tracking in such a scene is extremely challenging due to the small number of pixels on the target, appearance ambiguity resulting from the dense packing, and severe inter-object occlusions. The novel tracking algorithm, which is outlined in this paper, will overcome these challenges using a scene structure based force model. In this force model an individual, when moving in a particular scene, is subjected to global and local forces that are functions of the layout of that scene and the locomotive behavior of other individuals in the scene. The key ingredients of the force model are three floor fields, which are inspired by the research in the field of evacuation dynamics, namely Static Floor Field (SFF), Dynamic Floor Field (DFF), and Boundary Floor Field (BFF). These fields determine the probability of moving from one location to another by converting the long-range forces into local ones. The SFF specifies regions of the scene which are attractive in nature (e.g. an exit location). The DFF specifies the immediate behavior of the crowd in the vicinity of the individual being tracked. The BFF specifies influences exhibited by the barriers in the scene (e.g. walls, no-go areas). By combining cues from all three fields with the available appearance information, we track individual targets in high density crowds. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> B. Crowd Model in Video Analysis <s> In this paper we introduce a novel method to detect and localize abnormal behaviors in crowd videos using the Social Force model. For this purpose, a grid of particles is placed over the image and it is advected with the space-time average of optical flow. By treating the moving particles as individuals, their interaction forces are estimated using the social force model. The interaction force is then mapped into the image plane to obtain Force Flow for every pixel in every frame. Randomly selected spatio-temporal volumes of Force Flow are used to model the normal behavior of the crowd. We classify frames as normal and abnormal by using a bag of words approach. The regions of anomalies in the abnormal frames are localized using interaction forces. The experiments are conducted on a publicly available dataset from the University of Minnesota for escape panic scenarios and a challenging dataset of crowd videos taken from the web. The experiments show that the proposed method captures the dynamics of the crowd behavior successfully. In addition, we have shown that the social force approach outperforms similar approaches based on pure optical flow. <s> BIB005 </s> Crowded Scene Analysis: A Survey <s> B. Crowd Model in Video Analysis <s> This paper presents a target tracking framework for unstructured crowded scenes. Unstructured crowded scenes are defined as those scenes where the motion of a crowd appears to be random with different participants moving in different directions over time.
This means each spatial location in such scenes supports more than one, or multi-modal, crowd behavior. The case of tracking in structured crowded scenes, where the crowd moves coherently in a common direction, and the direction of motion does not vary over time, was previously handled in [1]. In this work, we propose to model various crowd behavior (or motion) modalities at different locations of the scene by employing the Correlated Topic Model (CTM) of [16]. In our construction, words correspond to low-level quantized motion features and topics correspond to crowd behaviors. It is then assumed that motion at each location in an unstructured crowd scene is generated by a set of behavior proportions, where behaviors represent distributions over low-level motion features. This way any one location in the scene may support multiple crowd behavior modalities and can be used as prior information for tracking. Our approach enables us to model a diverse set of unstructured crowd domains, which range from cluttered time-lapse microscopy videos of cell populations in vitro, to footage of crowded sporting events. <s> BIB006 </s> Crowded Scene Analysis: A Survey <s> B. Crowd Model in Video Analysis <s> A novel method for crowd flow modeling and anomaly detection is proposed for both coherent and incoherent scenes. The novelty is revealed in three aspects. First, it is a unique utilization of particle trajectories for modeling crowded scenes, in which we propose new and efficient representative trajectories for modeling arbitrarily complicated crowd flows. Second, chaotic dynamics are introduced into the crowd context to characterize complicated crowd motions by regulating a set of chaotic invariant features, which are reliably computed and used for detecting anomalies. Third, a probabilistic framework for anomaly detection and localization is formulated. The overall work-flow begins with particle advection based on optical flow. Then particle trajectories are clustered to obtain representative trajectories for a crowd flow. Next, the chaotic dynamics of all representative trajectories are extracted and quantified using chaotic invariants known as maximal Lyapunov exponent and correlation dimension. A probabilistic model is learned from this chaotic feature set, and finally, a maximum likelihood estimation criterion is adopted to identify a query video of a scene as normal or abnormal. Furthermore, an effective anomaly localization algorithm is designed to locate the position and size of an anomaly. Experiments are conducted on a known crowd data set, and results show that our method achieves higher accuracy in anomaly detection and can effectively localize anomalies. <s> BIB007 </s> Crowded Scene Analysis: A Survey <s> B. Crowd Model in Video Analysis <s> Based on the Lagrangian framework for fluid dynamics, a streakline representation of flow is presented to solve computer vision problems involving crowd and traffic flow. Streaklines are traced in a fluid flow by injecting color material, such as smoke or dye, which is transported with the flow and used for visualization. In the context of computer vision, streaklines may be used in a similar way to transport information about a scene, and they are obtained by repeatedly initializing a fixed grid of particles at each frame, then moving both current and past particles using optical flow. Streaklines are the locus of points that connect particles which originated from the same initial position.
In this paper, a streakline technique is developed to compute several important aspects of a scene, such as flow and potential functions using the Helmholtz decomposition theorem. This leads to a representation of the flow that more accurately recognizes spatial and temporal changes in the scene, compared with other commonly used flow representations. Applications of the technique to segmentation and behavior analysis provide comparison to previously employed techniques, showing that the streakline method outperforms the state-of-the-art in segmentation, and opening a new domain of application for crowd analysis based on potentials. <s> BIB008 </s> Crowded Scene Analysis: A Survey <s> B. Crowd Model in Video Analysis <s> Analyzing the crowd dynamics from video sequences is an open challenge in computer vision. Under a high crowd density assumption, we characterize the dynamics of the crowd flow by two related quantities: the velocity and a disturbance potential which accounts for several elements likely to disturb the flow (the density of pedestrians, their interactions with the flow and the environment). The aim of this paper is to simultaneously estimate from a sequence of crowded images those two quantities. While the velocity of the flow can be observed directly from the images with traditional techniques, this disturbance potential is far trickier to estimate. We propose here to couple, through optimal control theory, a dynamical crowd evolution model with observations from the image sequence in order to estimate at the same time those two quantities from a video sequence. For this purpose, we derive a new and original continuum formulation of the crowd dynamics which appears to be well adapted to dense crowd video sequences. We demonstrate the efficiency of our approach on both synthetic and real crowd videos. <s> BIB009 </s> Crowded Scene Analysis: A Survey <s> B. Crowd Model in Video Analysis <s> This paper proposes a novel method to locate crowd behavior instability spatio-temporally using a velocity-field based social force model. Considering the impacts of the velocity field on the interaction force between individuals, we establish an improved social force model by introducing collision probability in view of velocity distribution. As compared with the commonly-used social force model, which defines interaction force as a dependent variable of the relative geometric (physical) position of the individuals, this improved model can provide a better prediction of interactions using the collision probability in a dynamic crowd. With spatio-temporal instability analysis, we can extract video clips with potential abnormality and also locate regions of interest where abnormality is likely to happen. The experimental results demonstrate that the proposed method can be applied to the detection of abnormal events with high accuracy of instability estimation due to the velocity-field based social force model. <s> BIB010 </s> Crowded Scene Analysis: A Survey <s> B. Crowd Model in Video Analysis <s> In this paper, a Random Field Topic (RFT) model is proposed for semantic region analysis from motions of objects in crowded scenes. Different from existing approaches of learning semantic regions either from optical flows or from complete trajectories, our model assumes that fragments of trajectories (called tracklets) are observed in crowded scenes.
It advances the existing Latent Dirichlet Allocation topic model, by integrating Markov random fields (MRF) as a prior to enforce the spatial and temporal coherence between tracklets during the learning process. Two kinds of MRF, pairwise MRF and the forest of randomly spanning trees, are defined. Another contribution of this model is to include sources and sinks as a high-level semantic prior, which effectively improves the learning of semantic regions and the clustering of tracklets. Experiments on a large-scale data set, which includes 40,000+ tracklets collected from the crowded New York Grand Central station, show that our model outperforms state-of-the-art methods both on qualitative results of learning semantic regions and on quantitative results of clustering tracklets. <s> BIB011 </s> Crowded Scene Analysis: A Survey <s> B. Crowd Model in Video Analysis <s> As surveillance becomes ubiquitous, the amount of data to be processed grows along with the demand for manpower to interpret the data. A key goal of surveillance is to detect behaviors that can be considered anomalous. As a result, an extensive body of research in automated surveillance has been developed, often with the goal of automatic detection of anomalies. Research into anomaly detection in automated surveillance covers a wide range of domains, employing a vast array of techniques. This review presents an overview of recent research approaches on the topic of anomaly detection in automated surveillance. The reviewed studies are analyzed across five aspects: surveillance target, anomaly definitions and assumptions, types of sensors used and the feature extraction processes, learning methods, and modeling algorithms. <s> BIB012 </s> Crowded Scene Analysis: A Survey <s> B. Crowd Model in Video Analysis <s> In this paper, a new Mixture model of Dynamic pedestrian-Agents (MDA) is proposed to learn the collective behavior patterns of pedestrians in crowded scenes. Collective behaviors characterize the intrinsic dynamics of the crowd. From the agent-based modeling, each pedestrian in the crowd is driven by a dynamic pedestrian-agent, which is a linear dynamic system with its initial and termination states reflecting a pedestrian's belief of the starting point and the destination. Then the whole crowd is modeled as a mixture of dynamic pedestrian-agents. Once the model is learned from real data in an unsupervised manner, MDA can simulate the crowd behaviors. Furthermore, MDA can infer the past behaviors and predict the future behaviors of pedestrians given only partially observed trajectories, and classify different pedestrian behaviors in the scene. The effectiveness of MDA and its applications are demonstrated by qualitative and quantitative experiments on the video surveillance dataset collected from the New York Grand Central Station. <s> BIB013
There has been a series of attempts to incorporate the research findings of crowd simulation into automatic crowded scene analysis BIB002. Several physics-inspired crowd models have been utilized for the purposes of recognition and classification BIB007, BIB008, BIB005, BIB003, BIB012. For analysis at the macroscopic level, holistic properties of the scene are usually modeled. Assuming that a high-density crowd behaves like a complex dynamic system, many dynamical crowd evolution models have been proposed BIB007-BIB008, BIB009. The concepts of motion field and dynamical potential were borrowed from the fluid dynamics community BIB001. The motion field is a rich dynamical descriptor that can be related to the velocity of the flow, while the potential accounts for several physical quantities such as the density or the pressure in the flow. Although people do not always follow the laws of physics (they choose their own directions, do not conserve momentum, and can stop and start at will), the coupling of crowd dynamics and real data has exhibited promising results in crowd video analysis and opened a rich area of research. At the microscopic level, agent-based models have also been popular in video analysis. They analyze the stimuli, or driving factors, of crowd behavior, based on the assumption that crowd behavior originates from the interactions of its elementary individuals. Mehran et al. BIB005 and Zhao et al. BIB010 applied the SFM to detect abnormal crowd events. Zhou et al. BIB011, BIB013 used a dynamic pedestrian-agent model to learn the collective behavior patterns of pedestrians in crowded scenes. It is noted that cues from the two levels can be used jointly. For example, BIB006, BIB004 employ global motion information to improve the tracking of individuals in a crowded scene. Conversely, the microscopic information of individual movements can serve as the basic units of holistic scene models. Besides, visual feature extraction, object tracking, learning, and other related algorithms from the vision area also play important roles in crowded scene analysis; brief introductions to them are given in the method descriptions.
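As a concrete illustration of how an agent-based model can drive detection, the sketch below scores frames by the magnitude of an SFM-style interaction force recovered from particle velocities, loosely following the force-flow idea of BIB005. The unit mass, the value of tau, and the percentile threshold are assumptions of this sketch.

```python
import numpy as np

def frame_anomaly_scores(vel, desired_vel, tau=0.5):
    """Score frames by the mean magnitude of the estimated interaction force.

    vel, desired_vel: (T, N, 2) arrays of per-particle actual and desired
    (e.g., spatially smoothed) velocities.  Rearranging the SFM equation,
    F_int = m * dv/dt - m * (v^p - v) / tau; unit mass is assumed here.
    """
    accel = np.diff(vel, axis=0)              # dv/dt with a unit time step
    drive = (desired_vel - vel)[:-1] / tau    # driving (relaxation) term
    f_int = accel - drive                     # estimated interaction force
    return np.linalg.norm(f_int, axis=2).mean(axis=1)  # one score per frame

# Frames whose score exceeds, say, the 95th percentile of scores observed
# on normal footage could then be flagged as candidate abnormal frames.
```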
Crowded Scene Analysis: A Survey <s> A. Flow-Based Features <s> This text develops and applies the techniques used to solve problems in fluid mechanics on computers and describes in detail those most often used in practice. It includes advanced techniques in computational fluid dynamics, such as direct and large-eddy simulation of turbulence, multigrid methods, parallel computing, moving grids, structured, block-structured and unstructured boundary-fitted grids, and free surface flows. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> A. Flow-Based Features <s> A new method for the visualization of two-dimensional fluid flow is presented. The method is based on the advection and decay of dye. These processes are simulated by defining each frame of a flow animation as a blend between a warped version of the previous image and a number of background images. For the latter a sequence of filtered white noise images is used: filtered in time and space to remove high frequency components. Because all steps are done using images, the method is named Image Based Flow Visualization (IBFV). With IBFV a wide variety of visualization techniques can be emulated. Flow can be visualized as moving textures with line integral convolution and spot noise. Arrow plots, streamlines, particles, and topological images can be generated by adding extra dye to the image. Unsteady flows, defined on arbitrary meshes, can be handled. IBFV achieves a high performance by using standard features of graphics hardware. Typically fifty frames per second are generated using standard graphics cards on PCs. Finally, IBFV is easy to understand, analyse, and implement. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> A. Flow-Based Features <s> This paper presents a review of current research in the area of real-time crowd simulation. Crowd simulation has many diverse uses, for example in safety modelling, entertainment software, architecture and urban modelling applications. We describe three main approaches to the problem - fluid-based, cellular automata and particle-based, concentrating on the latter. Finally, we describe CrowdSim, a simple but effective implementation of some of the techniques. In this paper, we present a review of research into the topic of real-time crowd simulation. We also describe the implementation of CrowdSim, a 2-dimensional simulation based upon some of the ideas presented in the paper. We define a crowd to be a collection of pedestrians occupying a common area and with varying degrees of interaction with each other. It is fair to ask the question of why we would want to model crowds. The following are some common reasons: • Safety simulation - crowd simulation has been successfully used to model the flow of pedestrians in emergency situations, for example exiting a building on fire ([Helbing et al. 2000]). • Architectural simulation - crowd models can be used to test the suitability of building designs and to enhance the realism of pre-construction models. • Urban modelling - realistic pedestrians can add to the realism of virtual tourism ([Ulicny and Thalmann 2002]) and can be an aid to town planning. • Entertainment software - many modern computer games create virtual worlds for players to inhabit and these are enhanced by the realistic simulation of pedestrians. • It's an interesting problem - apart from anything else, the modelling of pedestrians is an interesting mathematical problem. The topic is a large one and it's therefore been necessary to restrict detail in some areas.
In particular, we have chosen to focus less on visualisation, as much of the work is equally applicable in 2-dimensional applications as it is in 3 dimensions. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> A. Flow-Based Features <s> We study an energy functional for computing optical flow that combines three assumptions: a brightness constancy assumption, a gradient constancy assumption, and a discontinuity-preserving spatio-temporal smoothness constraint. In order to allow for large displacements, linearisations in the two data terms are strictly avoided. We present a consistent numerical scheme based on two nested fixed point iterations. By proving that this scheme implements a coarse-to-fine warping strategy, we give a theoretical foundation for warping which has been used on a mainly experimental basis so far. Our evaluation demonstrates that the novel method gives significantly smaller angular errors than previous techniques for optical flow estimation. We show that it is fairly insensitive to parameter variations, and we demonstrate its excellent robustness under noise. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> A. Flow-Based Features <s> This paper develops the theory and computation of Lagrangian Coherent Structures (LCS), which are defined as ridges of Finite-Time Lyapunov Exponent (FTLE) fields. These ridges can be seen as finite-time mixing templates. Such a framework is common in dynamical systems theory for autonomous and time-periodic systems, in which examples of LCS are stable and unstable manifolds of fixed points and periodic orbits. The concepts defined in this paper remain applicable to flows with arbitrary time dependence and, in particular, to flows that are only defined (computed or measured) over a finite interval of time. Previous work has demonstrated the usefulness of FTLE fields and the associated LCSs for revealing the Lagrangian behavior of systems with general time dependence. However, ridges of the FTLE field need not be exactly advected with the flow. The main result of this paper is an estimate for the flux across an LCS, which shows that the flux is small, and in most cases negligible, for well-defined LCSs or those that rotate at a speed comparable to the local Eulerian velocity field, and are computed from FTLE fields with a sufficiently long integration time. Under these hypotheses, the structures represent nearly invariant manifolds even in systems with arbitrary time dependence. ::: The results are illustrated on three examples. The first is a simplified dynamical model of a double-gyre flow. The second is surface current data collected by high-frequency radar stations along the coast of Florida and the third is unsteady separation over an airfoil. In all cases, the existence of LCSs governs the transport and it is verified numerically that the flux of particles through these distinguished lines is indeed negligible. <s> BIB005 </s> Crowded Scene Analysis: A Survey <s> A. Flow-Based Features <s> This paper proposes a framework in which Lagrangian particle dynamics is used for the segmentation of high density crowd flows and detection of flow instabilities. For this purpose, a flow field generated by a moving crowd is treated as an aperiodic dynamical system. A grid of particles is overlaid on the flow field, and is advected using a numerical integration scheme.
The evolution of particles through the flow is tracked using a flow map, whose spatial gradients are subsequently used to set up a Cauchy Green deformation tensor for quantifying the amount by which the neighboring particles have diverged over the length of the integration. The maximum eigenvalue of the tensor is used to construct a finite time Lyapunov exponent (FTLE) field, which reveals the Lagrangian coherent structures (LCS) present in the underlying flow. The LCS divide the flow into regions of qualitatively different dynamics and are used to locate boundaries of the flow segments in a normalized cuts framework. Any change in the number of flow segments over time is regarded as an instability, which is detected by establishing correspondences between flow segments over time. The experiments are conducted on a challenging set of videos taken from Google Video and a National Geographic documentary. <s> BIB006 </s> Crowded Scene Analysis: A Survey <s> A. Flow-Based Features <s> Learning typical motion patterns or activities from videos of crowded scenes is an important visual surveillance problem. To detect typical motion patterns in crowded scenarios, we propose a new method which utilizes the instantaneous motions of a video, i.e., the motion flow field, instead of long-term motion tracks. The motion flow field is a union of independent flow vectors computed in different frames. Detecting motion patterns in this flow field can therefore be formulated as a clustering problem of the motion flow fields, where each motion pattern consists of a group of flow vectors participating in the same process or motion. We first construct a directed neighborhood graph to measure the closeness of flow vectors. A hierarchical agglomerative clustering algorithm is applied to group flow vectors into desired motion patterns. <s> BIB007 </s> Crowded Scene Analysis: A Survey <s> A. Flow-Based Features <s> Learning dominant motion patterns or activities from a video is an important surveillance problem, especially in crowded environments like markets, subways etc., where tracking of individual objects is hard if not impossible. In this paper, we propose an algorithm that uses the instantaneous motion field of the video instead of long-term motion tracks for learning the motion patterns. The motion field is a collection of independent flow vectors detected in each frame of the video where each flow vector is associated with a spatial location. A motion pattern is then defined as a group of flow vectors that are part of the same physical process or motion. Algorithmically, this is accomplished by first detecting the representative modes (sinks) of the motion patterns, followed by construction of super tracks, which are the collective representation of the discovered motion patterns. We also use the super tracks for event-based video matching. The efficacy of the approach is demonstrated on challenging real-world sequences. <s> BIB008 </s> Crowded Scene Analysis: A Survey <s> A. Flow-Based Features <s> In this paper we introduce a novel method to detect and localize abnormal behaviors in crowd videos using the Social Force model. For this purpose, a grid of particles is placed over the image and it is advected with the space-time average of optical flow. By treating the moving particles as individuals, their interaction forces are estimated using the social force model. The interaction force is then mapped into the image plane to obtain Force Flow for every pixel in every frame.
Randomly selected spatio-temporal volumes of Force Flow are used to model the normal behavior of the crowd. We classify frames as normal and abnormal by using a bag of words approach. The regions of anomalies in the abnormal frames are localized using interaction forces. The experiments are conducted on a publicly available dataset from the University of Minnesota for escape panic scenarios and a challenging dataset of crowd videos taken from the web. The experiments show that the proposed method captures the dynamics of the crowd behavior successfully. In addition, we have shown that the social force approach outperforms similar approaches based on pure optical flow. <s> BIB009 </s> Crowded Scene Analysis: A Survey <s> A. Flow-Based Features <s> A novel method for crowd flow modeling and anomaly detection is proposed for both coherent and incoherent scenes. The novelty is revealed in three aspects. First, it is a unique utilization of particle trajectories for modeling crowded scenes, in which we propose new and efficient representative trajectories for modeling arbitrarily complicated crowd flows. Second, chaotic dynamics are introduced into the crowd context to characterize complicated crowd motions by regulating a set of chaotic invariant features, which are reliably computed and used for detecting anomalies. Third, a probabilistic framework for anomaly detection and localization is formulated. The overall work-flow begins with particle advection based on optical flow. Then particle trajectories are clustered to obtain representative trajectories for a crowd flow. Next, the chaotic dynamics of all representative trajectories are extracted and quantified using chaotic invariants known as maximal Lyapunov exponent and correlation dimension. A probabilistic model is learned from this chaotic feature set, and finally, a maximum likelihood estimation criterion is adopted to identify a query video of a scene as normal or abnormal. Furthermore, an effective anomaly localization algorithm is designed to locate the position and size of an anomaly. Experiments are conducted on a known crowd data set, and results show that our method achieves higher accuracy in anomaly detection and can effectively localize anomalies. <s> BIB010 </s> Crowded Scene Analysis: A Survey <s> A. Flow-Based Features <s> Based on the Lagrangian framework for fluid dynamics, a streakline representation of flow is presented to solve computer vision problems involving crowd and traffic flow. Streaklines are traced in a fluid flow by injecting color material, such as smoke or dye, which is transported with the flow and used for visualization. In the context of computer vision, streaklines may be used in a similar way to transport information about a scene, and they are obtained by repeatedly initializing a fixed grid of particles at each frame, then moving both current and past particles using optical flow. Streaklines are the locus of points that connect particles which originated from the same initial position. In this paper, a streakline technique is developed to compute several important aspects of a scene, such as flow and potential functions using the Helmholtz decomposition theorem. This leads to a representation of the flow that more accurately recognizes spatial and temporal changes in the scene, compared with other commonly used flow representations.
Applications of the technique to segmentation and behavior analysis provide comparison to previously employed techniques, showing that the streakline method outperforms the state-of-the-art in segmentation, and opening a new domain of application for crowd analysis based on potentials. <s> BIB011 </s> Crowded Scene Analysis: A Survey <s> A. Flow-Based Features <s> Efficient analysis of human behavior in video surveillance scenes is a very challenging problem. Most traditional approaches fail when applied in real conditions and contexts involving large numbers of persons, appearance ambiguity, and occlusion. In this work, we propose to deal with this problem by modeling the global motion information obtained from optical flow vectors. The obtained direction and magnitude models learn the dominant motion orientations and magnitudes at each spatial location of the scene and are used to detect the major motion patterns. The applied region-based segmentation algorithm groups local blocks that share the same motion direction and speed and allows a subregion of the scene to appear in different patterns. The second part of the approach consists in the detection of events related to groups of people, namely merge, split, walk, run, local dispersion, and evacuation, by analyzing the instantaneous optical flow vectors and comparing the learned models. The approach is validated and experimented on standard datasets of the computer vision community. The qualitative and quantitative results are discussed. <s> BIB012 </s> Crowded Scene Analysis: A Survey <s> A. Flow-Based Features <s> This paper presents a novel method to extract dominant motion patterns (MPs) and the main entry/exit areas from a surveillance video. The method first computes motion histograms for each pixel and then converts it into orientation distribution functions (ODFs). Given these ODFs, a novel particle meta-tracking procedure is launched which produces meta-tracks, i.e. particle trajectories. As opposed to conventional tracking which focuses on individual moving objects, meta-tracking uses particles to follow the dominant flow of the traffic. In a last step, a novel method is used to simultaneously identify the main entry/exit areas and recover the predominant MPs. The meta-tracking procedure is a unique way to connect low-level motion features to long-range MPs. This kind of tracking is inspired by brain fiber tractography which has long been used to find dominant connections in the brain. Our method is fast, simple to implement, and works both on sparse and extremely crowded scenes. It also works on highly structured scenes (highways, traffic-light corners, etc.) as well as on chaotic scenes. <s> BIB013 </s> Crowded Scene Analysis: A Survey <s> A. Flow-Based Features <s> Flow segmentation based on similar motion patterns in crowded scenes remains an open problem in computer vision due to inherent complexity and vast diversity found in such scenes. To solve this problem, the streakline framework based on Lagrangian fluid dynamics was recently proposed. However, this framework computed the optical flow field using a conventional optical flow method (the Lucas-Kanade method), which has poor anti-interference performance and introduces serious deviations into the computed optical flow field. Moreover, our experimental results show that using the formulation of streak flow similarity in this framework can result in incorrect flow segmentation.
Therefore, we combine this framework with a highly accurate variational model, and modify the corresponding formulation of streak flow similarity after analyzing the streakline framework in detail. Finally, an improved method is proposed to solve flow segmentation in crowded scenes. Experiments are done to compare the two methods, and the results verify the validity and accuracy of our method. <s> BIB014
In the context of high-density crowded scenes, tracking a person or an object is always a difficult task, and sometimes infeasible. Fortunately, when we look at a crowd, we care about what is happening, not who is doing it. The specific actions of individual pedestrians may appear relatively random, but the overall look of the crowd can still be convincing BIB003. For that reason, several optical-flow-like features have been presented in recent years BIB010-BIB011, BIB009, BIB007-BIB014. These methods avoid individual tracking by working at the macroscopic level, and have achieved some success in addressing complex crowd flows in the scenes. 1) Optical Flow: Optical flow computes pixel-wise instantaneous motion between consecutive frames BIB004. It is robust to multiple and simultaneous camera and object motions, and it is widely used in crowd motion detection and segmentation, BIB013-BIB012, BIB007, BIB008. However, optical flow does not capture long-range temporal dependencies, and cannot represent spatial and temporal properties of a flow. These properties can be useful for many applications. 2) Particle Flow: Recently, based on the Lagrangian framework of fluid dynamics BIB005, the notion of particle flow was introduced into computer vision BIB010, BIB006, BIB009. Particle flow is computed by moving a grid of particles with the optical flow through numerical integration, providing trajectories that relate a particle's initial position to its position at a later time (a minimal sketch of this advection is given at the end of this subsection). Impressive results employing particle flow have been demonstrated on crowd segmentation BIB006 and abnormal crowd behavior detection BIB010, BIB009. However, particle flow ignores spatial changes, and its time delay is significant. 3) Streak Flow: In order to achieve an accurate representation of the flow from crowd motion, Mehran et al. BIB011 introduced the notion of streaklines to compute the motion field for crowd video scene analysis, referred to as streak flow. They also provided a comparison and discussion of optical flow, particle flow, and streak flow. Streaklines are well known in flow visualization BIB002 and fluid mechanics BIB001 as a tool for measurement and analysis of the flow. A streakline encapsulates motion information of the flow over a period of time. This resembles particle flow, where the advection of a grid of particles provides information for segmenting the crowd motion. Streak flow exhibits changes in the flow faster than particle flow, and therefore captures crowd motions better in a dynamically changing flow. Fig. 4 gives an example of the optical flow feature. It also shows a comparison of optical flow, particle flow and streak flow using a locally uniform flow field changing over time.
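As a concrete sketch of particle advection (referenced above), the code below moves a regular grid of particles with dense optical flow. It uses OpenCV's Farneback flow; the grid spacing, the nearest-neighbor flow sampling, and the forward Euler integration are choices of this sketch rather than details fixed by the cited methods.

```python
import cv2
import numpy as np

def advect_particles(frames, step=10):
    """Advect a regular grid of particles with dense optical flow.

    frames: list of grayscale uint8 images of shape (H, W).
    Returns particle trajectories as an array of shape (T, N, 2),
    relating each particle's initial position to its later positions.
    """
    h, w = frames[0].shape
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    traj = [pts.copy()]
    for prev, curr in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(
            prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        # Sample the flow at each particle's current (rounded) position.
        xi = np.clip(np.round(pts[:, 0]).astype(int), 0, w - 1)
        yi = np.clip(np.round(pts[:, 1]).astype(int), 0, h - 1)
        pts = pts + flow[yi, xi]          # forward Euler advection step
        traj.append(pts.copy())
    return np.stack(traj)
```

Clustering the resulting trajectories, or measuring how neighboring trajectories diverge (as in FTLE-based segmentation), is then performed on top of this representation.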
Crowded Scene Analysis: A Survey <s> B. Local Spatio-Temporal Features <s> Extremely crowded scenes present unique challenges to video analysis that cannot be addressed with conventional approaches. We present a novel statistical framework for modeling the local spatio-temporal motion pattern behavior of extremely crowded scenes. Our key insight is to exploit the dense activity of the crowded scene by modeling the rich motion patterns in local areas, effectively capturing the underlying intrinsic structure they form in the video. In other words, we model the motion variation of local space-time volumes and their spatial-temporal statistical behaviors to characterize the overall behavior of the scene. We demonstrate that by capturing the steady-state motion behavior with these spatio-temporal motion pattern models, we can naturally detect unusual activity as statistical deviations. Our experiments show that local spatio-temporal motion pattern modeling offers promising results in real-world scenes with complex activities that are hard for even human observers to analyze. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> B. Local Spatio-Temporal Features <s> Tracking pedestrians is a vital component of many computer vision applications, including surveillance, scene understanding, and behavior analysis. Videos of crowded scenes present significant challenges to tracking due to the large number of pedestrians and the frequent partial occlusions that they produce. The movement of each pedestrian, however, contributes to the overall crowd motion (i.e., the collective motions of the scene's constituents over the entire video) that exhibits an underlying spatially and temporally varying structured pattern. In this paper, we present a novel Bayesian framework for tracking pedestrians in videos of crowded scenes using a space-time model of the crowd motion. We represent the crowd motion with a collection of hidden Markov models trained on local spatio-temporal motion patterns, i.e., the motion patterns exhibited by pedestrians as they move through local space-time regions of the video. Using this unique representation, we predict the next local spatio-temporal motion pattern a tracked pedestrian will exhibit based on the observed frames of the video. We then use this prediction as a prior for tracking the movement of an individual in videos of extremely crowded scenes. We show that our approach of leveraging the crowd motion enables tracking in videos of complex scenes that present unique difficulty to other approaches. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> B. Local Spatio-Temporal Features <s> This paper presents a novel method to extract dominant motion patterns (MPs) and the main entry/exit areas from a surveillance video. The method first computes motion histograms for each pixel and then converts it into orientation distribution functions (ODFs). Given these ODFs, a novel particle meta-tracking procedure is launched which produces meta-tracks, i.e. particle trajectories. As opposed to conventional tracking which focuses on individual moving objects, meta-tracking uses particles to follow the dominant flow of the traffic. In a last step, a novel method is used to simultaneously identify the main entry/exit areas and recover the predominant MPs. The meta-tracking procedure is a unique way to connect low-level motion features to long-range MPs. This kind of tracking is inspired by brain fiber tractography which has long been used to find dominant connections in the brain. 
Our method is fast, simple to implement, and works both on sparse and extremely crowded scenes. It also works on highly structured scenes (highways, traffic-light corners, etc.) as well as on chaotic scenes. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> B. Local Spatio-Temporal Features <s> We propose to detect abnormal events via a sparse reconstruction over the normal bases. Given a collection of normal training examples, e.g., an image sequence or a collection of local spatio-temporal patches, we propose the sparse reconstruction cost (SRC) over the normal dictionary to measure the normalness of the testing sample. By introducing the prior weight of each basis during sparse reconstruction, the proposed SRC is more robust compared to other outlier detection criteria. To condense the over-completed normal bases into a compact dictionary, a novel dictionary selection method with group sparsity constraint is designed, which can be solved by standard convex optimization. Observing that the group sparsity also implies a low rank structure, we reformulate the problem using matrix decomposition, which can handle large scale training samples by reducing the memory requirement at each iteration from O(k^2) to O(k) where k is the number of samples. We use the columnwise coordinate descent to solve the matrix decomposition represented formulation, which empirically leads to a similar solution to the group sparsity formulation. By designing different types of spatio-temporal basis, our method can detect both local and global abnormal events. Meanwhile, as it does not rely on object detection and tracking, it can be applied to crowded video scenes. By updating the dictionary incrementally, our method can be easily extended to online event detection. Experiments on three benchmark datasets and the comparison to the state-of-the-art methods validate the advantages of our method. <s> BIB004
Some extremely crowded scenes, though similar in density, are less structured due to the high variability of pedestrian movements. The motion within each local area may be nonuniform and generated by any number of moving objects. In such circumstances, even a fine-grained representation, such as optical flow, would not provide enough motion information. One solution is to exploit the dense local motion patterns created by the subjects, and model their spatio-temporal relationships to represent the underlying intrinsic structure they form in the video BIB002. The related methods generally consider the motion as a whole, and characterize its spatio-temporal distributions based on local 2D patches or 3D cubes, using features such as spatio-temporal gradients BIB002, BIB001, and histogram functions BIB003, BIB004. 1) Spatio-Temporal Gradients: The distribution of spatio-temporal gradients has been utilized as the base representation BIB002, BIB001. For each pixel i in patch I, the spatio-temporal gradient $\nabla I_i$ is calculated as

$$\nabla I_i = \left(\frac{\partial I_i}{\partial x}, \frac{\partial I_i}{\partial y}, \frac{\partial I_i}{\partial t}\right)^T,$$

where x, y and t are the video's horizontal, vertical, and temporal dimensions, respectively. The 3D gradients of each pixel collectively represent the characteristic motion pattern within the patch (see the sketch at the end of this subsection). By capturing the steady-state motion behavior with the spatio-temporal motion pattern models, Kratz et al. BIB002, BIB001 demonstrated that unusual activities can naturally be detected as statistical deviations. 2) Motion Histogram: Motion histograms can be considered a kind of motion information defined on local regions. Fig. 5 illustrates the motion histograms calculated from three sample pixels. In its original form, the motion histogram is ill-suited for crowd motion analysis, since computing motion orientation from it is not only time-consuming but also error-prone due to the aligning problem. Therefore, researchers have developed more advanced features based on the motion histogram BIB003, BIB004; brief descriptions are given below. Jodoin et al. BIB003 proposed a feature called the orientation distribution function (ODF), the probability density function of a given motion orientation. As opposed to motion histograms, ODFs carry no information on the magnitude of the flow. This makes the ODF representation simpler (1D instead of 2D), which is a key computational advantage for the subsequent motion pattern learning. Cong et al. BIB004 proposed a novel feature descriptor called the multi-scale histogram of optical flow (MHOF), which preserves not only the motion information but also the spatial contextual information. After estimating the motion field by optical flow, they partitioned the image into basic units, i.e., 2D image patches or 3D spatio-temporal cubes, and extracted an MHOF from each unit; for event representation, the features of all types of bases with various spatial structures are concatenated into the MHOF. Overall, spatio-temporal features have shown particular promise in motion understanding due to their strong descriptive power, and therefore they have been widely used in various tasks such as crowd anomaly detection.
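As a sketch of the gradient-based representation described in 1) above, the following collects per-pixel 3D gradients for one space-time patch. The patch dimensions and the use of central differences via np.gradient are assumptions of this illustration, not the exact discretization of BIB002.

```python
import numpy as np

def patch_gradients(video, t0, y0, x0, length=8, size=16):
    """Collect spatio-temporal gradients for one space-time patch.

    video: (T, H, W) float array of gray values.
    Returns an (length*size*size, 3) array of (Ix, Iy, It) gradients:
    the raw observations from which a patch-level motion-pattern model
    (e.g., a distribution over gradient orientations) can be built.
    """
    cube = video[t0:t0 + length, y0:y0 + size, x0:x0 + size]
    It, Iy, Ix = np.gradient(cube)        # derivatives along t, y, x
    return np.stack([Ix.ravel(), Iy.ravel(), It.ravel()], axis=1)
```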
Crowded Scene Analysis: A Survey <s> (a) and (d). <s> This paper proposes a framework in which Lagrangian particle dynamics is used for the segmentation of high density crowd flows and detection of flow instabilities. For this purpose, a flow field generated by a moving crowd is treated as an aperiodic dynamical system. A grid of particles is overlaid on the flow field, and is advected using a numerical integration scheme. The evolution of particles through the flow is tracked using a flow map, whose spatial gradients are subsequently used to set up a Cauchy Green deformation tensor for quantifying the amount by which the neighboring particles have diverged over the length of the integration. The maximum eigenvalue of the tensor is used to construct a finite time Lyapunov exponent (FTLE) field, which reveals the Lagrangian coherent structures (LCS) present in the underlying flow. The LCS divide flow into regions of qualitatively different dynamics and are used to locate boundaries of the flow segments in a normalized cuts framework. Any change in the number of flow segments over time is regarded as an instability, which is detected by establishing correspondences between flow segments over time. The experiments are conducted on a challenging set of videos taken from Google Video and a National Geographic documentary. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> (a) and (d). <s> In surveillance situations, computer vision systems are often deployed to help humans perform their tasks more effectively. In a typical installation human observers are required to simultaneously monitor a number of video signals. Psychophysical research indicates that there are severe limitations in the ability of humans to monitor simultaneous signals. Do these same limitations extend to surveillance? We present a method for evaluating human surveillance performance in a situation that mimics demands of real world surveillance. A single computer monitor contained either nine display cells or four display cells. Each cell contained a stream of 2 to 4 moving objects. Observers were instructed to signal when a target event occurred -- when one of the objects entered a small square "forbidden" region in the center of the display. Target events could occur individually or in groups of 2 or 3 temporally close events. The results indicate that the observers missed many targets (60%) when required to monitor 9 displays and many fewer when monitoring 4 displays (20%). Further, there were costs associated with target events occurring in close temporal succession. Understanding these limitations would help computer vision researchers to design algorithms and human-machine interfaces that result in improved overall performance. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> (a) and (d). <s> In the year 1999 the world population reached 6 billion, doubling the previous census estimate of 1960. Recently, the United States Census Bureau issued a revised forecast for world population showing a projected growth to 9.4 billion by 2050 (US Census Bureau, http://www.census.gov/ipc/www/worldpop.html). Different research disciplines have studied the crowd phenomenon and its dynamics from a social, psychological and computational standpoint respectively. This paper presents a survey on crowd analysis methods employed in computer vision research and discusses perspectives from other research disciplines and how they can contribute to the computer vision approach.
<s> BIB003 </s> Crowded Scene Analysis: A Survey <s> (a) and (d). <s> Efficient analysis of human behavior in video surveillance scenes is a very challenging problem. Most traditional approaches fail when applied in real conditions and contexts like amounts of persons, appearance ambiguity, and occlusion. In this work, we propose to deal with this problem by modeling the global motion information obtained from optical flow vectors. The obtained direction and magnitude models learn the dominant motion orientations and magnitudes at each spatial location of the scene and are used to detect the major motion patterns. The applied region-based segmentation algorithm groups local blocks that share the same motion direction and speed and allows a subregion of the scene to appear in different patterns. The second part of the approach consists in the detection of events related to groups of people which are merge, split, walk, run, local dispersion, and evacuation by analyzing the instantaneous optical flow vectors and comparing the learned models. The approach is validated and experimented on standard datasets of the computer vision community. The qualitative and quantitative results are discussed. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> (a) and (d). <s> We present a novel method for the discovery and statistical representation of motion patterns in a scene observed by a static camera. Related methods involving learning of patterns of activity rely on trajectories obtained from object detection and tracking systems, which are unreliable in complex scenes of crowded motion. We propose a mixture model representation of salient patterns of optical flow, and present an algorithm for learning these patterns from dense optical flow in a hierarchical, unsupervised fashion. Using low level cues of noisy optical flow, K-means is employed to initialize a Gaussian mixture model for temporally segmented clips of video. The components of this mixture are then filtered and instances of motion patterns are computed using a simple motion model, by linking components across space and time. Motion patterns are then initialized and membership of instances in different motion patterns is established by using KL divergence between mixture distributions of pattern instances. Finally, a pixel level representation of motion patterns is proposed by deriving conditional expectation of optical flow. Results of extensive experiments are presented for multiple surveillance sequences containing numerous patterns involving both pedestrian and vehicular traffic. <s> BIB005 </s> Crowded Scene Analysis: A Survey <s> (a) and (d). <s> A novel method for crowd flow modeling and anomaly detection is proposed for both coherent and incoherent scenes. The novelty is revealed in three aspects. First, it is a unique utilization of particle trajectories for modeling crowded scenes, in which we propose new and efficient representative trajectories for modeling arbitrarily complicated crowd flows. Second, chaotic dynamics are introduced into the crowd context to characterize complicated crowd motions by regulating a set of chaotic invariant features, which are reliably computed and used for detecting anomalies. Third, a probabilistic framework for anomaly detection and localization is formulated. The overall work-flow begins with particle advection based on optical flow. Then particle trajectories are clustered to obtain representative trajectories for a crowd flow.
Next, the chaotic dynamics of all representative trajectories are extracted and quantified using chaotic invariants known as maximal Lyapunov exponent and correlation dimension. A probabilistic model is learned from this chaotic feature set, and finally, a maximum likelihood estimation criterion is adopted to identify a query video of a scene as normal or abnormal. Furthermore, an effective anomaly localization algorithm is designed to locate the position and size of an anomaly. Experiments are conducted on a known crowd data set, and results show that our method achieves higher accuracy in anomaly detection and can effectively localize anomalies. <s> BIB006
As a solution, non-tracking methods have been proposed. Similar to our method, particle advection methods have been shown to handle dense videos successfully BIB004 BIB002 BIB003 . However, some of these methods were designed to detect local anomalies BIB003 and thus are not suited to recovering long-range MPs. Alternatively, Ali and Shah BIB002 propose a scene segmentation method based on particle advection which can recover long-range MPs. Unfortunately, that method was not meant to deal with overlapping MPs (as in Fig. 2 (c) and (d)), since each pixel is assigned only one motion label. Solmaz et al. BIB004 use particles to recognize five crowd behaviors based on the eigenvalues of a flow-based Jacobian matrix. It is not clear, though, how this method could recover individual MPs, especially if their shape differs from the five predefined behaviors. Work by Hu, Ali and Shah BIB002 BIB005 BIB006 is, to our knowledge, the closest contribution to ours. Given flow vectors, they find motion paths with a tracking method which they call sink seeking. Sink seeking leads to sink paths which are then clustered into super tracks. However, since their method relies on pixel-based flow vectors, the tracking method cannot recover overlapping MPs, which are frequent in crossroads and Y-shaped roads (results are reported in section 4). From the same lab BIB001 , a scene segmentation method based on a motion flow field but without particles has been proposed, in which motion flow vectors are clustered with a hierarchical clustering based on a geodesic distance. In comparison, our method 1) computes a motion histogram at each pixel, 2) converts motion histograms into ODFs, 3) performs meta-tracking, and 4) clusters meta-tracks to recover MPs as well as the entry/exit points.
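For concreteness, the following sketch illustrates one plausible advection step of such a meta-tracking particle, assuming the per-pixel ODFs are stored as an H x W x B array of bin probabilities. The smoothness rule used here (favoring the ODF bin best aligned with the particle's previous heading, with an exponential weight lam) is an assumption of this sketch rather than the exact update criterion of the meta-tracking method.

```python
import numpy as np

def meta_track_step(pos, prev_dir, odf, bin_angles, step=1.0, lam=2.0):
    """One advection step of a meta-tracking particle.

    pos       : (x, y) current particle position
    prev_dir  : angle (rad) of the previous displacement
    odf       : H x W x B array of per-pixel orientation
                distribution functions (odf[y, x] sums to 1)
    bin_angles: B angles (rad), one per ODF bin
    """
    h, w, _ = odf.shape
    x = int(np.clip(round(pos[0]), 0, w - 1))
    y = int(np.clip(round(pos[1]), 0, h - 1))
    probs = odf[y, x]
    # Bias toward directions consistent with the previous heading;
    # the exponential weighting is an assumption of this sketch.
    align = np.cos(bin_angles - prev_dir)
    score = probs * np.exp(lam * align)
    theta = bin_angles[int(np.argmax(score))]
    new_pos = (pos[0] + step * np.cos(theta),
               pos[1] + step * np.sin(theta))
    return new_pos, theta
```

Iterating this step from seed points yields meta-tracks that follow the dominant traffic flow rather than any individual object.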
Crowded Scene Analysis: A Survey <s> Motion Histograms <s> We present an approach for online learning of discriminative appearance models for robust multi-target tracking in a crowded scene from a single camera. Although much progress has been made in developing methods for optimal data association, there has been comparatively less work on the appearance models, which are key elements for good performance. Many previous methods either use simple features such as color histograms, or focus on the discriminability between a target and the background which does not resolve ambiguities between the different targets. We propose an algorithm for learning a discriminative appearance model for different targets. Training samples are collected online from tracklets within a time sliding window based on some spatial-temporal constraints; this allows the models to adapt to target instances. Learning uses an AdaBoost algorithm that combines effective image descriptors and their corresponding similarity measurements. We term the learned models as OLDAMs. Our evaluations indicate that OLDAMs have significantly higher discrimination between different targets than conventional holistic color histograms, and when integrated into a hierarchical association framework, they help improve the tracking accuracy, particularly reducing the false alarms and identity switches. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> Motion Histograms <s> In this paper, a Random Field Topic (RFT) model is proposed for semantic region analysis from motions of objects in crowded scenes. Different from existing approaches of learning semantic regions either from optical flows or from complete trajectories, our model assumes that fragments of trajectories (called tracklets) are observed in crowded scenes. It advances the existing Latent Dirichlet Allocation topic model, by integrating the Markov random fields (MRF) as prior to enforce the spatial and temporal coherence between tracklets during the learning process. Two kinds of MRF, pairwise MRF and the forest of randomly spanning trees, are defined. Another contribution of this model is to include sources and sinks as high-level semantic prior, which effectively improves the learning of semantic regions and the clustering of tracklets. Experiments on a large scale data set, which includes 40,000+ tracklets collected from the crowded New York Grand Central station, show that our model outperforms state-of-the-art methods both on qualitative results of learning semantic regions and on quantitative results of clustering tracklets. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> Motion Histograms <s> This paper addresses the problem of multi-target tracking in crowded scenes from a single camera. We propose an algorithm for learning discriminative appearance models for different targets. These appearance models are based on covariance descriptor extracted from tracklets given by a short-term tracking algorithm. Short-term tracking relies on object descriptors tuned by a controller which copes with context variation over time. We link tracklets by using discriminative analysis on a Riemannian manifold. Our evaluation shows that by applying this discriminative analysis, we can reduce false alarms and identity switches, not only for tracking in a single camera but also for matching object appearances between non-overlapping cameras.
<s> BIB003 </s> Crowded Scene Analysis: A Survey <s> Motion Histograms <s> In this paper, a new Mixture model of Dynamic pedestrian-Agents (MDA) is proposed to learn the collective behavior patterns of pedestrians in crowded scenes. Collective behaviors characterize the intrinsic dynamics of the crowd. From the agent-based modeling, each pedestrian in the crowd is driven by a dynamic pedestrian-agent, which is a linear dynamic system with its initial and termination states reflecting a pedestrian's belief of the starting point and the destination. Then the whole crowd is modeled as a mixture of dynamic pedestrian-agents. Once the model is unsupervisedly learned from real data, MDA can simulate the crowd behaviors. Furthermore, MDA can well infer the past behaviors and predict the future behaviors of pedestrians given their trajectories only partially observed, and classify different pedestrian behaviors in the scene. The effectiveness of MDA and its applications are demonstrated by qualitative and quantitative experiments on the video surveillance dataset collected from the New York Grand Central Station. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> Motion Histograms <s> Crowded scene analysis is currently a hot and challenging topic in computer vision field. The ability to analyze motion patterns from videos is a difficult, but critical part of this problem. In this paper, we propose a novel approach for the analysis of motion patterns by clustering the tracklets using an unsupervised hierarchical clustering algorithm, where the similarity between tracklets is measured by the Longest Common Subsequences. The tracklets are obtained by tracking dense points under three effective rules, therefore enabling it to capture the motion patterns in crowded scenes. The analysis of motion patterns is implemented in a completely unsupervised way, and the tracklets are clustered automatically through hierarchical clustering algorithm based on a graphic model. To validate the performance of our approach, we conducted experimental evaluations on two datasets. The results reveal the precise distributions of motion patterns in current crowded videos and demonstrate the effectiveness of our approach. <s> BIB005 </s> Crowded Scene Analysis: A Survey <s> Motion Histograms <s> Crowded scene analysis is becoming increasingly popular in computer vision field. In this paper, we propose a novel approach to analyze motion patterns by clustering the hybrid generative-discriminative feature maps using unsupervised hierarchical clustering algorithm. The hybrid generative-discriminative feature maps are derived by posterior divergence based on the tracklets which are captured by tracking dense points with three effective rules. The feature maps effectively associate low-level features with the semantical motion patterns by exploiting the hidden information in crowded scenes. Motion pattern analyzing is implemented in a completely unsupervised way and the feature maps are clustered automatically through hierarchical clustering algorithm building on the basis of graphic model. The experiment results precisely reveal the distributions of motion patterns in current crowded videos and demonstrate the effectiveness of our approach. <s> BIB006
Consider I, a video with constant inter-frame interval, and (u_t, v_t) the optical flow at time t computed by a motion estimation method. In this work, we use the combined Horn/Schunck-Lucas/Kanade method proposed by Bruhn et al., which we found to be a good compromise between precision and speed. Although ODFs (to be defined later) could be obtained with tracklets as in , we empirically found that optical flow is much easier and faster to compute. Once optical flow has been computed for each frame, a motion histogram M_p(u, v) is computed at each pixel p. This histogram contains the number of times pixel p had its motion vector (u_{t,p}, v_{t,p}) equal to (u, v) during the entire video. Since M_p takes integer indices, motion vectors (u_{t,p}, v_{t,p}) are rounded to the nearest integer. Fig. 4 shows three pixel-based motion histograms. Compared with the other two types of feature representations, the trajectory/tracklet is more semantic and appears attractive. However, as mentioned previously, the traditional pipeline of object detection and subsequent tracking of those detections can hardly achieve accurate results as the density of the crowd increases and the scene clutter becomes severe BIB002 . Considering the difficulties in obtaining complete trajectories, a motion feature called the tracklet has been proposed. A tracklet is a fragment of a trajectory obtained by a tracker within a short period. Tracklets terminate when ambiguities caused by occlusions or scene clutters arise. They are more conservative and less likely to drift than long trajectories BIB002 . In previous works BIB003 - BIB001 , tracklets have mainly been connected into complete trajectories for tracking or human action recognition. Recently, several tracklet-based approaches for learning semantic regions and clustering trajectories BIB002 , BIB004 , BIB005 , BIB006 were proposed. In these approaches, tracklets are often extracted from dense feature points, and then a certain model is applied to enforce the spatial and temporal coherence between tracklets, to finally detect behavior patterns of pedestrians in crowded scenes.
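A minimal sketch of this accumulation step is given below, assuming OpenCV's Farneback optical flow as a stand-in for the Horn/Schunck-Lucas/Kanade method of Bruhn et al.; the clipping range max_mag is an illustrative parameter.

```python
import cv2
import numpy as np

def pixel_motion_histograms(frames, max_mag=8):
    """Accumulate the per-pixel motion histograms M_p(u, v).

    frames : list of grayscale uint8 images of identical size
    max_mag: flow components are clipped to [-max_mag, max_mag]

    Returns an (H, W, B, B) count array with B = 2 * max_mag + 1,
    where hist[y, x, v + max_mag, u + max_mag] is the number of
    frames in which pixel (x, y) had rounded flow vector (u, v).
    """
    h, w = frames[0].shape
    bins = 2 * max_mag + 1
    hist = np.zeros((h, w, bins, bins), dtype=np.int32)
    yy, xx = np.mgrid[0:h, 0:w]
    for prev, curr in zip(frames, frames[1:]):
        # Farneback flow is used here purely for convenience; the
        # source work uses the combined Horn/Schunck-Lucas/Kanade
        # method of Bruhn et al. instead.
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        u = np.clip(np.rint(flow[..., 0]), -max_mag, max_mag).astype(int)
        v = np.clip(np.rint(flow[..., 1]), -max_mag, max_mag).astype(int)
        np.add.at(hist, (yy, xx, v + max_mag, u + max_mag), 1)
    return hist
```

Normalizing the orientation marginal of each histogram would then give the per-pixel ODF used in the subsequent meta-tracking stage.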
Crowded Scene Analysis: A Survey <s> IV. CROWD MOTION PATTERN SEGMENTATION <s> Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> IV. CROWD MOTION PATTERN SEGMENTATION <s> Efficient analysis of human behavior in video surveillance scenes is a very challenging problem. Most traditional approaches fail when applied in real conditions and contexts like amounts of persons, appearance ambiguity, and occlusion. In this work, we propose to deal with this problem by modeling the global motion information obtained from optical flow vectors. The obtained direction and magnitude models learn the dominant motion orientations and magnitudes at each spatial location of the scene and are used to detect the major motion patterns. The applied region-based segmentation algorithm groups local blocks that share the same motion direction and speed and allows a subregion of the scene to appear in different patterns. The second part of the approach consists in the detection of events related to groups of people which are merge, split, walk, run, local dispersion, and evacuation by analyzing the instantaneous optical flow vectors and comparing the learned models. The approach is validated and experimented on standard datasets of the computer vision community. The qualitative and quantitative results are discussed. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> IV. CROWD MOTION PATTERN SEGMENTATION <s> We present a novel method for the discovery and statistical representation of motion patterns in a scene observed by a static camera. Related methods involving learning of patterns of activity rely on trajectories obtained from object detection and tracking systems, which are unreliable in complex scenes of crowded motion. We propose a mixture model representation of salient patterns of optical flow, and present an algorithm for learning these patterns from dense optical flow in a hierarchical, unsupervised fashion. Using low level cues of noisy optical flow, K-means is employed to initialize a Gaussian mixture model for temporally segmented clips of video.
The components of this mixture are then filtered and instances of motion patterns are computed using a simple motion model, by linking components across space and time. Motion patterns are then initialized and membership of instances in different motion patterns is established by using KL divergence between mixture distributions of pattern instances. Finally, a pixel level representation of motion patterns is proposed by deriving conditional expectation of optical flow. Results of extensive experiments are presented for multiple surveillance sequences containing numerous patterns involving both pedestrian and vehicular traffic. <s> BIB003
Motion pattern learning is important in automated visual surveillance BIB001 , , BIB002 . In crowded scene analysis, it is highly desirable to analyze the motion patterns and obtain a high-level interpretation. The term motion pattern here refers to a spatial region of the scene with a high degree of local similarity of speed and flow direction within the region, and low similarity outside it BIB003 . Motion patterns not only describe the spatial segmentation of the scene, but also reflect the motion tendency over a period. These patterns can be joint or disjoint in the image space. They usually have a semantic-level interpretation and contain sources and sinks of the paths they describe. To analyze motion patterns in crowded scenes, various methods have been proposed. According to the principle used to segment or cluster the motions, these methods can be divided into three categories: flow field model based segmentation, similarity based clustering, and probability model based clustering. The first category performs spatial segmentation of the image based on flow field models, and therefore tends to produce spatially continuous segments. The latter two categories utilize various well-developed clustering algorithms, usually based on local motion features, e.g., tracklets or motion video words. The resulting segments may be scattered, but they are applicable to unstructured scenes with complex motions.
Crowded Scene Analysis: A Survey <s> A. Flow Field Model Based Segmentation <s> We study an energy functional for computing optical flow that combines three assumptions: a brightness constancy assumption, a gradient constancy assumption, and a discontinuity-preserving spatio-temporal smoothness constraint. In order to allow for large displacements, linearisations in the two data terms are strictly avoided. We present a consistent numerical scheme based on two nested fixed point iterations. By proving that this scheme implements a coarse-to-fine warping strategy, we give a theoretical foundation for warping which has been used on a mainly experimental basis so far. Our evaluation demonstrates that the novel method gives significantly smaller angular errors than previous techniques for optical flow estimation. We show that it is fairly insensitive to parameter variations, and we demonstrate its excellent robustness under noise. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> A. Flow Field Model Based Segmentation <s> This paper develops the theory and computation of Lagrangian Coherent Structures (LCS), which are defined as ridges of Finite-Time Lyapunov Exponent (FTLE) fields. These ridges can be seen as finite-time mixing templates. Such a framework is common in dynamical systems theory for autonomous and time-periodic systems, in which examples of LCS are stable and unstable manifolds of fixed points and periodic orbits. The concepts defined in this paper remain applicable to flows with arbitrary time dependence and, in particular, to flows that are only defined (computed or measured) over a finite interval of time. Previous work has demonstrated the usefulness of FTLE fields and the associated LCSs for revealing the Lagrangian behavior of systems with general time dependence. However, ridges of the FTLE field need not be exactly advected with the flow. The main result of this paper is an estimate for the flux across an LCS, which shows that the flux is small, and in most cases negligible, for well-defined LCSs or those that rotate at a speed comparable to the local Eulerian velocity field, and are computed from FTLE fields with a sufficiently long integration time. Under these hypotheses, the structures represent nearly invariant manifolds even in systems with arbitrary time dependence. The results are illustrated on three examples. The first is a simplified dynamical model of a double-gyre flow. The second is surface current data collected by high-frequency radar stations along the coast of Florida and the third is unsteady separation over an airfoil. In all cases, the existence of LCSs governs the transport and it is verified numerically that the flux of particles through these distinguished lines is indeed negligible. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> A. Flow Field Model Based Segmentation <s> This paper proposes a framework in which Lagrangian particle dynamics is used for the segmentation of high density crowd flows and detection of flow instabilities. For this purpose, a flow field generated by a moving crowd is treated as an aperiodic dynamical system. A grid of particles is overlaid on the flow field, and is advected using a numerical integration scheme. The evolution of particles through the flow is tracked using a flow map, whose spatial gradients are subsequently used to set up a Cauchy Green deformation tensor for quantifying the amount by which the neighboring particles have diverged over the length of the integration.
The maximum eigenvalue of the tensor is used to construct a finite time Lyapunov exponent (FTLE) field, which reveals the Lagrangian coherent structures (LCS) present in the underlying flow. The LCS divide flow into regions of qualitatively different dynamics and are used to locate boundaries of the flow segments in a normalized cuts framework. Any change in the number of flow segments over time is regarded as an instability, which is detected by establishing correspondences between flow segments over time. The experiments are conducted on a challenging set of videos taken from Google Video and a National Geographic documentary. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> A. Flow Field Model Based Segmentation <s> Learning typical motion patterns or activities from videos of crowded scenes is an important visual surveillance problem. To detect typical motion patterns in crowded scenarios, we propose a new method which utilizes the instantaneous motions of a video, i.e, the motion flow field, instead of long-term motion tracks. The motion flow field is a union of independent flow vectors computed in different frames. Detecting motion patterns in this flow field can therefore be formulated as a clustering problem of the motion flow fields, where each motion pattern consists of a group of flow vectors participating in the same process or motion. We first construct a directed neighborhood graph to measure the closeness of flow vectors. A hierarchical agglomerative clustering algorithm is applied to group flow vectors into desired motion patterns. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> A. Flow Field Model Based Segmentation <s> Learning dominant motion patterns or activities from a video is an important surveillance problem, especially in crowded environments like markets, subways etc., where tracking of individual objects is hard if not impossible. In this paper, we propose an algorithm that uses instantaneous motion field of the video instead of long-term motion tracks for learning the motion patterns. The motion field is a collection of independent flow vectors detected in each frame of the video where each flow is vector is associated with a spatial location. A motion pattern is then defined as a group of flow vectors that are part of the same physical process or motion pattern. Algorithmically, this is accomplished by first detecting the representative modes (sinks) of the motion patterns, followed by construction of super tracks, which are the collective representation of the discovered motion patterns. We also use the super tracks for event-based video matching. The efficacy of the approach is demonstrated on challenging real-world sequences. <s> BIB005 </s> Crowded Scene Analysis: A Survey <s> A. Flow Field Model Based Segmentation <s> Computer vision algorithms have played a pivotal role in commercial video surveillance systems for a number of years. However, a common weakness among these systems is their inability to handle crowded scenes. In this thesis, we have developed algorithms that overcome some of the challenges encountered in videos of crowded environments such as sporting events, religious festivals, parades, concerts, train stations, airports, and malls. We adopt a top-down approach by first performing a global-level analysis that locates dynamically distinct crowd regions within the video. This knowledge is then employed in the detection of abnormal behaviors and tracking of individual targets within crowds. 
In addition, the thesis explores the utility of contextual information necessary for persistent tracking and re-acquisition of objects in crowded scenes. For the global-level analysis, a framework based on Lagrangian Particle Dynamics is proposed to segment the scene into dynamically distinct crowd regions or groupings. For this purpose, the spatial extent of the video is treated as a phase space of a time-dependent dynamical system in which transport from one region of the phase space to another is controlled by the optical flow. Next, a grid of particles is advected forward in time through the phase space using a numerical integration to generate a "flow map". The flow map relates the initial positions of particles to their final positions. The spatial gradients of the flow map are used to compute a Cauchy Green Deformation tensor that quantifies the amount by which the neighboring particles diverge over the length of the integration. The maximum eigenvalue of the tensor is used to construct a forward Finite Time Lyapunov Exponent (FTLE) field that reveals the Attracting Lagrangian Coherent Structures (LCS). The same process is repeated by advecting the particles backward in time to obtain a backward FTLE field that reveals the repelling LCS. The attracting and repelling LCS are the time dependent invariant manifolds of the phase space and correspond to the boundaries between dynamically distinct crowd flows. The forward and backward FTLE fields are combined to obtain one scalar field that is segmented using a watershed segmentation algorithm to obtain the labeling of distinct crowd-flow segments. Next, abnormal behaviors within the crowd are localized by detecting changes in the number of crowd-flow segments over time. Next, the global-level knowledge of the scene generated by the crowd-flow segmentation is used as an auxiliary source of information for tracking an individual target within a crowd. This is achieved by developing a scene structure-based force model. This force model captures the notion that an individual, when moving in a particular scene, is subjected to global and local forces that are functions of the layout of that scene and the locomotive behavior of other individuals in his or her vicinity. The key ingredients of the force model are three floor fields that are inspired by research in the field of evacuation dynamics; namely, Static Floor Field (SFF), Dynamic Floor Field (DFF), and Boundary Floor Field (BFF). These fields determine the probability of moving from one location to the next by converting the long-range forces into local forces. The SFF specifies regions of the scene that are attractive in nature, such as an exit location. The DFF, which is based on the idea of active walker models, corresponds to the virtual traces created by the movements of nearby individuals in the scene. The BFF specifies influences exhibited by the barriers within the scene, such as walls and no-entry areas. By combining influence from all three fields with the available appearance information, we are able to track individuals in high-density crowds. The results are reported on real-world sequences of marathons and railway stations that contain thousands of people. A comparative analysis with respect to an appearance-based mean shift tracker is also conducted by generating the ground truth. The result of this analysis demonstrates the benefit of using floor fields in crowded scenes.
The occurrence of occlusion is very frequent in crowded scenes due to a high number of interacting objects. To overcome this challenge, we propose an algorithm that has been developed to augment a generic tracking algorithm to perform persistent tracking in crowded environments. The algorithm exploits the contextual knowledge, which is divided into two categories consisting of motion context (MC) and appearance context (AC). The MC is a collection of trajectories that are representative of the motion of the occluded or unobserved object. These trajectories belong to other moving individuals in a given environment. The MC is constructed using a clustering scheme based on the Lyapunov Characteristic Exponent (LCE), which measures the mean exponential rate of convergence or divergence of the nearby trajectories in a given state space. Next, the MC is used to predict the location of the occluded or unobserved object in a regression framework. It is important to note that the LCE is used for measuring divergence between a pair of particles while the FTLE field is obtained by computing the LCE for a grid of particles. The appearance context (AC) of a target object consists of its own appearance history and appearance information of the other objects that are occluded. The intent is to make the appearance descriptor of the target object more discriminative with respect to other unobserved objects, thereby reducing the possible confusion between the unobserved objects upon re-acquisition. This is achieved by learning the distribution of the intra-class variation of each occluded object using all of its previous observations. In addition, a distribution of inter-class variation for each target-unobservable object pair is constructed. Finally, the re-acquisition decision is made using both the MC and the AC. <s> BIB006 </s> Crowded Scene Analysis: A Survey <s> A. Flow Field Model Based Segmentation <s> Based on the Lagrangian framework for fluid dynamics, a streakline representation of flow is presented to solve computer vision problems involving crowd and traffic flow. Streaklines are traced in a fluid flow by injecting color material, such as smoke or dye, which is transported with the flow and used for visualization. In the context of computer vision, streaklines may be used in a similar way to transport information about a scene, and they are obtained by repeatedly initializing a fixed grid of particles at each frame, then moving both current and past particles using optical flow. Streaklines are the locus of points that connect particles which originated from the same initial position. In this paper, a streakline technique is developed to compute several important aspects of a scene, such as flow and potential functions using the Helmholtz decomposition theorem. This leads to a representation of the flow that more accurately recognizes spatial and temporal changes in the scene, compared with other commonly used flow representations. Applications of the technique to segmentation and behavior analysis provide comparison to previously employed techniques, showing that the streakline method outperforms the state-of-the-art in segmentation, and opening a new domain of application for crowd analysis based on potentials. <s> BIB007 </s> Crowded Scene Analysis: A Survey <s> A. Flow Field Model Based Segmentation <s> In this paper, we propose a crowd motion partitioning approach based on local-translational motion approximation in a scattered motion field.
To represent crowd motion in an accurate and parsimonious way, we compute optical flow at the salient locations instead of at all the pixel locations. We then transform the problem of crowd motion partitioning into a problem of scattered motion field segmentation. Based on our assumption that local crowd motion can be approximated by a translational motion field, we develop a local-translation domain segmentation (LTDS) model in which the evolution of domain boundaries is derived from the Gateaux derivative of an objective functional and further extend LTDS to the case of scattered motion field. The experiment results on a set of synthetic vector fields and a set of videos depicting real-world crowd scenes indicate that the proposed approach is effective in identifying the homogeneous crowd motion components under different scenarios. <s> BIB008 </s> Crowded Scene Analysis: A Survey <s> A. Flow Field Model Based Segmentation <s> This paper presents a novel method to extract dominant motion patterns (MPs) and the main entry/exit areas from a surveillance video. The method first computes motion histograms for each pixel and then converts it into orientation distribution functions (ODFs). Given these ODFs, a novel particle meta-tracking procedure is launched which produces meta-tracks, i.e. particle trajectories. As opposed to conventional tracking which focuses on individual moving objects, meta-tracking uses particles to follow the dominant flow of the traffic. In a last step, a novel method is used to simultaneously identify the main entry/exit areas and recover the predominant MPs. The meta-tracking procedure is a unique way to connect low-level motion features to long-range MPs. This kind of tracking is inspired by brain fiber tractography which has long been used to find dominant connections in the brain. Our method is fast, simple to implement, and works both on sparse and extremely crowded scenes. It also works on highly structured scenes (highways, traffic-light corners, etc.) as well as on chaotic scenes. <s> BIB009 </s> Crowded Scene Analysis: A Survey <s> A. Flow Field Model Based Segmentation <s> Flow segmentation based on similar motion patterns in crowded scenes remains an open problem in computer vision due to inherent complexity and vast diversity found in such scenes. To solve this problem, the streakline framework based on Lagrangian fluid dynamics had been proposed recently. However, this framework computed optical flow field using conventional optical flow method (Lucas Kanade method) which has poor anti-interference performance, and serious deviation would be brought to the computation of optical flow field. Moreover, our experimental results show that using the formulation of streak flow similarity in this framework can result in incorrect flow segmentation. Therefore, we combine this framework with a high accurate variational model, and modify the corresponding formulation of streak flow similarity after analyzing the streakline framework in detail. Finally, an improved method is proposed to solve flow segmentation in crowded scenes. Experiments are done to compare these two methods and results verify the validity and accuracy of our method. <s> BIB010
Among the many physics-based models applied in crowd analysis, flow field models BIB003 , BIB007 , BIB009 , BIB004 - BIB010 , BIB008 are well studied in crowd motion pattern segmentation. By treating a moving crowd as a time-dependent flow field consisting of regions with qualitatively different dynamics, the motion patterns emerging from spatio-temporal interactions of the participants can be revealed. Based on this kind of representation, methods such as edge-based segmentation, graph-based segmentation, and watershed segmentation can be applied. Ali et al. BIB003 proposed Lagrangian particle dynamics to segment high-density crowd flows. To uncover the spatial organization of the flow field, clouds of particles generated by the crowd motion are examined. Then Lagrangian coherent structures (LCS) BIB002 are utilized to map out the boundaries of different crowd segments. Similar to edges in an image, the LCSs of flow data can be used to segment flow regions with different dynamics. The proposed method of BIB003 can reveal the underlying flow structures of the velocity field, and it is insensitive to the scene density. However, slow motions might not be segmented out, and low crowd density might cause over-segmentation. To detect typical motion patterns in crowded scenes, Hu et al. BIB004 constructed a directed neighborhood graph to measure the closeness of motion flow vectors, and then grouped them into motion patterns. Based on the same idea, Hu et al. BIB005 later developed a method to learn dominant motion patterns in videos. This is accomplished by first detecting the representative modes (sinks) of motion patterns, followed by construction of super tracks, i.e., collective representations of the discovered motion patterns. These methods do not require complete trajectories, avoiding the problem of occlusion, but they are not applicable to unstructured scenes, and the number of motion patterns needs to be predefined. Another approach is the local-translational domain segmentation (LTDS) model proposed by Wu et al. BIB008 . Local crowd motion is approximated as a translational motion field, and the evolution of domain boundaries is derived from the Gâteaux derivative of an objective functional. To represent crowd motion in an accurate and efficient way, optical flow is computed at salient locations instead of at all pixel locations. The problem of crowd motion partitioning is then transformed into scattered motion field segmentation. This method can automatically determine the number of groups and can be applied to both medium- and high-density crowded scenes. As introduced previously in section III, the streakline framework BIB007 can recognize spatio-temporal flow changes more quickly than other methods. However, this framework computes the optical flow field using a conventional method, which has poor robustness to interference and introduces serious deviations into the computed flow field. In BIB010 , Wang et al. improved the streakline framework with a highly accurate variational model BIB001 . Different motion patterns are separated in crowded scenes by computing the similarity of streaklines and streak flows using watershed segmentation. This method strikes a balance between recognizing local spatial changes and filling spatial gaps in the flow. It must be noted, however, that the streak flow computation is vulnerable to disturbance, which may result in incorrect segmentation.
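To make the particle advection pipeline concrete, the following NumPy sketch advects a particle grid through a sequence of dense flow fields and computes the forward FTLE field from the flow-map gradients, as described for BIB003 . Forward Euler integration with nearest-neighbor flow sampling is a simplifying assumption of this sketch; the original work uses a more careful numerical integration scheme.

```python
import numpy as np

def ftle_field(flows, dt=1.0):
    """Forward FTLE field from a sequence of dense optical flows.

    flows: (T, H, W, 2) per-frame flow fields (u, v) in pixels/frame
    Returns an (H, W) finite-time Lyapunov exponent map whose ridges
    approximate the Lagrangian coherent structures separating
    dynamically distinct crowd flows.
    """
    T, h, w, _ = flows.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    px, py = xs.copy(), ys.copy()              # particle grid
    for t in range(T):                          # Euler advection
        xi = np.clip(np.rint(px), 0, w - 1).astype(int)
        yi = np.clip(np.rint(py), 0, h - 1).astype(int)
        px += dt * flows[t, yi, xi, 0]
        py += dt * flows[t, yi, xi, 1]
    # Spatial gradients of the flow map w.r.t. the start positions
    dpx_dy, dpx_dx = np.gradient(px)
    dpy_dy, dpy_dx = np.gradient(py)
    ftle = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            J = np.array([[dpx_dx[i, j], dpx_dy[i, j]],
                          [dpy_dx[i, j], dpy_dy[i, j]]])
            C = J.T @ J                         # Cauchy-Green tensor
            lam_max = np.linalg.eigvalsh(C)[-1]
            ftle[i, j] = np.log(max(lam_max, 1e-12)) / (2.0 * T * dt)
    return ftle
```

The resulting scalar field can then be thresholded or watershed-segmented so that its ridges delimit the crowd-flow segments.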
Flow field model based segmentation has shown success in handling high-density scenes with complex crowd flows. However, optical-flow-like features are designed to detect local changes, not to recover long-range motion patterns. Alternatively, the particle flow methods used in BIB007 , BIB009 , BIB006 treat the crowd as a single entity, and ignore the spatial changes in coherent motion patterns. Another disadvantage of the methods in BIB006 , BIB004 , BIB005 is that they cannot handle overlapping motion patterns, since each pixel is assigned only one motion label; this is often not the case in unstructured scenes. In addition, as the crowd density decreases, flow field models no longer work well and tend to over-segment the video scene.
Crowded Scene Analysis: A Survey <s> B. Similarity Based Clustering <s> We start by defining conventions and terminology that will be used throughout this paper. String C = c1c2...cp is a subsequence of string A = a1a2...am if there is a mapping F: {1, 2, ..., p} → {1, 2, ..., m} such that F(i) = k only if ci = ak and F is a monotone strictly increasing function (i.e. F(i) = u, F(j) = v, and i < j imply that u < v). C can be formed by deleting m − p (not necessarily adjacent) symbols from A. For example, "course" is a subsequence of "computer science." String C is a common subsequence of strings A and B if C is a subsequence of A and also a subsequence of B. String C is a longest common subsequence (abbreviated LCS) of strings A and B if C is a common subsequence of A and B of maximal length, i.e. there is no common subsequence of A and B that has greater length. Throughout this paper, we assume that A and B are strings of lengths m and n, m ≤ n, that have an LCS C of (unknown) length p. We assume that the symbols that may appear in these strings come from some alphabet of size t. A symbol can be stored in memory by using log t bits, which we assume will fit in one word of memory. Symbols can be compared (a ≤ b?) in one time unit. The number of different symbols that actually appear in string B is defined to be s (which must be less than n and t). The longest common subsequence problem has been solved by using a recursion relationship on the length of the solution [7, 12, 16, 21]. These are generally applicable algorithms that take O(mn) time for any input strings of lengths m and n even though the lower bound on time of O(mn) need not apply to all inputs [2]. We present algorithms that, depending on the nature of the input, may not require quadratic time to recover an LCS. The first algorithm is applicable in the general case and requires O(pn + n log n) time. The second algorithm requires time bounded by O((m + 1 − p)p log n). In the common special case where p is close to m, this algorithm takes time <s> BIB001 </s> We discuss the problem of detecting dominant motions in dense crowds, a challenging and societally important problem. First, we survey the general literature of computer vision algorithms that deal with crowds of people, including model- and feature-based approaches to segmentation and tracking as well as algorithms that analyze general motion trends. Second, we present a system for automatically identifying dominant motions in a crowded scene. Accurately tracking individual objects in such scenes is difficult due to inter- and intra-object occlusions that cannot be easily resolved. Our approach begins by independently tracking low-level features using optical flow. While many of the feature point tracks are unreliable, we show that they can be clustered into smooth dominant motions using a distance measure for feature trajectories based on longest common subsequences. Results on real video sequences demonstrate that the approach can successfully identify both dominant and anomalous motions in crowded scenes. These fully-automatic algorithms could be easily incorporated into distributed camera networks for autonomous scene analysis. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> B.
Similarity Based Clustering <s> We propose an unsupervised learning framework to infer motion patterns in videos and in turn use them to improve tracking of moving objects in sequences from static cameras. Based on tracklets, we use a manifold learning method Tensor Voting to infer the local geometric structures in (x, y) space, and embed tracklet points into (x, y, θ) space, where θ represents motion direction. In this space, points automatically form intrinsic manifold structures, each of which corresponds to a motion pattern. To define each group, a novel robust manifold grouping algorithm is proposed. Tensor Voting is performed to provide multiple geometric cues which formulate multiple similarity kernels between any pair of points, and a spectral clustering technique is used in this multiple kernel setting. The grouping algorithm achieves better performance than state-of-the-art methods in our applications. Extracted motion patterns can then be used as a prior to improve the performance of any object tracker. It is especially useful to reduce false alarms and ID switches. Experiments are performed on challenging real-world sequences, and a quantitative analysis of the results shows the framework effectively improves state-of-the-art tracker. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> B. Similarity Based Clustering <s> Coherent motions, which describe the collective movements of individuals in crowd, widely exist in physical and biological systems. Understanding their underlying priors and detecting various coherent motion patterns from background clutters have both scientific values and a wide range of practical applications, especially for crowd motion analysis. In this paper, we propose and study a prior of coherent motion called Coherent Neighbor Invariance, which characterizes the local spatiotemporal relationships of individuals in coherent motion. Based on the coherent neighbor invariance, a general technique of detecting coherent motion patterns from noisy time-series data called Coherent Filtering is proposed. It can be effectively applied to data with different distributions at different scales in various real-world problems, where the environments could be sparse or extremely crowded with heavy noise. Experimental evaluation and comparison on synthetic and real data show the existence of Coherence Neighbor Invariance and the effectiveness of our Coherent Filtering. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> B. Similarity Based Clustering <s> Crowded scene analysis is currently a hot and challenging topic in computer vision field. The ability to analyze motion patterns from videos is a difficult, but critical part of this problem. In this paper, we propose a novel approach for the analysis of motion patterns by clustering the tracklets using an unsupervised hierarchical clustering algorithm, where the similarity between tracklets is measured by the Longest Common Subsequences. The tracklets are obtained by tracking dense points under three effective rules, therefore enabling it to capture the motion patterns in crowded scenes. The analysis of motion patterns is implemented in a completely unsupervised way, and the tracklets are clustered automatically through hierarchical clustering algorithm based on a graphic model. To validate the performance of our approach, we conducted experimental evaluations on two datasets. The results reveal the precise distributions of motion patterns in current crowded videos and demonstrate the effectiveness of our approach.
<s> BIB005 </s> Crowded Scene Analysis: A Survey <s> B. Similarity Based Clustering <s> This paper presents a novel method to extract dominant motion patterns (MPs) and the main entry/exit areas from a surveillance video. The method first computes motion histograms for each pixel and then converts it into orientation distribution functions (ODFs). Given these ODFs, a novel particle meta-tracking procedure is launched which produces meta-tracks, i.e. particle trajectories. As opposed to conventional tracking which focuses on individual moving objects, meta-tracking uses particles to follow the dominant flow of the traffic. In a last step, a novel method is used to simultaneously identify the main entry/exit areas and recover the predominant MPs. The meta-tracking procedure is a unique way to connect low-level motion features to long-range MPs. This kind of tracking is inspired by brain fiber tractography which has long been used to find dominant connections in the brain. Our method is fast, simple to implement, and works both on sparse and extremely crowded scenes. It also works on highly structured scenes (highways, traffic-light corners, etc.) as well as on chaotic scenes. <s> BIB006
In this kind of method, motion pattern segmentation is treated as a clustering problem: once motion features are detected and extracted, they are grouped into similar categories through some similarity measurement. Then, the semantic regions are estimated from the spatial extents of the trajectory/tracklet clusters. Detailed descriptions follow. Cheriyadat et al. BIB002 used a distance measure for feature trajectories based on the longest common subsequence (LCSS) BIB001 . The method begins by independently tracking low-level features using optical flow, and then clusters these tracks into smooth dominant motions. It has a great speed advantage in measuring the similarity between all pairs of tracks, but the clustering parameters need to be fine-tuned for different situations, and the feature point tracking may suffer from noise. Based on tracklets, Zhao et al. BIB003 used a manifold learning method to infer the local geometric structures in image space, and in turn the motion patterns in videos. They embedded tracklet points into (x, y, θ) space, where (x, y) stands for the image space and θ represents the motion direction. In this space, points automatically form intrinsic manifold structures, each corresponding to a motion pattern. Also based on tracklets, Zhou et al. BIB004 proposed a general technique for detecting coherent motion patterns in noisy time-series data, named coherent filtering. When applying this technique to coherent motion detection in crowds, tracklets are first extracted by the Kanade-Lucas-Tomasi (KLT) keypoint tracker. Then a similarity measure called coherent neighbor invariance is used to characterize these tracklets and cluster them into different motion patterns. An approach similar to BIB004 was proposed by Wang et al. BIB005 to analyze motion patterns from tracklets in dynamic crowded scenes. Tracklets are collected by tracking dense feature points in the video, and motion patterns are then learned by clustering the tracklets. In Jodoin et al. BIB006 , a meta-tracking method has been proposed to extract the dominant motion patterns and the main entry/exit areas from a surveillance video. This method relies on pixel-based orientation distribution functions (ODFs), which summarize the directions of the flows at each point of the scene. Once all pixels have been assigned ODFs, particle trajectories, called "meta-tracks", are computed through an iterative algorithm. Finally, based on a hierarchical clustering method, the nearest meta-tracks are merged together to form the motion patterns. Compared with trajectories, local motion features are insensitive to scene clutter and tracking errors. Through a clustering or linking process, tracklets or optical flow vectors with common features can be properly grouped, resulting in different semantic regions. Another advantage is that local feature clustering can be used for both structured and unstructured crowded scenes, since even mutually overlapping local motion features can be well separated in the learning process.
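For reference, the following is a small self-contained implementation of the common LCSS similarity for trajectories, in the spirit of BIB002 , BIB001 ; the spatial threshold eps and the temporal offset bound delta are illustrative parameters, and the exact matching criterion of the original method may differ.

```python
def lcss_similarity(traj_a, traj_b, eps=5.0, delta=10):
    """Longest-common-subsequence similarity between two trajectories.

    traj_a, traj_b: lists of (x, y) points
    eps  : spatial matching threshold (pixels)
    delta: maximum allowed index offset between matched points
    Returns the LCSS length normalized by the shorter trajectory
    length, a value in [0, 1].
    """
    n, m = len(traj_a), len(traj_b)
    if n == 0 or m == 0:
        return 0.0
    # Standard O(n*m) dynamic program over point pairs
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            (xa, ya), (xb, yb) = traj_a[i - 1], traj_b[j - 1]
            close = abs(xa - xb) <= eps and abs(ya - yb) <= eps
            if close and abs(i - j) <= delta:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m] / min(n, m)
```

A distance such as 1 - lcss_similarity(a, b) can then drive an agglomerative clustering of tracks into dominant motions.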
Crowded Scene Analysis: A Survey <s> C. Probability Model Based Clustering <s> We provide evidence that nonlinear dimensionality reduction, clustering, and data set parameterization can be solved within one and the same framework. The main idea is to define a system of coordinates with an explicit metric that reflects the connectivity of a given data set and that is robust to noise. Our construction, which is based on a Markov random walk on the data, offers a general scheme of simultaneously reorganizing and subsampling graphs and arbitrarily shaped data sets in high dimensions using intrinsic geometry. We show that clustering in embedding spaces is equivalent to compressing operators. The objective of data partitioning and clustering is to coarse-grain the random walk on the data while at the same time preserving a diffusion operator for the intrinsic geometry or connectivity of the data set up to some accuracy. We show that the quantization distortion in diffusion space bounds the error of compression of the operator, thus giving a rigorous justification for k-means clustering in diffusion space and a precise measure of the performance of general clustering algorithms. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> C. Probability Model Based Clustering <s> We present a novel method for the discovery and statistical representation of motion patterns in a scene observed by a static camera. Related methods involving learning of patterns of activity rely on trajectories obtained from object detection and tracking systems, which are unreliable in complex scenes of crowded motion. We propose a mixture model representation of salient patterns of optical flow, and present an algorithm for learning these patterns from dense optical flow in a hierarchical, unsupervised fashion. Using low level cues of noisy optical flow, K-means is employed to initialize a Gaussian mixture model for temporally segmented clips of video. The components of this mixture are then filtered and instances of motion patterns are computed using a simple motion model, by linking components across space and time. Motion patterns are then initialized and membership of instances in different motion patterns is established by using KL divergence between mixture distributions of pattern instances. Finally, a pixel level representation of motion patterns is proposed by deriving conditional expectation of optical flow. Results of extensive experiments are presented for multiple surveillance sequences containing numerous patterns involving both pedestrian and vehicular traffic. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> C. Probability Model Based Clustering <s> In this paper, a Random Field Topic (RFT) model is proposed for semantic region analysis from motions of objects in crowded scenes. Different from existing approaches of learning semantic regions either from optical flows or from complete trajectories, our model assumes that fragments of trajectories (called tracklets) are observed in crowded scenes. It advances the existing Latent Dirichlet Allocation topic model, by integrating the Markov random fields (MRF) as prior to enforce the spatial and temporal coherence between tracklets during the learning process. Two kinds of MRF, pairwise MRF and the forest of randomly spanning trees, are defined. Another contribution of this model is to include sources and sinks as high-level semantic prior, which effectively improves the learning of semantic regions and the clustering of tracklets.
Experiments on a large scale data set, which includes 40, 000+ tracklets collected from the crowded New York Grand Central station, show that our model outperforms state-of-the-art methods both on qualitative results of learning semantic regions and on quantitative results of clustering tracklets. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> C. Probability Model Based Clustering <s> Our work addresses the problem of analyzing and understanding dynamic video scenes. A two-level motion pattern mining approach is proposed. At the first level, single-agent motion patterns are modeled as distributions over pixel-based features. At the second level, interaction patterns are modeled as distributions over single-agent motion patterns. Both patterns are shared among video clips. Compared to other works, the advantage of our method is that interaction patterns are detected and assigned to every video frame. This enables a finer semantic interpretation and more precise anomaly detection. Specifically, every video frame is labeled by a certain interaction pattern and moving pixels in each frame which do not belong to any singleagent pattern or cannot exist in the corresponding interaction pattern are detected as anomalies. We have tested our approach on a challenging traffic surveillance sequence containing both pedestrian and vehicular motions and obtained promising results. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> C. Probability Model Based Clustering <s> In this paper, a new Mixture model of Dynamic pedestrian-Agents (MDA) is proposed to learn the collective behavior patterns of pedestrians in crowded scenes. Collective behaviors characterize the intrinsic dynamics of the crowd. From the agent-based modeling, each pedestrian in the crowd is driven by a dynamic pedestrian-agent, which is a linear dynamic system with its initial and termination states reflecting a pedestrian's belief of the starting point and the destination. Then the whole crowd is modeled as a mixture of dynamic pedestrian-agents. Once the model is unsupervisedly learned from real data, MDA can simulate the crowd behaviors. Furthermore, MDA can well infer the past behaviors and predict the future behaviors of pedestrians given their trajectories only partially observed, and classify different pedestrian behaviors in the scene. The effectiveness of MDA and its applications are demonstrated by qualitative and quantitative experiments on the video surveillance dataset collected from the New York Grand Central Station. <s> BIB005 </s> Crowded Scene Analysis: A Survey <s> C. Probability Model Based Clustering <s> With the proliferation of cameras in public areas, it becomes increasingly desirable to develop fully automated surveillance and monitoring systems. In this paper, we propose a novel unsupervised approach to automatically explore motion patterns occurring in dynamic scenes under an improved sparse topical coding (STC) framework. Given an input video with a fixed camera, we first segment the whole video into a sequence of clips (documents) without overlapping. Optical flow features are extracted from each pair of consecutive frames, and quantized into discrete visual words. Then the video is represented by a word-document hierarchical topic model through a generative process. Finally, an improved sparse topical coding approach is proposed for model learning. 
The semantic motion patterns (latent topics) are learned automatically and each video clip is represented as a weighted summation of these patterns with only a few nonzero coefficients. The proposed approach is purely data-driven and scene independent (not an object-class specific), which make it suitable for very large range of scenarios. Experiments demonstrate that our approach outperforms the state-of-the art technologies in dynamic scene analysis. <s> BIB006
Having been widely utilized in visual clustering, Bayesian probability models can also be adopted here for crowd motion pattern segmentation. The low-level motion features to be clustered are fitted with the designed models. Popular models in the vision area, such as the Gaussian mixture model (GMM), the random field topic (RFT) model, and latent Dirichlet allocation (LDA), have been applied. In contrast to simple optical flow averaging methods, a probability model allows long-term analysis of a scene. Moreover, it can capture both the overlapping behaviors at any given location in a scene and the spatial dependencies between behaviors. Finally, a statistical model can incorporate a priori knowledge on where, when, and what types of activities occur.

Yang et al. proposed a novel method to automatically discover key motion patterns in a scene by observing the scene for an extended period. First, low-level motion features are extracted by computing optical flow. These motion features are then quantized into video words based on their direction and location. Next, some video words are screened out based on the entropy of a given word over all clips. The key motion patterns are discovered using diffusion maps embedding BIB001 and clustering. For the same purpose, Saleemi et al. BIB002 introduced a statistical model for motion pattern representation based on raw optical flow. The method is based on hierarchical, problem-specific learning. A GMM is exploited as a co-occurrence-free measure of spatio-temporal proximity and flow similarity between features. Finally, a pixel-level representation of motion patterns is obtained by deriving the conditional expectation of optical flow.

The RFT model has been applied to semantic region analysis in crowded scenes by Zhou et al. BIB003 , BIB005 , based on the motions of objects. In this approach, a tracklet is treated as a document, and observations (points) on tracklets are quantized into words according to a codebook based on their locations and velocity directions. In addition, a Markov random field (MRF) is used as a prior to enforce the spatial and temporal coherence between tracklets during the learning process. The MRF model encourages tracklets that are spatially and temporally close to have similar distributions over semantic regions. Each semantic region has its preferred source and sink; therefore, activities observed in the same semantic region have similar semantic interpretations. The LDA model has also been adopted BIB004 . Assuming that the motion patterns involved in a complex dynamic scene usually have a hierarchical nature, a two-level motion pattern mining approach has been proposed. At the first level, single-agent motion patterns are modeled as distributions over pixel-based features. At the second level, interaction patterns are modeled as distributions over single-agent motion patterns. LDA is then applied to discover both single-agent motion patterns and interaction patterns in the video. Moreover, Fu et al. BIB006 extracted optical flow features from each pair of consecutive frames and quantized them into discrete visual words. The video is represented by a word-document hierarchical topic model through a generative process, and an improved sparse topical coding approach is used for model learning.

An advantage of probability models is that they can provide much more compact representations than directly clustering the high-dimensional motion feature vectors computed from video clips.
Furthermore, they model the spatio-temporal interrelationships among different events at the global scene level, which facilitates crowd behavior understanding.
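As a concrete illustration of the shared word-document pipeline, the sketch below quantizes dense optical flow into visual words by spatial cell and direction bin, builds one bag-of-words histogram per clip, and fits an off-the-shelf LDA model. This is a generic sketch rather than the exact model of BIB003 or BIB006 ; the grid size, number of direction bins, topic count, and the use of scikit-learn's LatentDirichletAllocation are all assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

GRID, BINS = 16, 8          # 16x16 spatial cells, 8 direction bins (assumed)

def flow_to_words(flow, h, w):
    """Quantize a dense flow field (h, w, 2) into visual-word indices."""
    ys, xs = np.mgrid[0:h, 0:w]
    mag = np.linalg.norm(flow, axis=2)
    moving = mag > 1.0                             # ignore near-static pixels
    ang = np.arctan2(flow[..., 1], flow[..., 0])   # direction in [-pi, pi]
    dir_bin = ((ang + np.pi) / (2 * np.pi) * BINS).astype(int) % BINS
    cell = (ys * GRID // h) * GRID + (xs * GRID // w)
    return (cell * BINS + dir_bin)[moving]         # one word id per moving pixel

def clip_histogram(flows, h, w):
    """Bag-of-words histogram (a 'document') for one video clip."""
    hist = np.zeros(GRID * GRID * BINS)
    for flow in flows:                             # flows: per-frame fields
        np.add.at(hist, flow_to_words(flow, h, w), 1)
    return hist

# docs: one histogram per clip; latent topics ~ motion patterns
# lda = LatentDirichletAllocation(n_components=10).fit(np.stack(docs))
```

Each learned topic is then a distribution over location-direction words and can be visualized as a motion pattern map.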
Crowded Scene Analysis: A Survey <s> D. Experiments <s> We investigate techniques for analysis and retrieval of object trajectories in two or three dimensional space. Such data usually contain a large amount of noise, that has made previously used metrics fail. Therefore, we formalize non-metric similarity functions based on the longest common subsequence (LCSS), which are very robust to noise and furthermore provide an intuitive notion of similarity between trajectories by giving more weight to similar portions of the sequences. Stretching of sequences in time is allowed, as well as global translation of the sequences in space. Efficient approximate algorithms that compute these similarity measures are also provided. We compare these new methods to the widely used Euclidean and time warping distance functions (for real and synthetic data) and show the superiority of our approach, especially in the strong presence of noise. We prove a weaker version of the triangle inequality and employ it in an indexing structure to answer nearest neighbor queries. Finally, we present experimental results that validate the accuracy and efficiency of our approach. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> D. Experiments <s> This paper proposes a framework in which Lagrangian particle dynamics is used for the segmentation of high density crowd flows and detection of flow instabilities. For this purpose, a flow field generated by a moving crowd is treated as an aperiodic dynamical system. A grid of particles is overlaid on the flow field, and is advected using a numerical integration scheme. The evolution of particles through the flow is tracked using a flow map, whose spatial gradients are subsequently used to setup a Cauchy Green deformation tensor for quantifying the amount by which the neighboring particles have diverged over the length of the integration. The maximum eigenvalue of the tensor is used to construct a finite time Lyapunov exponent (FTLE) field, which reveals the Lagrangian coherent structures (LCS) present in the underlying flow. The LCS divide flow into regions of qualitatively different dynamics and are used to locate boundaries of the flow segments in a normalized cuts framework. Any change in the number of flow segments over time is regarded as an instability, which is detected by establishing correspondences between flow segments over time. The experiments are conducted on a challenging set of videos taken from Google Video and a National Geographic documentary. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> D. Experiments <s> We discuss the problem of detecting dominant motions in dense crowds, a challenging and societally important problem. First, we survey the general literature of computer vision algorithms that deal with crowds of people, including model- and feature-based approaches to segmentation and tracking as well as algorithms that analyze general motion trends. Second, we present a system for automatically identifying dominant motions in a crowded scene. Accurately tracking individual objects in such scenes is difficult due to inter- and intra-object occlusions that cannot be easily resolved. Our approach begins by independently tracking low-level features using optical flow. While many of the feature point tracks are unreliable, we show that they can be clustered into smooth dominant motions using a distance measure for feature trajectories based on longest common subsequences. 
Results on real video sequences demonstrate that the approach can successfully identify both dominant and anomalous motions in crowded scenes. These fully-automatic algorithms could be easily incorporated into distributed camera networks for autonomous scene analysis. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> D. Experiments <s> Based on the Lagrangian framework for fluid dynamics, a streakline representation of flowis presented to solve computer vision problems involving crowd and traffic flow. Streaklines are traced in a fluid flow by injecting color material, such as smoke or dye, which is transported with the flow and used for visualization. In the context of computer vision, streaklines may be used in a similar way to transport information about a scene, and they are obtained by repeatedly initializing a fixed grid of particles at each frame, then moving both current and past particles using optical flow. Streaklines are the locus of points that connect particles which originated from the same initial position. In this paper, a streakline technique is developed to compute several important aspects of a scene, such as flow and potential functions using the Helmholtz decomposition theorem. This leads to a representation of the flow that more accurately recognizes spatial and temporal changes in the scene, compared with other commonly used flow representations. Applications of the technique to segmentation and behavior analysis provide comparison to previously employed techniques, showing that the streakline method outperforms the state-of-theart in segmentation, and opening a new domain of application for crowd analysis based on potentials. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> D. Experiments <s> In this paper, a Random Field Topic (RFT) model is proposed for semantic region analysis from motions of objects in crowded scenes. Different from existing approaches of learning semantic regions either from optical flows or from complete trajectories, our model assumes that fragments of trajectories (called tracklets) are observed in crowded scenes. It advances the existing Latent Dirichlet Allocation topic model, by integrating the Markov random fields (MR-F) as prior to enforce the spatial and temporal coherence between tracklets during the learning process. Two kinds of MRF, pairwise MRF and the forest of randomly spanning trees, are defined. Another contribution of this model is to include sources and sinks as high-level semantic prior, which effectively improves the learning of semantic regions and the clustering of tracklets. Experiments on a large scale data set, which includes 40, 000+ tracklets collected from the crowded New York Grand Central station, show that our model outperforms state-of-the-art methods both on qualitative results of learning semantic regions and on quantitative results of clustering tracklets. <s> BIB005 </s> Crowded Scene Analysis: A Survey <s> D. Experiments <s> Video surveillance is always a hot topic in computer vision. With the public safe issue received more and more attention, analysis for crowd motion is becoming significant, and detecting motion patterns or activities in crowded scenes from videos is one of the major problem in crowd analysis. This paper proposes a new method for learning the motion patterns in crowded scenes. We add the direction information to the motion vectors, and cluster the data by a density based clustering. 
We extract the feature points using KLT corner extractor and track them to obtain basic motion information by optical flow techniques. All the motion information in different frames forms the motion flow field. Improved DBSCAN method is used to divide the motion flow field into different patterns. The result of the system is given as a graph with groups of vectors. The experiment result in real-world videos is presented to demonstrate our approach. <s> BIB006 </s> Crowded Scene Analysis: A Survey <s> D. Experiments <s> Coherent motions, which describe the collective movements of individuals in crowd, widely exist in physical and biological systems. Understanding their underlying priors and detecting various coherent motion patterns from background clutters have both scientific values and a wide range of practical applications, especially for crowd motion analysis. In this paper, we propose and study a prior of coherent motion called Coherent Neighbor Invariance, which characterizes the local spatiotemporal relationships of individuals in coherent motion. Based on the coherent neighbor invariance, a general technique of detecting coherent motion patterns from noisy time-series data called Coherent Filtering is proposed. It can be effectively applied to data with different distributions at different scales in various real-world problems, where the environments could be sparse or extremely crowded with heavy noise. Experimental evaluation and comparison on synthetic and real data show the existence of Coherence Neighbor Invariance and the effectiveness of our Coherent Filtering. <s> BIB007 </s> Crowded Scene Analysis: A Survey <s> D. Experiments <s> This paper presents a novel method to extract dominant motion patterns (MPs) and the main entry/exit areas from a surveillance video. The method first computes motion histograms for each pixel and then converts it into orientation distribution functions (ODFs). Given these ODFs, a novel particle meta-tracking procedure is launched which produces meta-tracks, i.e. particle trajectories. As opposed to conventional tracking which focuses on individual moving objects, meta-tracking uses particles to follow the dominant flow of the traffic. In a last step, a novel method is used to simultaneously identify the main entry/exit areas and recover the predominant MPs. The meta-tracking procedure is a unique way to connect low-level motion features to long-range MPs. This kind of tracking is inspired by brain fiber tractography which has long been used to find dominant connections in the brain. Our method is fast, simple to implement, and works both on sparse and extremely crowded scenes. It also works on highly structured scenes (highways, traffic-light corners, etc.) as well as on chaotic scenes. <s> BIB008
To give a pilot evaluation of crowd motion pattern segmentation methods, we test five representative methods on six videos representing different challenges. These videos were taken from the UCF datasets and BIB008 , and their lengths range from 100 to 5000 frames. Some have a small number of moving objects (Pedestrians, Crosswalk, Roundabout) while others are highly crowded (Marathon, Mecca, Pilgrims); some have a simple layout (Marathon, Mecca, Crosswalk) while others are complex (Pedestrians, Pilgrims, Roundabout); some have well-structured dynamics (Marathon, Mecca) while others present fairly unstructured scenes (Pedestrians) or semi-structured scenes (Pilgrims, Crosswalk, Roundabout).

Fig. 6 gives the motion segmentation results of the five selected methods on the six videos. It can be seen that these methods produce similar results on sequences of well-structured (Marathon, Mecca) and semi-structured scenes (Pilgrims, Crosswalk, Roundabout), but different results on unstructured scenes (Pedestrians). For structured scenes, all the methods segment the areas with different motion characteristics well, and each resulting region looks continuous and unified. For unstructured scenes, though the patterns segmented by particle flow with the FTLE field and by streak flow with watershed still look unified, in fact the individuals within the resulting regions move differently; optical flow with DBSCAN (density-based spatial clustering of applications with noise) obtains poor results; coherent-filtering produces quite scattered pieces, reflecting motion patterns at a certain point in time; and the segmentation results of meta-tracking look quite cluttered because they mix several motion patterns together. Nevertheless, meta-tracking can handle unstructured scenes well owing to its ability to represent multiple motion patterns.

An experiment on video clips of 100 frames shows that the average execution times of DBSCAN BIB006 , FTLE BIB002 , watershed BIB004 , meta-tracking BIB008 and coherent-filtering BIB007 are around 6.3 seconds, 22.4 seconds, 5.6 seconds, 9.3 seconds and 0.4 seconds, respectively (CPU: i7-3770, 3.4 GHz; memory: 8 GB). Here we do not count the time for motion feature extraction. The difference in computation time lies in the fact that FTLE and meta-tracking require motion information across the whole video sequence to generate a single motion pattern map, while DBSCAN, watershed and coherent-filtering can generate a motion pattern map from just two adjacent video frames in each iteration, which is more efficient.

To evaluate the motion pattern segmentation results quantitatively, we manually label the detected motion regions. The numbers of true and false detections from the different methods on three representative videos (Mecca for a highly crowded structured scene, Roundabout for a semi-structured scene, and Pedestrians for an unstructured scene) are given in Table II. Moreover, due to the noise in motion clusters, the detection numbers cannot fully reflect the performance, so we use completeness as an additional measure of segmentation accuracy. Here, completeness is the ratio of the detected motion pattern area to the ground-truth area. The results are also given in Table II for comparison.
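For reference, the completeness measure can be computed from binary masks as in the short sketch below; the mask names are placeholders, and intersecting the detection with the ground truth (so that over-segmentation is not rewarded) is our reading of the definition above.

```python
import numpy as np

def completeness(detected_mask, gt_mask):
    """Fraction of the ground-truth motion-pattern area covered by a detection.

    detected_mask, gt_mask: boolean arrays of the same frame size.
    """
    covered = np.logical_and(detected_mask, gt_mask).sum()
    return covered / max(gt_mask.sum(), 1)  # guard against an empty label
```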
Motion pattern segmentation can also be used for source/sink seeking and path planning. We conduct a further experiment on the New York Grand Central Station video from the CUHK dataset. The test video is 30 minutes long and contains an unstructured crowded scene. Since flow field based models and coherent-filtering cannot handle extremely stochastic crowded scenes, we only evaluate two methods in this experiment: meta-tracking BIB008 and the RFT model BIB005 . From the experimental results shown in Fig. 7, we can clearly see that some regular paths are extracted from the mixed crowd motion patterns. Overall, the results produced by the two methods are similar. The results from RFT look better, since the model can incorporate prior knowledge from annotation (e.g., the number and positions of sources/sinks) in the training process. The meta-tracking method finds the sources/sinks via clustering, performing the whole process automatically without human intervention.

Fig. 6. Motion pattern segmentation results on the six test videos. The first row is produced by optical flow with DBSCAN BIB006 ; the second row by particle flow and FTLE field segmentation BIB002 ; the third row by streak flow and watershed segmentation BIB004 ; the fourth row by coherent-filtering BIB007 ; and the fifth row by meta-tracking BIB008 . For figures in the first to fourth rows, different colors represent different motion patterns. For figures in the fifth row, both color and line continuity distinguish different motion patterns. (Best viewed in color)

Fig. 7. Part of the source/sink seeking results from meta-tracking BIB008 and the RFT model BIB005 (columns titled "Meta-tracking" and "Random Field Topic"). Different colors in the columns represent different motion patterns, each indicating a regular path of pedestrians. (Best viewed in color)

The quantitative evaluation results are shown in Table III, where the numbers of true and false sources/sinks are used. Here, a true detection means that the algorithm finds an entry/exit point in accordance with the manually labeled ground truth, while a false detection means that the algorithm finds a wrong entry/exit point. Besides, to evaluate the generated paths, we first cluster each detected path shown in Fig. 7 into a smooth dominant trajectory using the method proposed in BIB003 , and then compute the similarity between the dominant trajectory and the labeled path. A distance measure based on the longest common subsequence (LCSS) BIB001 is used. The LCSS distance between paths F_i and F_j is defined as

D_LCSS(F_i, F_j) = 1 − LCSS(F_i, F_j) / min(T_i, T_j),

where T_i and T_j are the lengths of F_i and F_j, respectively, and LCSS(F_i, F_j) specifies the number of matching points between the two trajectories. A good algorithm should result in high similarity values, i.e., low LCSS distances. In Table III, we only consider the truly detected paths and compute the average LCSS distance over them. Note that several true detections produced by meta-tracking BIB008 may correspond to one ground-truth label; the RFT model refers to BIB005 .
Crowded Scene Analysis: A Survey <s> E. Summary <s> This paper presents an algorithm for tracking individual targets in high density crowd scenes containing hundreds of people. Tracking in such a scene is extremely challenging due to the small number of pixels on the target, appearance ambiguity resulting from the dense packing, and severe inter-object occlusions. The novel tracking algorithm, which is outlined in this paper, will overcome these challenges using a scene structure based force model. In this force model an individual, when moving in a particular scene, is subjected to global and local forces that are functions of the layout of that scene and the locomotive behavior of other individuals in the scene. The key ingredients of the force model are three floor fields, which are inspired by the research in the field of evacuation dynamics, namely Static Floor Field (SFF), Dynamic Floor Field (DFF), and Boundary Floor Field (BFF). These fields determine the probability of move from one location to another by converting the long-range forces into local ones. The SFF specifies regions of the scene which are attractive in nature (e.g. an exit location). The DFF specifies the immediate behavior of the crowd in the vicinity of the individual being tracked. The BFF specifies influences exhibited by the barriers in the scene (e.g. walls, no-go areas). By combining cues from all three fields with the available appearance information, we track individual targets in high density crowds. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> E. Summary <s> This paper presents a target tracking framework for unstructured crowded scenes. Unstructured crowded scenes are defined as those scenes where the motion of a crowd appears to be random with different participants moving in different directions over time. This means each spatial location in such scenes supports more than one, or multi-modal, crowd behavior. The case of tracking in structured crowded scenes, where the crowd moves coherently in a common direction, and the direction of motion does not vary over time, was previously handled in [1]. In this work, we propose to model various crowd behavior (or motion) modalities at different locations of the scene by employing Correlated Topic Model (CTM) of [16]. In our construction, words correspond to low level quantized motion features and topics correspond to crowd behaviors. It is then assumed that motion at each location in an unstructured crowd scene is generated by a set of behavior proportions, where behaviors represent distributions over low-level motion features. This way any one location in the scene may support multiple crowd behavior modalities and can be used as prior information for tracking. Our approach enables us to model a diverse set of unstructured crowd domains, which range from cluttered time-lapse microscopy videos of cell populations in vitro, to footage of crowded sporting events. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> E. Summary <s> Tracking pedestrians is a vital component of many computer vision applications, including surveillance, scene understanding, and behavior analysis. Videos of crowded scenes present significant challenges to tracking due to the large number of pedestrians and the frequent partial occlusions that they produce. 
The movement of each pedestrian, however, contributes to the overall crowd motion (i.e., the collective motions of the scene's constituents over the entire video) that exhibits an underlying spatially and temporally varying structured pattern. In this paper, we present a novel Bayesian framework for tracking pedestrians in videos of crowded scenes using a space-time model of the crowd motion. We represent the crowd motion with a collection of hidden Markov models trained on local spatio-temporal motion patterns, i.e., the motion patterns exhibited by pedestrians as they move through local space-time regions of the video. Using this unique representation, we predict the next local spatio-temporal motion pattern a tracked pedestrian will exhibit based on the observed frames of the video. We then use this prediction as a prior for tracking the movement of an individual in videos of extremely crowded scenes. We show that our approach of leveraging the crowd motion enables tracking in videos of complex scenes that present unique difficulty to other approaches. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> E. Summary <s> This paper proposes Motion Structure Tracker (MST) to solve the problem of tracking in very crowded structured scenes. It combines visual tracking, motion pattern learning and multi-target tracking. Tracking in crowded scenes is very challenging due to hundreds of similar objects, cluttered background, small object size, and occlusions. However, structured crowded scenes exhibit clear motion pattern(s), which provides rich prior information. In MST, tracking and detection are performed jointly, and motion pattern information is integrated in both steps to enforce scene structure constraint. MST is initially used to track a single target, and further extended to solve a simplified version of the multi-target tracking problem. Experiments are performed on real-world challenging sequences, and MST gives promising results. Our method significantly outperforms several state-of-the-art methods both in terms of track ratio and accuracy. <s> BIB004
In general, flow field model based segmentation and similarity based clustering require little human intervention; motion segmentation can thus be performed in an unsupervised way, which is convenient for many video analysis applications. Table I summarizes the reviewed studies on crowd motion pattern segmentation, providing the test dataset, applicable scene, and crowd density level in the experimental settings of each method. Flow field model based methods are the most studied in motion pattern segmentation; a flow field can simulate crowd motions well by considering the individuals as particles. Similarity based clustering methods are becoming more and more popular, because in high-density crowds local motion features such as tracklets can be obtained more easily than complete trajectories, and they have been shown to be more discriminative than local optical flows. Recently, probability models, especially the topic models borrowed from language processing, have been applied to capture spatial and temporal dependencies. Motion patterns in crowded scenes can be interpreted hierarchically: individual movements constitute small group motions, which further form large motion patterns. The probability topic model may be a good choice due to its capacity to discover semantic regions and explore more details within the motion patterns. The learned motion patterns can be used in a range of applications, including path or source/sink seeking in crowded scenes, as we have shown in the experiments. Besides, several tracking algorithms BIB002 , BIB001 , BIB003 , BIB004 also learn scene-specific motion patterns to improve tracking performance.
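Since several of the methods above start from KLT tracklets, the following OpenCV sketch shows one plausible way to harvest them: detect corners, track them with pyramidal Lucas-Kanade, and terminate a tracklet when tracking fails or a maximum length is reached. All parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_tracklets(frames, max_len=20):
    """Collect short KLT tracklets from a list of grayscale frames."""
    tracklets = []
    prev = frames[0]
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                  qualityLevel=0.01, minDistance=5)
    tracks = [[tuple(p.ravel())] for p in pts]
    for frame in frames[1:]:
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
        alive_pts, alive_tracks = [], []
        for p, ok, tr in zip(nxt, status.ravel(), tracks):
            if ok and len(tr) < max_len:
                tr.append(tuple(p.ravel()))
                alive_pts.append(p)
                alive_tracks.append(tr)
            else:
                tracklets.append(tr)        # the tracklet ends here
        if not alive_pts:
            break
        pts = np.float32(alive_pts).reshape(-1, 1, 2)
        tracks, prev = alive_tracks, frame
        # (new corners could be re-detected periodically; omitted for brevity)
    return tracklets + tracks
```

The resulting tracklets can then be clustered, e.g., with the LCSS distance sketched earlier or with coherent filtering.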
Crowded Scene Analysis: A Survey <s> V. CROWD BEHAVIOR RECOGNITION <s> In this paper we introduce a novel method to detect and localize abnormal behaviors in crowd videos using Social Force model. For this purpose, a grid of particles is placed over the image and it is advected with the space-time average of optical flow. By treating the moving particles as individuals, their interaction forces are estimated using social force model. The interaction force is then mapped into the image plane to obtain Force Flow for every pixel in every frame. Randomly selected spatio-temporal volumes of Force Flow are used to model the normal behavior of the crowd. We classify frames as normal and abnormal by using a bag of words approach. The regions of anomalies in the abnormal frames are localized using interaction forces. The experiments are conducted on a publicly available dataset from University of Minnesota for escape panic scenarios and a challenging dataset of crowd videos taken from the web. The experiments show that the proposed method captures the dynamics of the crowd behavior successfully. In addition, we have shown that the social force approach outperforms similar approaches based on pure optical flow. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> V. CROWD BEHAVIOR RECOGNITION <s> This article presents a survey on crowd analysis using computer vision techniques, covering different aspects such as people tracking, crowd density estimation, event detection, validation, and simulation. It also reports how related the areas of computer vision and computer graphics should be to deal with current challenges in crowd analysis. <s> BIB002
Crowd behavior analysis has been an active research topic in the simulation and graphics fields, where the main goal is to create realistic crowd motions BIB001 . Relatively little effort has been spent on the reliable classification and understanding of human activities in real-world crowded scenes. In general, approaches for crowd behavior analysis can be divided into "holistic" and "object-based" ones. The former treat the crowd as a single entity, which may be suitable for structured scenes of medium or high density BIB002 , while the latter treat the crowd as a collection of individuals. In holistic approaches, crowd dynamics models are usually adopted to judge the behaviors as a whole, but local behaviors in unstructured scenes cannot be handled. Object-based approaches infer both the behaviors and their associated individuals.
Crowded Scene Analysis: A Survey <s> A. Holistic Approach <s> Based on the Lagrangian framework for fluid dynamics, a streakline representation of flowis presented to solve computer vision problems involving crowd and traffic flow. Streaklines are traced in a fluid flow by injecting color material, such as smoke or dye, which is transported with the flow and used for visualization. In the context of computer vision, streaklines may be used in a similar way to transport information about a scene, and they are obtained by repeatedly initializing a fixed grid of particles at each frame, then moving both current and past particles using optical flow. Streaklines are the locus of points that connect particles which originated from the same initial position. In this paper, a streakline technique is developed to compute several important aspects of a scene, such as flow and potential functions using the Helmholtz decomposition theorem. This leads to a representation of the flow that more accurately recognizes spatial and temporal changes in the scene, compared with other commonly used flow representations. Applications of the technique to segmentation and behavior analysis provide comparison to previously employed techniques, showing that the streakline method outperforms the state-of-theart in segmentation, and opening a new domain of application for crowd analysis based on potentials. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> A. Holistic Approach <s> Efficient analysis of human behavior in video surveillance scenes is a very challenging problem. Most traditional approaches fail when applied in real conditions and contexts like amounts of persons, appearance ambiguity, and occlusion. In this work, we propose to deal with this problem by modeling the global motion information obtained from optical flow vectors. The obtained direction and magnitude models learn the dominantmotion orientations and magnitudes at each spatial location of the scene and are used to detect the major motion patterns. The applied region-based segmentation algorithm groups local blocks that share the same motion direction and speed and allows a subregion of the scene to appear in different patterns. The second part of the approach consists in the detection of events related to groups of people which are merge, split, walk, run, local dispersion, and evacuation by analyzing the instantaneous optical flow vectors and comparing the learned models. The approach is validated and experimented on standard datasets of the computer vision community. The qualitative and quantitative results are discussed <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> A. Holistic Approach <s> A method is proposed for identifying five crowd behaviors (bottlenecks, fountainheads, lanes, arches, and blocking) in visual scenes. In the algorithm, a scene is overlaid by a grid of particles initializing a dynamical system defined by the optical flow. Time integration of the dynamical system provides particle trajectories that represent the motion in the scene; these trajectories are used to locate regions of interest in the scene. Linear approximation of the dynamical system provides behavior classification through the Jacobian matrix; the eigenvalues determine the dynamic stability of points in the flow and each type of stability corresponds to one of the five crowd behaviors. The eigenvalues are only considered in the regions of interest, consistent with the linear approximation and the implicated behaviors. 
The algorithm is repeated over sequential clips of a video in order to record changes in eigenvalues, which may imply changes in behavior. The method was tested on over 60 crowd and traffic videos. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> A. Holistic Approach <s> Over the past decades, a wide attention has been paid to crowd control and management in the intelligent video surveillance area. Among the tasks for automatic surveillance video analysis, crowd motion modeling lays a crucial foundation for numerous subsequent analysis but encounters many unsolved challenges due to occlusions among pedestrians, complicated motion patterns in crowded scenarios, etc. Addressing the unsolved challenges, the authors propose a novel spatio-temporal viscous fluid field to model crowd motion patterns by exploring both appearance of crowd behaviors and interaction among pedestrians. Large-scale crowd events are hereby recognized based on characteristics of the fluid field. First, a spatio-temporal variation matrix is proposed to measure the local fluctuation of video signals in both spatial and temporal domains. After that, eigenvalue analysis is applied on the matrix to extract the principal fluctuations resulting in an abstract fluid field. Interaction force is then explored based on shear force in viscous fluid, incorporating with the fluctuations to characterize motion properties of a crowd. The authors then construct a codebook by clustering neighboring pixels with similar spatio-temporal features, and consequently, crowd behaviors are recognized using the latent Dirichlet allocation model. The convincing results obtained from the experiments on published datasets demonstrate that the proposed method obtains high-quality results for large-scale crowd behavior perception in terms of both robustness and effectiveness. <s> BIB004
In highly crowded surveillance scenes, moving objects in the sensor range appear small or even unresolved, and very few features can be detected and extracted from an individual object. In such situations, understanding crowd behaviors without knowing the actions of individuals is often advantageous BIB003 . Assuming the crowd fluid is incompressible and irrotational, Mehran et al. BIB001 proposed a concept of potential functions, which consist of two parts: a stream function and a velocity potential function. The former provides information regarding the steady and non-divergent part of the flow, whereas the latter contains information regarding the local changes in the non-curling motions. From this perspective, the potential function field is capable of discriminating lanes and divergent/convergent regions in different scenes. To detect major motion patterns and crowd events, Benabbas et al. BIB002 first clustered low-level motion features to learn the direction and magnitude models of crowds, and then used a region-based segmentation algorithm to generate different motion patterns. After that, crowd events such as merge, split, walk, run, local dispersion, and evacuation were detected by analyzing the instantaneous optical flow vectors and comparing them with the learned models. Later, in Solmaz et al. BIB003 , crowd behaviors including bottlenecks, fountainheads, lanes, arches, and blocking were recognized. In this framework, a scene is represented by a grid of particles initializing a dynamical system defined by the optical flow. Time integration of the dynamical system provides particle trajectories that represent the motion in the scene, and these trajectories are used to locate regions of interest. Behavior classification is obtained from the Jacobian matrix of a linear approximation of the dynamical system: the eigenvalues determine the dynamic stability of points in the flow, and each type of stability corresponds to one of the five crowd behaviors. Recently, a similar spatio-temporal viscous fluid (STVF) field was adopted by Su et al. BIB004 to model crowd motion patterns by exploring both the appearance of crowd behaviors and the interaction among pedestrians. In their approach, large-scale crowd behaviors are recognized based on the characteristics of the fluid field. First, a spatio-temporal variation matrix is proposed to measure the local fluctuation at specific pixels. Then, the force among pedestrians is modeled with the shear force in the spatio-temporal variation fluid field. Finally, a codebook is constructed by clustering neighboring pixels with similar spatio-temporal features, and crowd behaviors are recognized using the LDA model.
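To illustrate the stability analysis, the snippet below classifies a point of the flow from the eigenvalues of a 2x2 Jacobian. The mapping from eigenvalue types to the five behaviors is a simplified assumption inspired by BIB003 , not the authors' exact rule set.

```python
import numpy as np

def classify_critical_point(J, tol=1e-6):
    """Classify local crowd behavior from the Jacobian J (2x2) of the flow.

    Simplified eigenvalue-based mapping (an assumption in the spirit of
    BIB003): converging flow -> bottleneck, diverging -> fountainhead, etc.
    """
    eig = np.linalg.eigvals(J)
    re, im = eig.real, eig.imag
    if np.all(np.abs(im) > tol):            # complex pair: rotating flow
        if np.all(np.abs(re) < tol):
            return "arch/ring"              # pure rotation around the point
        return "bottleneck" if np.all(re < 0) else "fountainhead"  # spiral
    if np.all(re < -tol):
        return "bottleneck"                 # stable node: flow converges
    if np.all(re > tol):
        return "fountainhead"               # unstable node: flow diverges
    if np.any(np.abs(re) < tol):
        return "lane"                       # a neutral direction: parallel flow
    return "blocking"                       # saddle-like obstruction
```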
Crowded Scene Analysis: A Survey <s> B. Object-Based Approach <s> We propose a novel unsupervised learning framework to model activities and interactions in crowded and complicated scenes. Hierarchical Bayesian models are used to connect three elements in visual surveillance: low-level visual features, simple "atomic" activities, and interactions. Atomic activities are modeled as distributions over low-level visual features, and multi-agent interactions are modeled as distributions over atomic activities. These models are learnt in an unsupervised way. Given a long video sequence, moving pixels are clustered into different atomic activities and short video clips are clustered into different interactions. In this paper, we propose three hierarchical Bayesian models, Latent Dirichlet Allocation (LDA) mixture model, Hierarchical Dirichlet Process (HDP) mixture model, and Dual Hierarchical Dirichlet Processes (Dual-HDP) model. They advance existing language models, such as LDA [1] and HDP [2]. Our data sets are challenging video sequences from crowded traffic scenes and train station scenes with many kinds of activities co-occurring. Without tracking and human labeling effort, our framework completes many challenging visual surveillance tasks of board interest such as: (1) discovering typical atomic activities and interactions; (2) segmenting long video sequences into different interactions; (3) segmenting motions into different activities; (4) detecting abnormality; and (5) supporting high-level queries on activities and interactions. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> B. Object-Based Approach <s> In this paper, a new Mixture model of Dynamic pedestrian-Agents (MDA) is proposed to learn the collective behavior patterns of pedestrians in crowded scenes. Collective behaviors characterize the intrinsic dynamics of the crowd. From the agent-based modeling, each pedestrian in the crowd is driven by a dynamic pedestrian-agent, which is a linear dynamic system with its initial and termination states reflecting a pedestrian's belief of the starting point and the destination. Then the whole crowd is modeled as a mixture of dynamic pedestrian-agents. Once the model is unsupervisedly learned from real data, MDA can simulate the crowd behaviors. Furthermore, MDA can well infer the past behaviors and predict the future behaviors of pedestrians given their trajectories only partially observed, and classify different pedestrian behaviors in the scene. The effectiveness of MDA and its applications are demonstrated by qualitative and quantitative experiments on the video surveillance dataset collected from the New York Grand Central Station. <s> BIB002
In unstructured crowded scenes, considering the crowd as one entity would fail to identify abnormal events that arise from the inappropriate actions of an individual. For instance, a running person in a crowd can indicate an abnormal event if the rest of the crowd is walking. Object-based methods may overcome this problem. When the crowd density is not high, conventional approaches for behavior analysis are usually performed based on the detection and segmentation of each individual, and they suffer from the complexity of isolating individuals in a dense crowd. Addressing this, some researchers have extended this kind of approach to highly crowded scenes by utilizing low-level features and probability models instead of tracking single objects BIB002 , BIB001 . Wang et al. BIB001 used hierarchical Bayesian models to connect three elements in visual surveillance: low-level visual features, simple "atomic" activities, and interactions. Atomic local motions are classified into atomic activities if they are observed in certain semantic regions. The global behaviors of video clips are modeled based on the distributions of low-level visual features, and multi-agent interactions are modeled based on the distributions of atomic activities. Without labeled training data or a tracking procedure, the framework fulfils many challenging visual surveillance tasks, such as segmenting motions into different activities and supporting high-level queries on activities and interactions. Later, in BIB002 , Zhou et al. proposed a mixture model of dynamic pedestrian-agents (MDA) to learn the collective behavior patterns of pedestrians in crowded scenes. In this agent-based modeling, each pedestrian in the crowd is driven by a dynamic pedestrian-agent, and the whole crowd is modeled as a mixture of dynamic pedestrian-agents. Once the model is learned from the training data, MDA can infer the past behaviors of pedestrians, predict their future behaviors given partially observed trajectories, and classify different pedestrian behaviors in the scene. However, some limitations exist; e.g., MDA assumes an affine transform, and it has difficulty representing some complex shapes.
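To make the agent-based formulation concrete, the fragment below rolls out a single pedestrian-agent as a linear dynamic system with Gaussian noise, in the spirit of the MDA model BIB002 . The dynamics matrix `A`, bias `b`, and noise scale are illustrative assumptions; the full model additionally learns beliefs about starting points and destinations and mixes many such agents.

```python
import numpy as np

def simulate_agent(A, b, x0, steps=50, noise=0.5, rng=None):
    """Roll out one pedestrian-agent: x_{t+1} = A @ x_t + b + w_t.

    x0: initial state (e.g., [x, y] position); w_t ~ N(0, noise^2 I).
    """
    rng = rng or np.random.default_rng()
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        w = rng.normal(0.0, noise, size=xs[-1].shape)
        xs.append(A @ xs[-1] + b + w)
    return np.stack(xs)

# Illustrative drift towards a destination (assumed parameters):
# A = 0.98 * np.eye(2); b = np.array([0.4, 0.1])
# traj = simulate_agent(A, b, x0=[5.0, 20.0])
```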
Crowded Scene Analysis: A Survey <s> C. Summary <s> Abnormal crowd behavior detection is an important research issue in computer vision. However, complex real-life situations (e.g., severe occlusion, over-crowding, etc.) still challenge the effectiveness of previous algorithms. Recently, the methods based on spatio-temporal cuboid are popular in video analysis. To our knowledge, the spatio-temporal cuboid is always extracted randomly from a video sequence in the existing methods. The size of each cuboid and the total number of cuboids are determined empirically. The extracted features either contain the redundant information or lose a lot of important information which extremely affect the accuracy. In this paper, we propose an improved method. In our method, the spatio-temporal cuboid is no longer determined arbitrarily, but by the information contained in the video sequence. The spatio-temporal cuboid is extracted from video sequence with adaptive size. The total number of cuboids and the extracting positions can be determined automatically. Moreover, to compute the similarity between two spatio-temporal cuboids with different sizes, we design a novel data structure of codebook which is constructed as a set of two-level trees. The experiment results show that the detection rates of false positive and false negative are significantly reduced. Keywords: Codebook, latent dirichlet allocation (LDA), social force model, spatio-temporal cuboid. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> C. Summary <s> Modeling human behaviors and activity patterns for recognition or detection of special event has attracted significant research interest in recent years. Diverse methods that are abound for building intelligent vision systems aimed at scene understanding and making correct semantic inference from the observed dynamics of moving targets. Most applications are in surveillance, video content retrieval, and human-computer interfaces. This paper presents not only an update extending previous related surveys, but also a focus on contextual abnormal human behavior detection especially in video surveillance applications. The main purpose of this survey is to extensively identify existing methods and characterize the literature in a manner that brings key challenges to attention. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> C. Summary <s> This paper presents an approach for detecting suspicious events in videos by using only the video itself as the training samples for valid behaviors. These salient events are obtained in real-time by detecting anomalous spatio-temporal regions in a densely sampled video. The method codes a video as a compact set of spatio-temporal volumes, while considering the uncertainty in the codebook construction. The spatio-temporal compositions of video volumes are modeled using a probabilistic framework, which calculates their likelihood of being normal in the video. This approach can be considered as an extension of the Bag of Video words (BOV) approaches, which represent a video as an order-less distribution of video volumes. The proposed method imposes spatial and temporal constraints on the video volumes so that an inference mechanism can estimate the probability density functions of their arrangements. Anomalous events are assumed to be video arrangements with very low frequency of occurrence. The algorithm is very fast and does not employ background subtraction, motion estimation or tracking. 
It is also robust to spatial and temporal scale changes, as well as some deformations. Experiments were performed on four video datasets of abnormal activities in both crowded and non-crowded scenes and under difficult illumination conditions. The proposed method outperformed all other approaches based on BOV that do not account for contextual information. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> C. Summary <s> Although sliding window-based approaches have been quite successful in detecting objects in images, it is not a trivial problem to extend them to detecting events in videos. We propose to search for spatio-temporal paths for video event detection. This new formulation can accurately detect and locate video events in cluttered and crowded scenes, and is robust to camera motions. It can also well handle the scale, shape, and intra-class variations of the event. Compared to event detection using spatio-temporal sliding windows, the spatio-temporal paths correspond to the event trajectories in the video space, thus can better handle events composed by moving objects. We prove that the proposed search algorithm can achieve the global optimal solution with the lowest complexity. Experiments are conducted on realistic video datasets with different event detection tasks, such as anomaly event detection, walking person detection, and running detection. Our proposed method is compatible to different types of video features or object detectors and robust to false and missed local detections. It significantly improves the overall detection and localization accuracy over the state-of-the-art methods. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> C. Summary <s> Background subtraction is a fundamental preprocessing step in many surveillance video analysis tasks. In spite of significant efforts, however, background subtraction in crowded scenes remains challenging, especially, when a large number of foreground objects move slowly or just keep still. To address the problem, this paper proposes a selective eigenbackground method for background modeling and subtraction in crowded scenes. The contributions of our method are three-fold: First, instead of training eigenbackgrounds using the original video frames that may contain more or less foregrounds, a virtual frame construction algorithm is utilized to assemble clean background pixels from different original frames so as to construct some virtual frames as the training and update samples. This can significantly improve the purity of the trained eigenbackgrounds. Second, for a crowded scene with diversified environmental conditions (e.g., illuminations), it is difficult to use only one eigenbackground model to deal with all these variations, even using some online update strategies. Thus given several models trained offline, we utilize peak signal-to-noise ratio to adaptively choose the optimal one to initialize the online eigenbackground model. Third, to tackle the problem that not all pixels can obtain the optimal results when the reconstruction is performed at once for the whole frame, our method selects the best eigenbackground for each pixel to obtain an improved quality of the reconstructed background image. Extensive experiments on the TRECVID-SED dataset and the Road video dataset show that our method outperforms several state-of-the-art methods remarkably. <s> BIB005 </s> Crowded Scene Analysis: A Survey <s> C. 
Summary <s> The detection and localization of anomalous behaviors in crowded scenes is considered, and a joint detector of temporal and spatial anomalies is proposed. The proposed detector is based on a video representation that accounts for both appearance and dynamics, using a set of mixture of dynamic textures models. These models are used to implement 1) a center-surround discriminant saliency detector that produces spatial saliency scores, and 2) a model of normal behavior that is learned from training data and produces temporal saliency scores. Spatial and temporal anomaly maps are then defined at multiple spatial scales, by considering the scores of these operators at progressively larger regions of support. The multiscale scores act as potentials of a conditional random field that guarantees global consistency of the anomaly judgments. A data set of densely crowded pedestrian walkways is introduced and used to evaluate the proposed anomaly detector. Experiments on this and other data sets show that the latter achieves state-of-the-art anomaly detection results. <s> BIB006
The principle for classifying crowd behavior recognition methods depends on the perspective from which we observe the crowd: as a single entity or as a collection of independent individuals. Holistic approaches classify whole streams of people as normal or abnormal, or recognize predefined crowd behaviors. These methods ignore individual differences and consider all individuals in the crowd to have similar motion characteristics. Such a hypothesis allows us to analyze crowd behavior states from a systematic perspective. However, without information from object detection and tracking, a particular activity cannot be separated from other activities simultaneously occurring in the same stream. In contrast, object-based approaches are able to locate typical activities and interactions in the scene, detect normal and abnormal activities, and support high-level semantic queries on activities and interactions. However, these methods cannot handle densely crowded scenes, where individual object detection does not work and the crowd dynamics appear chaotic. Under such circumstances, the spatial distribution of low-level visual features is also chaotic, and the subsequent clustering procedure will not work well.

Table IV lists representative crowd behavior recognition techniques as well as their reported performances. A missing entry means that the quantitative result is not reported in the available literature. Besides, the crowd behaviors defined in different works are not the same, so it is impossible to directly compare the performances of these methods; each of the relevant studies has been conducted under different experimental conditions, using different data and different evaluation criteria. It is usually difficult to compare different methods objectively, since anomalies are often defined in a somewhat subjective form, sometimes according to what the algorithms can detect BIB006 . We make a brief comparison of recently developed anomaly detection techniques in Table V. These techniques are evaluated on different datasets with different criteria, so it is hard to compare them directly; the table intends to provide a quick way to understand the solutions as a whole.

Methods based on knowledge from physical systems for crowd representation are convenient to apply, but they are limited to recognizing certain patterns BIB002 . In order to overcome these limitations, data-driven learning-based models can be utilized or combined to represent the events and structures of the scene. Among them, generative topic models seem promising in this area BIB003 , BIB001 . The topic models share the fundamental idea that a crowded scene with its various events can be treated as a document with a mixture of topics. Characterizing unusual events by low word-topic probabilities far from existing typical topics, topic models have the ability to automatically discover meaningful events or activities from the co-occurrences of visual words. In addition, some recent techniques in video event detection, such as spatio-temporal path searching BIB004 and background subtraction BIB005 , have been extended to crowded scenes and can be utilized to improve crowd event detection.
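As a sketch of the low word-topic probability idea, the snippet below scores a clip's bag of visual words under a fitted topic model and flags clips whose average per-word negative log-likelihood is high. The sklearn-style model object and the thresholding rule are assumptions for illustration.

```python
import numpy as np

def clip_anomaly_score(lda, hist):
    """Average per-word negative log-likelihood of a clip histogram.

    lda:  a fitted sklearn LatentDirichletAllocation model (assumed).
    hist: (vocab,) visual-word counts for the clip.
    """
    theta = lda.transform(hist.reshape(1, -1))[0]          # topic mixture
    phi = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
    word_prob = theta @ phi                                # p(word | clip)
    words = hist > 0
    nll = -(hist[words] * np.log(word_prob[words] + 1e-12)).sum()
    return nll / max(hist.sum(), 1)                        # higher = more unusual

# A clip is flagged abnormal when its score exceeds a threshold chosen on
# normal validation clips (an assumed decision rule).
```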
Crowded Scene Analysis: A Survey <s> VI. CROWD ANOMALY DETECTION <s> In this paper we introduce a novel method to detect and localize abnormal behaviors in crowd videos using Social Force model. For this purpose, a grid of particles is placed over the image and it is advected with the space-time average of optical flow. By treating the moving particles as individuals, their interaction forces are estimated using social force model. The interaction force is then mapped into the image plane to obtain Force Flow for every pixel in every frame. Randomly selected spatio-temporal volumes of Force Flow are used to model the normal behavior of the crowd. We classify frames as normal and abnormal by using a bag of words approach. The regions of anomalies in the abnormal frames are localized using interaction forces. The experiments are conducted on a publicly available dataset from University of Minnesota for escape panic scenarios and a challenging dataset of crowd videos taken from the web. The experiments show that the proposed method captures the dynamics of the crowd behavior successfully. In addition, we have shown that the social force approach outperforms similar approaches based on pure optical flow. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> VI. CROWD ANOMALY DETECTION <s> Extremely crowded scenes present unique challenges to video analysis that cannot be addressed with conventional approaches. We present a novel statistical framework for modeling the local spatio-temporal motion pattern behavior of extremely crowded scenes. Our key insight is to exploit the dense activity of the crowded scene by modeling the rich motion patterns in local areas, effectively capturing the underlying intrinsic structure they form in the video. In other words, we model the motion variation of local space-time volumes and their spatial-temporal statistical behaviors to characterize the overall behavior of the scene. We demonstrate that by capturing the steady-state motion behavior with these spatio-temporal motion pattern models, we can naturally detect unusual activity as statistical deviations. Our experiments show that local spatio-temporal motion pattern modeling offers promising results in real-world scenes with complex activities that are hard for even human observers to analyze. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> VI. CROWD ANOMALY DETECTION <s> A novel method for crowd flow modeling and anomaly detection is proposed for both coherent and incoherent scenes. The novelty is revealed in three aspects. First, it is a unique utilization of particle trajectories for modeling crowded scenes, in which we propose new and efficient representative trajectories for modeling arbitrarily complicated crowd flows. Second, chaotic dynamics are introduced into the crowd context to characterize complicated crowd motions by regulating a set of chaotic invariant features, which are reliably computed and used for detecting anomalies. Third, a probabilistic framework for anomaly detection and localization is formulated. The overall work-flow begins with particle advection based on optical flow. Then particle trajectories are clustered to obtain representative trajectories for a crowd flow. Next, the chaotic dynamics of all representative trajectories are extracted and quantified using chaotic invariants known as maximal Lyapunov exponent and correlation dimension. 
Probabilistic model is learned from these chaotic feature set, and finally, a maximum likelihood estimation criterion is adopted to identify a query video of a scene as normal or abnormal. Furthermore, an effective anomaly localization algorithm is designed to locate the position and size of an anomaly. Experiments are conducted on known crowd data set, and results show that our method achieves higher accuracy in anomaly detection and can effectively localize anomalies. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> VI. CROWD ANOMALY DETECTION <s> Based on the Lagrangian framework for fluid dynamics, a streakline representation of flowis presented to solve computer vision problems involving crowd and traffic flow. Streaklines are traced in a fluid flow by injecting color material, such as smoke or dye, which is transported with the flow and used for visualization. In the context of computer vision, streaklines may be used in a similar way to transport information about a scene, and they are obtained by repeatedly initializing a fixed grid of particles at each frame, then moving both current and past particles using optical flow. Streaklines are the locus of points that connect particles which originated from the same initial position. In this paper, a streakline technique is developed to compute several important aspects of a scene, such as flow and potential functions using the Helmholtz decomposition theorem. This leads to a representation of the flow that more accurately recognizes spatial and temporal changes in the scene, compared with other commonly used flow representations. Applications of the technique to segmentation and behavior analysis provide comparison to previously employed techniques, showing that the streakline method outperforms the state-of-theart in segmentation, and opening a new domain of application for crowd analysis based on potentials. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> VI. CROWD ANOMALY DETECTION <s> A novel framework for anomaly detection in crowded scenes is presented. Three properties are identified as important for the design of a localized video representation suitable for anomaly detection in such scenes: 1) joint modeling of appearance and dynamics of the scene, and the abilities to detect 2) temporal, and 3) spatial abnormalities. The model for normal crowd behavior is based on mixtures of dynamic textures and outliers under this model are labeled as anomalies. Temporal anomalies are equated to events of low-probability, while spatial anomalies are handled using discriminant saliency. An experimental evaluation is conducted with a new dataset of crowded scenes, composed of 100 video sequences and five well defined abnormality categories. The proposed representation is shown to outperform various state of the art anomaly detection techniques. <s> BIB005 </s> Crowded Scene Analysis: A Survey <s> VI. CROWD ANOMALY DETECTION <s> As surveillance becomes ubiquitous, the amount of data to be processed grows along with the demand for manpower to interpret the data. A key goal of surveillance is to detect behaviors that can be considered anomalous. As a result, an extensive body of research in automated surveillance has been developed, often with the goal of automatic detection of anomalies. Research into anomaly detection in automated surveillance covers a wide range of domains, employing a vast array of techniques. This review presents an overview of recent research approaches on the topic of anomaly detection in automated surveillance. 
The reviewed studies are analyzed across five aspects: surveillance target, anomaly definitions and assumptions, types of sensors used and the feature extraction processes, learning methods, and modeling algorithms. <s> BIB006 </s> Crowded Scene Analysis: A Survey <s> VI. CROWD ANOMALY DETECTION <s> We propose to detect abnormal events via a sparse reconstruction over the normal bases. Given a collection of normal training examples, e.g., an image sequence or a collection of local spatio-temporal patches, we propose the sparse reconstruction cost (SRC) over the normal dictionary to measure the normalness of the testing sample. By introducing the prior weight of each basis during sparse reconstruction, the proposed SRC is more robust compared to other outlier detection criteria. To condense the over-completed normal bases into a compact dictionary, a novel dictionary selection method with group sparsity constraint is designed, which can be solved by standard convex optimization. Observing that the group sparsity also implies a low rank structure, we reformulate the problem using matrix decomposition, which can handle large scale training samples by reducing the memory requirement at each iteration from O(k^2) to O(k) where k is the number of samples. We use the columnwise coordinate descent to solve the matrix decomposition represented formulation, which empirically leads to a similar solution to the group sparsity formulation. By designing different types of spatio-temporal basis, our method can detect both local and global abnormal events. Meanwhile, as it does not rely on object detection and tracking, it can be applied to crowded video scenes. By updating the dictionary incrementally, our method can be easily extended to online event detection. Experiments on three benchmark datasets and the comparison to the state-of-the-art methods validate the advantages of our method. <s> BIB007 </s> Crowded Scene Analysis: A Survey <s> VI. CROWD ANOMALY DETECTION <s> Video anomaly detection plays a critical role for intelligent video surveillance. We present an abnormal video event detection system that considers both spatial and temporal contexts. To characterize the video, we first perform the spatio-temporal video segmentation and then propose a new region-based descriptor called “Motion Context,” to describe both motion and appearance information of the spatio-temporal segment. For anomaly measurements, we formulate the abnormal event detection as a matching problem, which is more robust than statistic model-based methods, especially when the training dataset is of limited size. For each testing spatio-temporal segment, we search for its best match in the training dataset, and determine how normal it is using a dynamic threshold. To speed up the search process, compact random projections are also adopted. Experiments on the benchmark dataset and comparisons with the state-of-the-art methods validate the advantages of our algorithm. <s> BIB008
Anomaly detection is a key aspect of crowded scene analysis, and has attracted much attention BIB003 - BIB004 , BIB001 , BIB002 , BIB007 , BIB005 - BIB008 . However, the problem of anomaly detection remains largely open, and research efforts are scattered not only across approaches, but also in the interpretation of the problem, its assumptions, and its objectives BIB006 . Crowd anomaly detection methods can be learned at different supervision levels: from data with labels of both normal and abnormal behaviors, or from a corpus of unlabeled data under the assumption that most of it is normal. Depending on the scale of interest, previous studies on anomaly detection can be categorized into two classes, global anomaly detection and local anomaly detection BIB007 , answering respectively "does the scene contain an anomaly?" and "where is the anomaly taking place?". Detailed descriptions are given in the following.
Crowded Scene Analysis: A Survey <s> A. Global Anomaly Detection <s> Based on the Lagrangian framework for fluid dynamics, a streakline representation of flowis presented to solve computer vision problems involving crowd and traffic flow. Streaklines are traced in a fluid flow by injecting color material, such as smoke or dye, which is transported with the flow and used for visualization. In the context of computer vision, streaklines may be used in a similar way to transport information about a scene, and they are obtained by repeatedly initializing a fixed grid of particles at each frame, then moving both current and past particles using optical flow. Streaklines are the locus of points that connect particles which originated from the same initial position. In this paper, a streakline technique is developed to compute several important aspects of a scene, such as flow and potential functions using the Helmholtz decomposition theorem. This leads to a representation of the flow that more accurately recognizes spatial and temporal changes in the scene, compared with other commonly used flow representations. Applications of the technique to segmentation and behavior analysis provide comparison to previously employed techniques, showing that the streakline method outperforms the state-of-theart in segmentation, and opening a new domain of application for crowd analysis based on potentials. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> A. Global Anomaly Detection <s> Efficient analysis of human behavior in video surveillance scenes is a very challenging problem. Most traditional approaches fail when applied in real conditions and contexts like amounts of persons, appearance ambiguity, and occlusion. In this work, we propose to deal with this problem by modeling the global motion information obtained from optical flow vectors. The obtained direction and magnitude models learn the dominantmotion orientations and magnitudes at each spatial location of the scene and are used to detect the major motion patterns. The applied region-based segmentation algorithm groups local blocks that share the same motion direction and speed and allows a subregion of the scene to appear in different patterns. The second part of the approach consists in the detection of events related to groups of people which are merge, split, walk, run, local dispersion, and evacuation by analyzing the instantaneous optical flow vectors and comparing the learned models. The approach is validated and experimented on standard datasets of the computer vision community. The qualitative and quantitative results are discussed <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> A. Global Anomaly Detection <s> A method is proposed for identifying five crowd behaviors (bottlenecks, fountainheads, lanes, arches, and blocking) in visual scenes. In the algorithm, a scene is overlaid by a grid of particles initializing a dynamical system defined by the optical flow. Time integration of the dynamical system provides particle trajectories that represent the motion in the scene; these trajectories are used to locate regions of interest in the scene. Linear approximation of the dynamical system provides behavior classification through the Jacobian matrix; the eigenvalues determine the dynamic stability of points in the flow and each type of stability corresponds to one of the five crowd behaviors. The eigenvalues are only considered in the regions of interest, consistent with the linear approximation and the implicated behaviors. 
The algorithm is repeated over sequential clips of a video in order to record changes in eigenvalues, which may imply changes in behavior. The method was tested on over 60 crowd and traffic videos. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> A. Global Anomaly Detection <s> Over the past decades, a wide attention has been paid to crowd control and management in the intelligent video surveillance area. Among the tasks for automatic surveillance video analysis, crowd motion modeling lays a crucial foundation for numerous subsequent analysis but encounters many unsolved challenges due to occlusions among pedestrians, complicated motion patterns in crowded scenarios, etc. Addressing the unsolved challenges, the authors propose a novel spatio-temporal viscous fluid field to model crowd motion patterns by exploring both appearance of crowd behaviors and interaction among pedestrians. Large-scale crowd events are hereby recognized based on characteristics of the fluid field. First, a spatio-temporal variation matrix is proposed to measure the local fluctuation of video signals in both spatial and temporal domains. After that, eigenvalue analysis is applied on the matrix to extract the principal fluctuations resulting in an abstract fluid field. Interaction force is then explored based on shear force in viscous fluid, incorporating with the fluctuations to characterize motion properties of a crowd. The authors then construct a codebook by clustering neighboring pixels with similar spatio-temporal features, and consequently, crowd behaviors are recognized using the latent Dirichlet allocation model. The convincing results obtained from the experiments on published datasets demonstrate that the proposed method obtains high-quality results for large-scale crowd behavior perception in terms of both robustness and effectiveness. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> A. Global Anomaly Detection <s> Modeling human crowds is an important issue for video surveillance and is a challenging task due to their unpredictable behavior. In this paper, the position of an isolated region that comprises an individual person or a set of occluded persons is detected by background subtraction. Each isolated region is considered a vertex and a human crowd is thus modeled by a graph. To construct a graph, Delaunay triangulation is used to systematically connect vertices and therefore the problem of event detection of human crowds is formulated as measuring the topology variation of consecutive graphs in temporal order. To effectively model the topology variations, local characteristics, such as triangle deformations and eigenvalue-based subgraph analysis, and global features, such as moments, are used and are finally combined as an indicator to detect if any anomalies of human crowd(s) present in the scene. Experimental results obtained by using extensive dataset show that our system is effective in detecting anomalous events for uncontrolled environment of surveillance videos. <s> BIB005 </s> Crowded Scene Analysis: A Survey <s> A. Global Anomaly Detection <s> People naturally escape from a place when unexpected events happen. Based on this observation, efficient detection of crowd escape behavior in surveillance videos is a promising way to perform timely detection of anomalous situations. In this paper, we propose a Bayesian framework for escape detection by directly modeling crowd motion in both the presence and absence of escape events. 
Specifically, we introduce the concepts of potential destinations and divergent centers to characterize crowd motion in the above two cases respectively, and construct the corresponding class-conditional probability density functions of optical flow. Escape detection is finally performed based on the proposed Bayesian framework. Although only data associated with nonescape behavior are included in the training set, the density functions associated with the case of escape can also be adaptively updated using observed data. In addition, the identified divergent centers indicate possible locations at which the unexpected events occur. The performance of our proposed method is validated in a number of experiments on crowd escape detection in various scenarios. <s> BIB006
Usually, the self-organization effects occurring in crowds result in regular motion patterns. However, when abnormal events affecting public safety happen, such as fires, explosions, or transportation disasters, people escape, driving the crowd dynamics into a completely different state. Global anomaly detection aims to distinguish such abnormal crowd states from normal ones. Related methodologies usually detect the changes or events based on the apparent motion estimated over the whole scene. It is also important for a global anomaly detection system not only to detect the presence of an anomaly in the scene, but also to accurately determine the start and end of the events, as well as the transitions between them. It should be noted that the holistic approaches for crowd behavior recognition mentioned in Section V, such as BIB001 , BIB003 , BIB004 , BIB002 , can be applied to global crowd anomaly detection. There also exist some works designed specifically for anomaly detection in this global style. In Chen et al. BIB005 , each isolated region is considered a vertex and the human crowd is represented with a graph. To effectively model the topology variations, local characteristics (e.g., triangle deformations and eigenvalue-based subgraph analysis) and global features (e.g., moments) are used; they are finally combined into an indicator of whether any crowd anomaly is present in the scene. Recently, a Bayesian framework for crowd escape behavior detection in videos was proposed BIB006 , which directly models crowd motion as non-escape and escape. Crowd motions are characterized using optical flow fields, and the associated class-conditional probability density functions are constructed from the field attributes; crowd escape behavior is then detected through a Bayesian formulation. Experiments demonstrated that the method is more accurate than state-of-the-art techniques in detecting crowd escape behavior. However, this method cannot yet be applied to high-density crowded scenes, since crowd escape behavior in that case differs significantly from that in low- or medium-density crowded scenes.
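The Bayesian decision underlying such escape detection can be sketched as follows. This is a simplified illustration, not the model of BIB006 (which characterizes crowd motion via potential destinations and divergent centers): here each class-conditional density over per-frame optical-flow vectors is approximated by a single Gaussian, and the frame label follows from comparing the two posteriors.

```python
import numpy as np

def fit_gaussian(flow_vectors):
    """Fit a Gaussian to optical-flow vectors; a stand-in for the
    class-conditional densities of the escape/non-escape classes."""
    mu = flow_vectors.mean(axis=0)
    cov = np.cov(flow_vectors.T) + 1e-6 * np.eye(flow_vectors.shape[1])
    return mu, cov

def log_gaussian(x, mu, cov):
    """Log-density of a multivariate Gaussian at x."""
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    quad = d @ np.linalg.solve(cov, d)
    return -0.5 * (quad + logdet + len(mu) * np.log(2.0 * np.pi))

def is_escape(frame_flows, normal_model, escape_model, prior_escape=0.1):
    """Label a frame as escape when the escape posterior dominates;
    frame_flows is an iterable of flow vectors sampled from the frame."""
    ll_normal = sum(log_gaussian(f, *normal_model) for f in frame_flows)
    ll_escape = sum(log_gaussian(f, *escape_model) for f in frame_flows)
    return ll_escape + np.log(prior_escape) > ll_normal + np.log(1.0 - prior_escape)
```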
Crowded Scene Analysis: A Survey <s> Capturing Temporal Statistics in Distribution Based Hidden Markov Models <s> A dynamic texture is a spatio-temporal generative model for video, which represents video sequences as observations from a linear dynamical system. This work studies the mixture of dynamic textures, a statistical model for an ensemble of video sequences that is sampled from a finite collection of visual processes, each of which is a dynamic texture. An expectation-maximization (EM) algorithm is derived for learning the parameters of the model, and the model is related to previous works in linear systems, machine learning, time- series clustering, control theory, and computer vision. Through experimentation, it is shown that the mixture of dynamic textures is a suitable representation for both the appearance and dynamics of a variety of visual processes that have traditionally been challenging for computer vision (for example, fire, steam, water, vehicle and pedestrian traffic, and so forth). When compared with state-of-the-art methods in motion segmentation, including both temporal texture methods and traditional representations (for example, optical flow or other localized motion representations), the mixture of dynamic textures achieves superior performance in the problems of clustering and segmenting video of such processes. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> Capturing Temporal Statistics in Distribution Based Hidden Markov Models <s> Extremely crowded scenes present unique challenges to video analysis that cannot be addressed with conventional approaches. We present a novel statistical framework for modeling the local spatio-temporal motion pattern behavior of extremely crowded scenes. Our key insight is to exploit the dense activity of the crowded scene by modeling the rich motion patterns in local areas, effectively capturing the underlying intrinsic structure they form in the video. In other words, we model the motion variation of local space-time volumes and their spatial-temporal statistical behaviors to characterize the overall behavior of the scene. We demonstrate that by capturing the steady-state motion behavior with these spatio-temporal motion pattern models, we can naturally detect unusual activity as statistical deviations. Our experiments show that local spatio-temporal motion pattern modeling offers promising results in real-world scenes with complex activities that are hard for even human observers to analyze. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> Capturing Temporal Statistics in Distribution Based Hidden Markov Models <s> A novel framework for anomaly detection in crowded scenes is presented. Three properties are identified as important for the design of a localized video representation suitable for anomaly detection in such scenes: 1) joint modeling of appearance and dynamics of the scene, and the abilities to detect 2) temporal, and 3) spatial abnormalities. The model for normal crowd behavior is based on mixtures of dynamic textures and outliers under this model are labeled as anomalies. Temporal anomalies are equated to events of low-probability, while spatial anomalies are handled using discriminant saliency. An experimental evaluation is conducted with a new dataset of crowded scenes, composed of 100 video sequences and five well defined abnormality categories. The proposed representation is shown to outperform various state of the art anomaly detection techniques. 
<s> BIB003 </s> Crowded Scene Analysis: A Survey <s> Capturing Temporal Statistics in Distribution Based Hidden Markov Models <s> Abnormal crowd behavior detection is an important research issue in computer vision. The traditional methods first extract the local spatio-temporal cuboid from video. Then the cuboid is described by optical flow or gradient features, etc. Unfortunately, because of the complex environmental conditions, such as severe occlusion, over-crowding, etc., the existing algorithms cannot be efficiently applied. In this paper, we derive the high-frequency and spatio-temporal (HFST) features to detect the abnormal crowd behaviors in videos. They are obtained by applying the wavelet transform to the plane in the cuboid which is parallel to the time direction. The high-frequency information characterize the dynamic properties of the cuboid. The HFST features are applied to the both global and local abnormal crowd behavior detection. For the global abnormal crowd behavior detection, Latent Dirichlet allocation is used to model the normal scenes. For the local abnormal crowd behavior detection, Multiple Hidden Markov Models, with an competitive mechanism, is employed to model the normal scenes. The comprehensive experiment results show that the speed of detection has been greatly improved using our approach. Moreover, a good accuracy has been achieved considering the false positive and false negative detection rates. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> Capturing Temporal Statistics in Distribution Based Hidden Markov Models <s> Unusual event detection in crowded scenes remains challenging because of the diversity of events and noise. In this paper, we present a novel approach for unusual event detection via sparse reconstruction of dynamic textures over an overcomplete basis set, with the dynamic texture described by local binary patterns from three orthogonal planes (LBPTOP). The overcomplete basis set is learnt from the training data where only the normal items observed. In the detection process, given a new observation, we compute the sparsecoefficients using the Dantzig Selector algorithm which was proposed in the literature of compressed sensing. Then the reconstruction errors are computed, based on which we detect the abnormal items. Our application can be used to detect both local and global abnormal events. We evaluate our algorithm on UCSD Abnormality Datasets for local anomaly detection, which is shown to outperform current state-of-the-art approaches, and we also get promising results for rapid escape detection using the PETS2009 dataset. <s> BIB005 </s> Crowded Scene Analysis: A Survey <s> Capturing Temporal Statistics in Distribution Based Hidden Markov Models <s> As surveillance becomes ubiquitous, the amount of data to be processed grows along with the demand for manpower to interpret the data. A key goal of surveillance is to detect behaviors that can be considered anomalous. As a result, an extensive body of research in automated surveillance has been developed, often with the goal of automatic detection of anomalies. Research into anomaly detection in automated surveillance covers a wide range of domains, employing a vast array of techniques. This review presents an overview of recent research approaches on the topic of anomaly detection in automated surveillance. 
The reviewed studies are analyzed across five aspects: surveillance target, anomaly definitions and assumptions, types of sensors used and the feature extraction processes, learning methods, and modeling algorithms. <s> BIB006 </s> Crowded Scene Analysis: A Survey <s> Capturing Temporal Statistics in Distribution Based Hidden Markov Models <s> The detection and localization of anomalous behaviors in crowded scenes is considered, and a joint detector of temporal and spatial anomalies is proposed. The proposed detector is based on a video representation that accounts for both appearance and dynamics, using a set of mixture of dynamic textures models. These models are used to implement 1) a center-surround discriminant saliency detector that produces spatial saliency scores, and 2) a model of normal behavior that is learned from training data and produces temporal saliency scores. Spatial and temporal anomaly maps are then defined at multiple spatial scales, by considering the scores of these operators at progressively larger regions of support. The multiscale scores act as potentials of a conditional random field that guarantees global consistency of the anomaly judgments. A data set of densely crowded pedestrian walkways is introduced and used to evaluate the proposed anomaly detector. Experiments on this and other data sets show that the latter achieves state-of-the-art anomaly detection results. <s> BIB007
While the set of prototypes provides a picture of similar activities in the scene, it does not capture the relationship between their occurrences. As a result, we cannot assume that the approach in the previous section will lead to robust detection of unusual activities. We now consider approaches that model and leverage the temporal dynamics of the motion. a) Hidden Markov Model: The hidden Markov model (HMM) is able to take into account the inherently dynamic nature of the observed features BIB006 . It is applicable to video event detection as well as anomaly detection. Based on HMMs, Kratz et al. BIB002 presented a framework for modeling local spatio-temporal motion behaviors in extremely crowded scenes. Fig. 8 illustrates a single HMM for each spatial location of observation. In the training phase, the temporal relationship between local motion patterns is captured via a distribution-based HMM, and the spatial relationship is modeled by a coupled HMM. In the testing phase, unusual events are identified as statistical deviations in video sequences of the same scene. The experimental results indicated that the proposed representation is suitable for analyzing extremely crowded scenes. However, the authors set up only one HMM for each local area, so the method works only for limited kinds of normal behaviors or specific crowded scenes; if the normal behavior type changes, the detection rate for abnormal behaviors decreases unless the model is re-trained. A similar scheme was proposed by Wang et al. BIB004 . In their approach, high-frequency and spatio-temporal (HFST) information is computed via the wavelet transform to characterize the dynamic properties of the local region. Then, to detect various local abnormal crowd events, multiple HMMs are adopted, each accounting for one type of behavior. b) Dynamic Texture Model: The dynamic texture is a spatio-temporal generative model for video. It represents video sequences as observations from a linear dynamical system, and exhibits spatio-temporal stationary properties BIB005 . Recent works BIB003 , BIB007 have shown that dynamic textures are more suitable than optical flow for local unusual event detection in crowded scenes. Originally proposed for motion segmentation by Chan et al. BIB001 , the mixture of dynamic textures (MDT) is a generative model in which a collection of video sequences is modeled as samples from a set of underlying dynamic textures. Fig. 9 illustrates the MDT of a video patch. Based on MDT, Li et al. BIB003 , BIB007 proposed a joint detector of temporal and spatial anomalies in crowded scenes. The detector is built on a video representation that accounts for both appearance and dynamics, using a set of MDT models: the normal patterns are learned with one MDT per scene subregion, which provides the spatial support of anomaly detection for both spatial and temporal anomalies.
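As a concrete, simplified stand-in for such per-location modeling (not the distribution-based and coupled HMMs of Kratz et al.), the sketch below trains one Gaussian HMM per spatial cell with the hmmlearn library and scores test sequences by their likelihood under the model of their cell; the local motion-pattern features are assumed to be extracted beforehand, and the names are illustrative.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # third-party: pip install hmmlearn

def train_cell_hmms(training_sequences, n_states=4):
    """Train one HMM per spatial cell.

    training_sequences: dict mapping a cell id to a list of 2-D arrays,
    each of shape (sequence_length, feature_dim), holding the local
    motion-pattern features observed at that cell in normal video."""
    models = {}
    for cell, seqs in training_sequences.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        model = GaussianHMM(n_components=n_states, covariance_type="diag")
        model.fit(X, lengths)
        models[cell] = model
    return models

def cell_anomaly_score(models, cell, sequence):
    """Negative per-frame log-likelihood of a test sequence under the
    HMM of its cell; statistical deviations yield high scores."""
    return -models[cell].score(sequence) / len(sequence)
```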
Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> It is suggested that the motion of pedestrians can be described as if they would be subject to ``social forces.'' These ``forces'' are not directly exerted by the pedestrians' personal environment, but they are a measure for the internal motivations of the individuals to perform certain actions (movements). The corresponding force concept is discussed in more detail and can also be applied to the description of other behaviors. In the presented model of pedestrian behavior several force terms are essential: first, a term describing the acceleration towards the desired velocity of motion; second, terms reflecting that a pedestrian keeps a certain distance from other pedestrians and borders; and third, a term modeling attractive effects. The resulting equations of motion of nonlinearly coupled Langevin equations. Computer simulations of crowds of interacting pedestrians show that the social force model is capable of describing the self-organization of several observed collective effects of pedestrian behavior very realistically. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> This paper develops the theory and computation of Lagrangian Coherent Structures (LCS), which are defined as ridges of Finite-Time Lyapunov Exponent (FTLE) fields. These ridges can be seen as finite-time mixing templates. Such a framework is common in dynamical systems theory for autonomous and time-periodic systems, in which examples of LCS are stable and unstable manifolds of fixed points and periodic orbits. The concepts defined in this paper remain applicable to flows with arbitrary time dependence and, in particular, to flows that are only defined (computed or measured) over a finite interval of time. Previous work has demonstrated the usefulness of FTLE fields and the associated LCSs for revealing the Lagrangian behavior of systems with general time dependence. However, ridges of the FTLE field need not be exactly advected with ::: the flow. The main result of this paper is an estimate for the flux across an LCS, which shows that the flux is small, and in most cases negligible, for well-defined LCSs or those that rotate at a speed comparable to the local Eulerian velocity field, and are computed from FTLE fields with a sufficiently long integration time. Under these hypotheses, the structures represent nearly invariant manifolds even in systems with arbitrary time dependence. ::: The results are illustrated on three examples. The first is a simplified dynamical model of a double-gyre flow. The second is surface current data collected by high-frequency radar stations along the coast of Florida and the third is unsteady separation over an airfoil. In all cases, the existence of LCSs governs the transport and it is verified numerically that the flux of particles through these distinguished lines is indeed negligible. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> This paper proposes a framework in which Lagrangian particle dynamics is used for the segmentation of high density crowd flows and detection of flow instabilities. For this purpose, a flow field generated by a moving crowd is treated as an aperiodic dynamical system. A grid of particles is overlaid on the flow field, and is advected using a numerical integration scheme. 
The evolution of particles through the flow is tracked using a flow map, whose spatial gradients are subsequently used to setup a Cauchy Green deformation tensor for quantifying the amount by which the neighboring particles have diverged over the length of the integration. The maximum eigenvalue of the tensor is used to construct a finite time Lyapunov exponent (FTLE) field, which reveals the Lagrangian coherent structures (LCS) present in the underlying flow. The LCS divide flow into regions of qualitatively different dynamics and are used to locate boundaries of the flow segments in a normalized cuts framework. Any change in the number of flow segments over time is regarded as an instability, which is detected by establishing correspondences between flow segments over time. The experiments are conducted on a challenging set of videos taken from Google Video and a National Geographic documentary. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> The ability of human visual system to detect visual saliency is extraordinarily fast and reliable. However, computational modeling of this basic intelligent behavior still remains a challenge. This paper presents a simple method for the visual saliency detection. Our model is independent of features, categories, or other forms of prior knowledge of the objects. By analyzing the log-spectrum of an input image, we extract the spectral residual of an image in spectral domain, and propose a fast method to construct the corresponding saliency map in spatial domain. We test this model on both natural pictures and artificial images such as psychological patterns. The result indicate fast and robust saliency detection of our method. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> Computer vision algorithms have played a pivotal role in commercial video surveillance systems for a number of years. However, a common weakness among these systems is their inability to handle crowded scenes. In this thesis, we have developed algorithms that overcome some of the challenges encountered in videos of crowded environments such as sporting events, religious festivals, parades, concerts, train stations, airports, and malls. We adopt a top-down approach by first performing a global-level analysis that locates dynamically distinct crowd regions within the video. This knowledge is then employed in the detection of abnormal behaviors and tracking of individual targets within crowds. In addition, the thesis explores the utility of contextual information necessary for persistent tracking and re-acquisition of objects in crowded scenes. ::: For the global-level analysis, a framework based on Lagrangian Particle Dynamics is proposed to segment the scene into dynamically distinct crowd regions or groupings. For this purpose, the spatial extent of the video is treated as a phase space of a time-dependent dynamical system in which transport from one region of the phase space to another is controlled by the optical flow. Next, a grid of particles is advected forward in time through the phase space using a numerical integration to generate a "flow map". The flow map relates the initial positions of particles to their final positions. The spatial gradients of the flow map are used to compute a Cauchy Green Deformation tensor that quantifies the amount by which the neighboring particles diverge over the length of the integration. 
The maximum eigenvalue of the tensor is used to construct a forward Finite Time Lyapunov Exponent (FTLE) field that reveals the Attracting Lagrangian Coherent Structures (LCS). The same process is repeated by advecting the particles backward in time to obtain a backward FTLE field that reveals the repelling LCS. The attracting and repelling LCS are the time dependent invariant manifolds of the phase space and correspond to the boundaries between dynamically distinct crowd flows. The forward and backward FTLE fields are combined to obtain one scalar field that is segmented using a watershed segmentation algorithm to obtain the labeling of distinct crowd-flow segments. Next, abnormal behaviors within the crowd are localized by detecting changes in the number of crowd-flow segments over time. ::: Next, the global-level knowledge of the scene generated by the crowd-flow segmentation is used as an auxiliary source of information for tracking an individual target within a crowd. This is achieved by developing a scene structure-based force model. This force model captures the notion that an individual, when moving in a particular scene, is subjected to global and local forces that are functions of the layout of that scene and the locomotive behavior of other individuals in his or her vicinity. The key ingredients of the force model are three floor fields that are inspired by research in the field of evacuation dynamics; namely, Static Floor Field (SFF), Dynamic Floor Field (DFF), and Boundary Floor Field (BFF). These fields determine the probability of moving from one location to the next by converting the long-range forces into local forces. The SFF specifies regions of the scene that are attractive in nature, such as an exit location. The DFF, which is based on the idea of active walker models, corresponds to the virtual traces created by the movements of nearby individuals in the scene. The BFF specifies influences exhibited by the barriers within the scene, such as walls and no-entry areas. By combining influence from all three fields with the available appearance information, we are able to track individuals in high-density crowds. The results are reported on real-world sequences of marathons and railway stations that contain thousands of people. A comparative analysis with respect to an appearance-based mean shift tracker is also conducted by generating the ground truth. The result of this analysis demonstrates the benefit of using floor fields in crowded scenes. ::: The occurrence of occlusion is very frequent in crowded scenes due to a high number of interacting objects. To overcome this challenge, we propose an algorithm that has been developed to augment a generic tracking algorithm to perform persistent tracking in crowded environments. The algorithm exploits the contextual knowledge, which is divided into two categories consisting of motion context (MC) and appearance context (AC). The MC is a collection of trajectories that are representative of the motion of the occluded or unobserved object. These trajectories belong to other moving individuals in a given environment. The MC is constructed using a clustering scheme based on the Lyapunov Characteristic Exponent (LCE), which measures the mean exponential rate of convergence or divergence of the nearby trajectories in a given state space. Next, the MC is used to predict the location of the occluded or unobserved object in a regression framework. 
It is important to note that the LCE is used for measuring divergence between a pair of particles while the FTLE field is obtained by computing the LCE for a grid of particles. The appearance context (AC) of a target object consists of its own appearance history and appearance information of the other objects that are occluded. The intent is to make the appearance descriptor of the target object more discriminative with respect to other unobserved objects, thereby reducing the possible confusion between the unobserved objects upon re-acquisition. This is achieved by learning the distribution of the intra-class variation of each occluded object using all of its previous observations. In addition, a distribution of inter-class variation for each target-unobservable object pair is constructed. Finally, the re-acquisition decision is made using both the MC and the AC. <s> BIB005 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> In this paper we introduce a novel method to detect and localize abnormal behaviors in crowd videos using Social Force model. For this purpose, a grid of particles is placed over the image and it is advected with the space-time average of optical flow. By treating the moving particles as individuals, their interaction forces are estimated using social force model. The interaction force is then mapped into the image plane to obtain Force Flow for every pixel in every frame. Randomly selected spatio-temporal volumes of Force Flow are used to model the normal behavior of the crowd. We classify frames as normal and abnormal by using a bag of words approach. The regions of anomalies in the abnormal frames are localized using interaction forces. The experiments are conducted on a publicly available dataset from University of Minnesota for escape panic scenarios and a challenging dataset of crowd videos taken from the web. The experiments show that the proposed method captures the dynamics of the crowd behavior successfully. In addition, we have shown that the social force approach outperforms similar approaches based on pure optical flow. <s> BIB006 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> We present a novel method for the discovery and statistical representation of motion patterns in a scene observed by a static camera. Related methods involving learning of patterns of activity rely on trajectories obtained from object detection and tracking systems, which are unreliable in complex scenes of crowded motion. We propose a mixture model representation of salient patterns of optical flow, and present an algorithm for learning these patterns from dense optical flow in a hierarchical, unsupervised fashion. Using low level cues of noisy optical flow, K-means is employed to initialize a Gaussian mixture model for temporally segmented clips of video. The components of this mixture are then filtered and instances of motion patterns are computed using a simple motion model, by linking components across space and time. Motion patterns are then initialized and membership of instances in different motion patterns is established by using KL divergence between mixture distributions of pattern instances. Finally, a pixel level representation of motion patterns is proposed by deriving conditional expectation of optical flow. Results of extensive experiments are presented for multiple surveillance sequences containing numerous patterns involving both pedestrian and vehicular traffic. 
<s> BIB007 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> A novel method for crowd flow modeling and anomaly detection is proposed for both coherent and incoherent scenes. The novelty is revealed in three aspects. First, it is a unique utilization of particle trajectories for modeling crowded scenes, in which we propose new and efficient representative trajectories for modeling arbitrarily complicated crowd flows. Second, chaotic dynamics are introduced into the crowd context to characterize complicated crowd motions by regulating a set of chaotic invariant features, which are reliably computed and used for detecting anomalies. Third, a probabilistic framework for anomaly detection and localization is formulated. The overall work-flow begins with particle advection based on optical flow. Then particle trajectories are clustered to obtain representative trajectories for a crowd flow. Next, the chaotic dynamics of all representative trajectories are extracted and quantified using chaotic invariants known as maximal Lyapunov exponent and correlation dimension. Probabilistic model is learned from these chaotic feature set, and finally, a maximum likelihood estimation criterion is adopted to identify a query video of a scene as normal or abnormal. Furthermore, an effective anomaly localization algorithm is designed to locate the position and size of an anomaly. Experiments are conducted on known crowd data set, and results show that our method achieves higher accuracy in anomaly detection and can effectively localize anomalies. <s> BIB008 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> Abnormal crowd behavior detection is an important research issue in computer vision. However, complex real-life situations (e.g., severe occlusion, over-crowding, etc.) still challenge the effectiveness of previous algorithms. Recently, the methods based on spatio-temporal cuboid are popular in video analysis. To our knowledge, the spatio-temporal cuboid is always extracted randomly from a video sequence in the existing methods. The size of each cuboid and the total number of cuboids are determined empirically. The extracted features either contain the redundant information or lose a lot of important information which extremely affect the accuracy. In this paper, we propose an improved method. In our method, the spatio-temporal cuboid is no longer determined arbitrarily, but by the information contained in the video sequence. The spatio-temporal cuboid is extracted from video sequence with adaptive size. The total number of cuboids and the extracting positions can be determined automatically. Moreover, to compute the similarity between two spatio-temporal cuboids with different sizes, we design a novel data structure of codebook which is constructed as a set of two-level trees. The experiment results show that the detection rates of false positive and false negative are significantly reduced. Keywords: Codebook, latent dirichlet allocation (LDA), social force model, spatio-temporal cuboid. <s> BIB009 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> Unusual event detection in crowded scenes remains challenging because of the diversity of events and noise. In this paper, we present a novel approach for unusual event detection via sparse reconstruction of dynamic textures over an overcomplete basis set, with the dynamic texture described by local binary patterns from three orthogonal planes (LBPTOP). 
The overcomplete basis set is learnt from the training data where only the normal items observed. In the detection process, given a new observation, we compute the sparsecoefficients using the Dantzig Selector algorithm which was proposed in the literature of compressed sensing. Then the reconstruction errors are computed, based on which we detect the abnormal items. Our application can be used to detect both local and global abnormal events. We evaluate our algorithm on UCSD Abnormality Datasets for local anomaly detection, which is shown to outperform current state-of-the-art approaches, and we also get promising results for rapid escape detection using the PETS2009 dataset. <s> BIB010 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> We propose a new scheme for detecting and localizing the abnormal crowd behavior in video sequences. The proposed method starts from the assumption that the interaction force, as estimated by the Social Force Model (SFM), is a significant feature to analyze crowd behavior. We step forward this hypothesis by optimizing this force using Particle Swarm Optimization (PSO) to perform the advection of a particle population spread randomly over the image frames. The population of particles is drifted towards the areas of the main image motion, driven by the PSO fitness function aimed at minimizing the interaction force, so as to model the most diffused, normal, behavior of the crowd. In this way, anomalies can be detected by checking if some particles (forces) do not fit the estimated distribution, and this is done by a RANSAC-like method followed by a segmentation algorithm to finely localize the abnormal areas. A large set of experiments are carried out on public available datasets, and results show the consistent higher performances of the proposed method as compared to other state-of-the-art algorithms, proving the goodness of the proposed approach. <s> BIB011 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> This paper presents a novel method for global anomaly detection in crowded scenes. The proposed method introduces the Particle Swarm Optimization (PSO) method as a robust algorithm for optimizing the interaction force computed using the Social Force Model (SFM). The main objective of the proposed method is to drift the population of particles towards the areas of the main image motion. Such displacement is driven by the PSO fitness function aimed at minimizing the interaction force, so as to model the most diffused and typical crowd behavior. Experiments are extensively conducted on public available datasets, namely, UMN and PETS 2009, and also on a challenging dataset of videos taken from Internet. The experimental results revealed that the proposed scheme outperforms all the available state-of-the-art algorithms for global anomaly detection. <s> BIB012 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> This paper proposes a novel method to locate crowd behavior instability spatio-temporally using a velocity-field based social force model. Considering the impacts of velocity field on interaction force between individuals, we establish an improved social force model by introducing collision probability in view of velocity distribution. 
As compared with commonly-used social force model, which defines interaction force as a dependent variable of relative geometric (physical) position of the individuals, this improved model can provide a better prediction of interactions using the collision probability in a dynamic crowd. With spatio-temporal instability analysis, we can extract video clips with potential abnormality and as well locate region of interest where abnormality is likely to happen. The experimental results demonstrate that the proposed method can be applied to detection of abnormal events with high accuracy of instability estimation due to the velocity-field based social force model. <s> BIB013 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> In this paper, we present a novel method to detect two typical abnormal activities: pedestrain gathering and running. The method is based on the potential energy and kinetic energy. Reliable estimation of crowd density and crowd distribution are firstly introduced into the detection of anomalies. Estimation of crowd density is obtained from the image potential energy model. By building the foreground histogram on the X and Y axis respectively, the probability distribution of the histogram can be obtained, and then we define the Crowd Distribution Index (CDI) to represent the dispersion. The Crowd Distribution Index (CDI) is used to detect pedestrains gathering. The kinetic energy is determined by computation of optical flow and Crowd Distribution Index, and then used to detect people running. The detection for abnormal activities is based on the threshold analysis. Without training data, the model can robustly detect abnormal behaviors in low and medium crowd density with low computation load. <s> BIB014 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> Reliable estimation of crowd density in public plays an important role on intelligent surveillance in recent years. There have been a lot of research on people counting; however, most of them only consider crowd with slight occlusions and their algorithms usually accompany with high computational complexity. In this paper, we present a simple model based on image potential energy to estimate the crowd density. The image potential energy is inspired by gravitational potential energy. Based on the facts that the pixels related to the object on the image plane are fewer if the object is farther away from the camera and the farther objects appear closer to the origin of the image plane, we define the image potential energy on the image plane. The main characteristics of the model is that the image potential energy related to objects is almost invariable no matter how far away the object being from the camera. The potential energy model can deal with severe occlusions with low computational complexity. It is adaptive to low and high density of crowd in public scenes. When the crowd density is below 10, the model accuracy rate is about 80% and the error is about 1 people count for a series of frames. When the crowd density varies from 10 to 40, the crowd density changes very fast, we can't make accuracy analysis as in low crowd density; however, for one single frame, the error rate is below 7% while the average error varies from 1 to 3 in the experiments. 
<s> BIB015 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> To reduce cognitive overload in CCTV monitoring, it is critical to have an automated way to focus the attention of operators on interesting events taking place in crowded public scenes. We present a global motion saliency detection method based on spectral analysis, which aims to discover and localise interesting regions, of which the flows are salient in relation to the dominant crowd flows. The method is fast and does not rely on prior knowledge specific to a scene and any training videos. We demonstrate its potential on public scene videos, with applications in salient action detection, counter flow detection, and unstable crowd flow detection. <s> BIB016 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> Abnormal crowd behavior detection is an important issue in crowd surveillance. In this paper, a novel local pressure model is proposed to detect the abnormality in large-scale crowd scene based on local crowd characteristics. These characteristics include the local density and velocity which are very significant parameters for measuring the dynamic of crowd. A grid of particles is placed over the image to reduce the computation of the local crowd parameters. Local pressure is generated by applying these local characteristics in pressure model. Histogram is utilized to extract the statistical property of the magnitude and direction of the pressure. The crowd feature vector of the whole frame is obtained through the analysis of Histogram of Oriented Pressure (HOP). SVM and median filter are then adopted to detect the anomaly. The performance of the proposed method is evaluated on publicly available datasets from UMN. The experimental results show that the proposed method can achieve a higher accuracy than that of the previous methods on detecting abnormal crowd behavior. <s> BIB017 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> Using Behavior Entropy model, we introduce a novel method to detect and localize abnormal behaviors in crowd scenes. Our key insight is to estimate the behavior entropy of each pixel and whole scene by considering defined pixels' behavior certainty. For this purpose, we introduce information theory and energetics concept to define pixel's behavior certainty based on video's spatial-temporal information. Scene entropy behavior and behavior entropy image can be used to detect and localize anomalies respectively. We discuss parameters' setting by analyzing how they influence model's detecting and localizing abilities, and our model is robust to parameter setting. The experiments are conducted on several publicly available datasets, and show that the proposed method captures the dynamics of the crowd behavior successfully. The results of our method, indicates that the method outperforms the state-of-the-art methods in detecting and localizing several kinds of abnormal behaviors in the crowd. <s> BIB018 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> As a controllable medium, video-realistic crowds are important for creating the illusion of a populated reality in special effects, games, and architectural visualization. While recent progress in simulation and motion captured-based techniques for crowd synthesis has focused on natural macroscale behavior, this paper addresses the complementary problem of synthesizing crowds with realistic microscale behavior and appearance. 
Example-based synthesis methods such as video textures are an appealing alternative to conventional model-based methods, but current techniques are unable to represent and satisfy constraints between video sprites and the scene. This paper describes how to synthesize crowds by segmenting pedestrians from input videos of natural crowds and optimally placing them into an output video while satisfying environmental constraints imposed by the scene. We introduce crowd tubes, a representation of video objects designed to compose a crowd of video billboards while avoiding collisions between static and dynamic obstacles. The approach consists of representing crowd tube samples and constraint violations with a conflict graph. The maximal independent set yields a dense constraint-satisfying crowd composition. We present a prototype system for the capture, analysis, synthesis, and control of video-based crowds. Several results demonstrate the system's ability to generate videos of crowds which exhibit a variety of natural behaviors. <s> BIB019 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> This paper presents an approach for detecting suspicious events in videos by using only the video itself as the training samples for valid behaviors. These salient events are obtained in real-time by detecting anomalous spatio-temporal regions in a densely sampled video. The method codes a video as a compact set of spatio-temporal volumes, while considering the uncertainty in the codebook construction. The spatio-temporal compositions of video volumes are modeled using a probabilistic framework, which calculates their likelihood of being normal in the video. This approach can be considered as an extension of the Bag of Video words (BOV) approaches, which represent a video as an order-less distribution of video volumes. The proposed method imposes spatial and temporal constraints on the video volumes so that an inference mechanism can estimate the probability density functions of their arrangements. Anomalous events are assumed to be video arrangements with very low frequency of occurrence. The algorithm is very fast and does not employ background subtraction, motion estimation or tracking. It is also robust to spatial and temporal scale changes, as well as some deformations. Experiments were performed on four video datasets of abnormal activities in both crowded and non-crowded scenes and under difficult illumination conditions. The proposed method outperformed all other approaches based on BOV that do not account for contextual information. <s> BIB020 </s> Crowded Scene Analysis: A Survey <s> NORMALCY AND ANOMALY MODELING <s> This paper addresses the problem of detecting and localizing abnormal activities in crowded scenes. A spatiotemporal Laplacian eigenmap method is proposed to extract different crowd activities from videos. This is achieved by learning the spatial and temporal variations of local motions in an embedded space. We employ representatives of different activities to construct the model which characterizes the regular behavior of a crowd. This model of regular crowd behavior allows the detection of abnormal crowd activities both in local and global contexts and the localization of regions which show abnormal behavior. Experiments on the recently published data sets show that the proposed method achieves comparable results with the state-of-the-art methods without sacrificing computational simplicity. <s> BIB021
In this section we review the MDT model. The approach parallels classical background subtraction, where each pixel is modeled with a GMM over intensities; observations of low probability under these GMMs are declared foreground. For anomaly detection in crowds, the GMM is replaced by an MDT, and the pixel grid is replaced by a grid of preset displacement. Grid locations define the centers of video cells, from which video patches are extracted. The patches extracted from a subregion (group of cells) are used to learn an MDT during a training phase, as illustrated in Fig. 1; in the testing phase, subregion patches of low probability under the associated MDT are considered anomalies. Given a patch $x_{1:\tau}$, the distribution of the hidden state sequence $s_{1:\tau}$ under the $i$-th DT component, $p_{S|X}(s_{1:\tau} | x_{1:\tau}, z = i)$, is estimated with a Kalman filter and smoother BIB005, as discussed in BIB019. The value of the temporal anomaly map at location $l$ is the negative log-probability of the most-likely state sequence for the patch at $l$, where $s^{\{i\}}_{1:\tau}(l) = \operatorname{argmax}_{s_{1:\tau}} p(s_{1:\tau} | x_{1:\tau}(l), z = i)$. We note that this generalizes the mixture of PCA models of optical flow BIB007: the matrix $C_z$ of (2b) is a PCA basis for patches drawn from mixture component $z$, but the PCA decomposition pertains to patch appearance, not to the subregion, in the training phase. A multi-scale temporal anomaly map is produced by measuring the negative log-probability of each video patch under the MDT of the corresponding region. c) Bag-of-Words Model: One representative approach in anomaly detection is to use bag-of-words (BOW) models based on local spatio-temporal video volumes. This approach usually extracts local low-level visual features, such as motion and texture, either by constructing a pixel-level background model and behavior templates, or by employing spatio-temporal video volumes BIB020. In BIB020, Roshtkhari et al. extended the BOW model for detecting suspicious events in videos. The method codes a video as a compact set of spatio-temporal volumes, and uncertainty is considered in the codebook construction. The spatio-temporal compositions of video volumes are modeled using a probabilistic framework, and anomalous events are assumed to be video arrangements with a very low frequency of occurrence. As a result, an observation is considered to be abnormal if it cannot be reconstructed from previous observations. LDA has been adopted by Wang et al. BIB009, based on spatio-temporal cuboids of adaptive size extracted from the video sequence. To compute the similarity between two spatio-temporal cuboids of different sizes, they designed a novel codebook data structure constructed as a set of two-level trees. The LDA model is used to learn an appropriate number of topics to represent these scenarios, and a new sample is classified as an anomaly if it does not belong to these topics. d) Sparse Representation Model: [61] presented a novel algorithm for abnormal event detection based on the sparse reconstruction cost (SRC) over multi-level histograms of optical flow (MHOF). Given an image sequence or a collection of local spatio-temporal patches, the MHOF features are calculated. Then, the SRC over the normal dictionary is used to measure the normality of the testing sample. By introducing a prior weight for each basis during sparse reconstruction, the proposed SRC is more robust than other outlier detection criteria. Combined with dynamic texture, Xu et al. BIB010 proposed a novel approach for unusual event detection via sparse reconstruction on an over-complete basis set.
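Before turning to the details of BIB010, the following minimal Python sketch illustrates the reconstruction-error criterion shared by these sparse-representation methods: a test vector is sparsely coded over a dictionary of normal samples, and a large residual signals an anomaly. The tiny greedy matching-pursuit routine, the dictionary size, and the toy feature vectors are illustrative stand-ins, not the actual solvers or MHOF features of the cited works.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: select at most k dictionary
    atoms (columns of D) and least-squares fit their coefficients."""
    residual, support = x.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        A = D[:, support]
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        residual = x - A @ coef
    alpha = np.zeros(D.shape[1])
    alpha[support] = coef
    return alpha

def reconstruction_cost(D, x, k=5):
    """Anomaly score: squared error of reconstructing x from the
    dictionary D of normal samples; large values suggest abnormality."""
    alpha = omp(D, x, k)
    return float(np.sum((x - D @ alpha) ** 2))

# toy usage: a dictionary of 40 unit-norm 'normal' feature vectors
rng = np.random.default_rng(0)
D = rng.normal(size=(16, 40))
D /= np.linalg.norm(D, axis=0)
x_normal = D[:, 0] + 0.05 * D[:, 1]   # close to the normal atoms
x_odd = rng.normal(size=16)           # unrelated observation
print(reconstruction_cost(D, x_normal), reconstruction_cost(D, x_odd))
```

On the toy data, the near-dictionary vector reconstructs almost perfectly, while the unrelated one leaves a large residual, which is exactly the decision criterion described above.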
In BIB010, the dynamic texture is described by local binary patterns from three orthogonal planes (LBP-TOP). In the detection process, given the basis set learned from the training procedure and the input observation, the sparse coefficients are computed and the reconstruction error is defined. The unusual events are identified as those dynamic textures with high reconstruction error. e) Manifold Learning Model: In BIB021, a manifold learning-based framework has also been applied for the detection of anomalies in a crowded scene. The spatio-temporal Laplacian eigenmap method is employed to study the local motion structure of the scene. Besides, a pairwise graph is constructed by considering the visual context of multiple local patches in both the spatial and temporal domains. Such a process embeds local motion patterns into different spatial locations, where similar patterns are usually close and different patterns are far apart. This allows clustering the embedded points and discovering different motion patterns in the scene. Finally, a local probability model is used to localize the abnormal regions in the crowded scene, where clusters with few data points or outliers in the embedded space can be considered abnormal. 2) Physics-Inspired Approach: Several physics-inspired models have been proposed for crowd representation, and they have also been utilized and combined with machine learning techniques for anomaly detection. For example, the continuum-based approach and the agent-based approach from the crowd simulation field have both been adopted for anomaly detection in crowded scenes. a) Flow Field Model: Usually, we need to understand how crowds evolve with time and try to find regular patterns, so that we can know immediately where and how the motion pattern of the crowd changes. As noted in Section IV, the work of Ali et al. BIB003 has shown success in motion pattern segmentation, and they also extended their framework to anomaly detection. They constructed a finite time Lyapunov exponent (FTLE) field whose boundaries vary with the crowd changes in terms of the dynamic behavior of the flow. New Lagrangian coherent structures (LCS) BIB002 will appear in the FTLE field exactly at those locations where the changes happen. Any change in the number of flow segments over time is regarded as an instability, and it is detected by establishing correspondences between flow segments over time. Wu et al. BIB008 proposed a method for crowd flow modeling and anomaly detection in both structured and unstructured scenes. The overall work-flow begins with particle advection based on optical flow, and particle trajectories are clustered to obtain representative trajectories for a crowd flow. Next, the chaotic dynamics of all representative trajectories are extracted and quantified using chaotic invariants, known in dynamical systems as the maximal Lyapunov exponent and the correlation dimension. A probability model is learned from these chaotic feature sets. Finally, a maximum likelihood estimation criterion is adopted to identify a query video of a scene as normal or abnormal. Commencing with an estimated optical flow field, Loy et al. BIB016 presented a global motion saliency detection framework. The associated flow vector in the field is represented by its phase angle $-\pi \le \varphi_{x,y,t} \le \pi$ and its velocity magnitude $\gamma_{x,y,t} \ge 0$, referred to as the motion signature for salient motion detection. Then the spectral residual approach BIB004 is applied to the motion signature for motion saliency detection.
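As a small illustration of this motion signature, the following NumPy sketch derives the phase-angle and magnitude maps from a dense optical flow field. The toy flow field is invented for illustration; the flow estimation itself and the downstream spectral residual step are assumed to be provided elsewhere.

```python
import numpy as np

def motion_signature(u, v):
    """Per-pixel motion signature of a dense optical flow field (u, v):
    phase angle phi in [-pi, pi] and non-negative magnitude gamma."""
    phi = np.arctan2(v, u)    # flow direction
    gamma = np.hypot(u, v)    # flow speed
    return phi, gamma

# toy flow: uniform rightward crowd flow with one counter-moving block
u = np.ones((8, 8))
v = np.zeros((8, 8))
u[2:4, 2:4] = -1.0            # region moving against the dominant flow
phi, gamma = motion_signature(u, v)
# a saliency detector (e.g. the spectral residual method) would then be
# run on phi and gamma to highlight the deviating region
```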
The method of Loy et al. has shown its potential for unstable region detection in extremely crowded scenes, and gave fairly similar results to BIB003. b) Social Force Model: The social force model (SFM) has been successfully employed in research fields such as the simulation and analysis of crowds. Mehran et al. BIB006 introduced a novel method to detect and localize abnormal behaviors in crowd videos using the SFM BIB001. For this purpose, the framework proposed by Ali et al. BIB003 is utilized to compute particle flows, and the interaction forces of the particles are estimated using the SFM. The interaction force is then mapped into the image plane to obtain a force flow for every pixel in every frame. Randomly selected spatio-temporal volumes of force flow are used to model the normal behavior of the crowd. Finally, the frames are classified as normal or abnormal by using BOW, and the regions of anomalies in the abnormal frames are localized using the interaction forces. Inspired by BIB006, several SFM-based methods for detecting abnormal crowd behaviors were later proposed. In BIB011, BIB012, Raghavendra et al. introduced the particle swarm optimization (PSO) method for optimizing the interaction force computed using the SFM. The main objective of the proposed method is to drift the population of particles towards the areas of the main image motion. Such displacement is driven by the PSO fitness function, which aims at minimizing the interaction force, so as to model the most diffused and typical crowd behavior. A velocity-field based SFM has been proposed by Zhao et al. BIB013 to locate crowd behavior instability spatio-temporally. The traditional SFM defines the interaction force as a dependent variable of the relative geometric positions of the individuals; in contrast, the improved model can provide a better prediction of interactions using the collision probability in a dynamic crowd. With spatio-temporal instability analysis, the method can extract video clips with potential abnormality and locate the regions of interest where the abnormalities are likely to happen. c) Crowd Energy Model: The crowd has its own characteristics; for example, local density and velocity are key parameters for measuring the crowd dynamics. Yang et al. BIB017 proposed an efficient method based on the histogram of oriented pressure (HOP) to detect crowd anomalies. The SFM and local binary patterns (LBP) are adopted to calculate the pressure. A cross histogram is utilized to produce the feature vector, instead of merging the magnitude histogram and the direction histogram in parallel. Afterwards, a support vector machine and a median filter are adopted to detect the anomaly. In BIB014, Xiong et al. proposed a novel method to detect two typical abnormal activities: pedestrian gathering and running. The method is based on potential energy BIB015 and kinetic energy. A term called the crowd distribution index (CDI) is defined to represent the dispersion, which in turn determines the kinetic energy. Finally, the abnormal activities are detected through threshold analysis. Another abnormal crowd behavior detection model, using behavior entropy, has been proposed in BIB018. The key idea is to analyze the change of the scene behavior entropy (SBE) over time, and to localize abnormal behaviors according to the pixels' behavior entropy distribution in image space. Experiments reveal that the SBE of a frame will rise when running, dispersion, gathering or regressive walking occurs.
These energy-based models can well represent the dispersion along different directions and capture the motion and interaction information among individuals. They work owing to the fact that the dynamic characteristics of a crowd differ markedly between normal and abnormal states. Usually, threshold-based decisions are employed, and the threshold typically has to be determined empirically when the methods are applied to different crowd scenes.
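To make the threshold-based scheme concrete, the sketch below computes a per-frame crowd kinetic energy from an optical flow field and flags frames whose energy exceeds a threshold. The density map used as a mass proxy, and the threshold value, are assumptions for illustration, not the exact quantities of BIB014.

```python
import numpy as np

def kinetic_energy(flow, density):
    """Per-frame crowd kinetic energy: 0.5 * sum of density-weighted
    squared flow speed (density acts as a crude mass proxy)."""
    speed_sq = flow[..., 0] ** 2 + flow[..., 1] ** 2
    return 0.5 * float(np.sum(density * speed_sq))

def detect_abnormal(energies, threshold):
    """Threshold analysis over per-frame energies; as noted above, the
    threshold typically has to be tuned empirically per scene."""
    return [e > threshold for e in energies]

# toy usage: three frames of an H x W x 2 flow field, last one agitated
rng = np.random.default_rng(0)
frames = [rng.normal(0, s, size=(16, 16, 2)) for s in (0.2, 0.2, 1.5)]
density = np.ones((16, 16))
energies = [kinetic_energy(f, density) for f in frames]
print(detect_abnormal(energies, threshold=100.0))  # [False, False, True]
```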
Crowded Scene Analysis: A Survey <s> VII. CROWD VIDEO DATASETS <s> This paper proposes a framework in which Lagrangian particle dynamics is used for the segmentation of high density crowd flows and detection of flow instabilities. For this purpose, a flow field generated by a moving crowd is treated as an aperiodic dynamical system. A grid of particles is overlaid on the flow field, and is advected using a numerical integration scheme. The evolution of particles through the flow is tracked using a flow map, whose spatial gradients are subsequently used to set up a Cauchy Green deformation tensor for quantifying the amount by which the neighboring particles have diverged over the length of the integration. The maximum eigenvalue of the tensor is used to construct a finite time Lyapunov exponent (FTLE) field, which reveals the Lagrangian coherent structures (LCS) present in the underlying flow. The LCS divide the flow into regions of qualitatively different dynamics and are used to locate boundaries of the flow segments in a normalized cuts framework. Any change in the number of flow segments over time is regarded as an instability, which is detected by establishing correspondences between flow segments over time. The experiments are conducted on a challenging set of videos taken from Google Video and a National Geographic documentary. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> VII. CROWD VIDEO DATASETS <s> We propose a novel unsupervised learning framework to model activities and interactions in crowded and complicated scenes. Hierarchical Bayesian models are used to connect three elements in visual surveillance: low-level visual features, simple "atomic" activities, and interactions. Atomic activities are modeled as distributions over low-level visual features, and multi-agent interactions are modeled as distributions over atomic activities. These models are learnt in an unsupervised way. Given a long video sequence, moving pixels are clustered into different atomic activities and short video clips are clustered into different interactions. In this paper, we propose three hierarchical Bayesian models, Latent Dirichlet Allocation (LDA) mixture model, Hierarchical Dirichlet Process (HDP) mixture model, and Dual Hierarchical Dirichlet Processes (Dual-HDP) model. They advance existing language models, such as LDA [1] and HDP [2]. Our data sets are challenging video sequences from crowded traffic scenes and train station scenes with many kinds of activities co-occurring. Without tracking and human labeling effort, our framework completes many challenging visual surveillance tasks of broad interest such as: (1) discovering typical atomic activities and interactions; (2) segmenting long video sequences into different interactions; (3) segmenting motions into different activities; (4) detecting abnormality; and (5) supporting high-level queries on activities and interactions. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> VII. CROWD VIDEO DATASETS <s> A method is proposed for identifying five crowd behaviors (bottlenecks, fountainheads, lanes, arches, and blocking) in visual scenes. In the algorithm, a scene is overlaid by a grid of particles initializing a dynamical system defined by the optical flow. Time integration of the dynamical system provides particle trajectories that represent the motion in the scene; these trajectories are used to locate regions of interest in the scene.
Linear approximation of the dynamical system provides behavior classification through the Jacobian matrix; the eigenvalues determine the dynamic stability of points in the flow and each type of stability corresponds to one of the five crowd behaviors. The eigenvalues are only considered in the regions of interest, consistent with the linear approximation and the implicated behaviors. The algorithm is repeated over sequential clips of a video in order to record changes in eigenvalues, which may imply changes in behavior. The method was tested on over 60 crowd and traffic videos. <s> BIB003
With the development of crowded scene analysis, several crowd datasets are available now. In the following, we list some existing benchmark crowd video datasets. In Table VI, the following information is given: brief descriptions, the size of each database, the labeling level, and the accessibility.

UCF Crowd Dataset BIB001: This crowd dataset is collected mainly from the BBC Motion Gallery and the Getty Images website, and it is publicly available. Its most distinguishing feature is its variations in lighting and field of view, which can facilitate the performance evaluation of algorithms developed for crowded scenes.

UMN Crowd Dataset [103]: This is also a publicly available dataset containing normal and abnormal crowd videos from the University of Minnesota. Each video consists of an initial part of normal behavior and ends with sequences of abnormal behavior.

UCSD Anomaly Detection Dataset [104]: This dataset was acquired with a stationary camera mounted at an elevation, overlooking pedestrian walkways. The crowd density in the walkways ranges from sparse to very crowded. Abnormal events are caused by either (i) the circulation of non-pedestrian entities in the walkways or (ii) anomalous pedestrian motion patterns.

Violent-Flows Dataset [107]: This is a dataset of real-world video footage of crowd violence, along with standard benchmark protocols designed to test both violent/non-violent classification and violence outbreak detection. All the videos were downloaded from YouTube, and the average length of a video clip is 3.60 seconds.

CUHK Dataset BIB002: This dataset is for research on activity or behavior analysis in crowded scenes. It includes two subsets: a traffic dataset (MIT traffic) and a pedestrian dataset. The traffic dataset includes a traffic video sequence 90 minutes in length; ground truth about pedestrians in some sampled frames is manually labeled. The pedestrian dataset was recorded in New York's Grand Central Station and contains a 30-minute video sequence, without any ground truth or labeled data.

QMUL Dataset [105]: This dataset has two subsets: the first contains three different dense traffic flow videos at crossroads, of nearly 60 minutes in length; the second contains a video of a shopping mall from a publicly accessible webcam. Over 60,000 pedestrians were labeled in 2000 frames, and the head position of every pedestrian is labeled, making this dataset convenient for crowd counting and profiling research.

PETS2009 Dataset [106]: This dataset contains multi-sensor sequences of different crowd activities. It is composed of five parts: (i) calibration data, (ii) training data, (iii) person count and density estimation data, (iv) people tracking data, and (v) flow analysis and event recognition data. Each subset contains several sequences, and each sequence contains different views (4 up to 8).

Rodriguez's Web-Collected Dataset: This dataset was collected by crawling and downloading videos from search engines and stock footage websites (e.g., Gettyimages and YouTube). In addition to the large collection of crowd videos, the dataset contains ground-truth trajectories for 100 individuals, which were selected randomly from the set of all moving people. This dataset is not open to the public yet.

UCF Crowd Behavior Dataset BIB003: This dataset is collected in a similar way to BIB001, and it is mainly designed for crowd behavior recognition, with ground-truth labels.
Crowded Scene Analysis: A Survey <s> VIII. CONCLUSIONS AND FUTURE DEVELOPMENTS <s> Learning the knowledge of scene structure and tracking a large number of targets are both active topics of computer vision in recent years, which play a crucial role in surveillance, activity analysis, object classification, etc. In this paper, we propose a novel system which simultaneously performs the Learning-Semantic-Scene and Tracking, and makes them supplement each other in one framework. The trajectories obtained by the tracking are utilized to continually learn and update the scene knowledge via online unsupervised learning. On the other hand, the learned knowledge of the scene in turn is utilized to supervise and improve the tracking results. Therefore, this "adaptive learning-tracking loop" can not only perform the robust tracking in high density crowd scenes, dynamically update the knowledge of scene structure and output semantic words, but also ensures that the entire process is completely automatic and online. We successfully applied the proposed system to the JR subway station of Tokyo, which can dynamically obtain the semantic scene structure and robustly track more than 150 targets at the same time. <s> BIB001 </s> Crowded Scene Analysis: A Survey <s> VIII. CONCLUSIONS AND FUTURE DEVELOPMENTS <s> Human group activities detection in multi-camera CCTV surveillance videos is a pressing demand on smart surveillance. Previous works on this topic are mainly based on camera topology inference that is hard to apply to real-world unconstrained surveillance videos. In this paper, we propose a new approach for multi-camera group activities detection. Our approach simultaneously exploits intra-camera and inter-camera contexts without topology inference. Specifically, a discriminative graphical model with hidden variables is developed. The intra-camera and inter-camera contexts are characterized by the structure of hidden variables. By automatically optimizing the structure, the contexts are effectively explored. Furthermore, we propose a new spatiotemporal feature, named vigilant area (VA), to characterize the quantity and appearance of the motion in an area. This feature is effective for group activity representation and is easy to extract from a dynamic and crowded scene. We evaluate the proposed VA feature and discriminative graphical model extensively on two real-world multi-camera surveillance video data sets, including a public corpus consisting of 2.5 h of videos and a 468-h video collection, which, to the best of our knowledge, is the largest video collection ever used in human activity detection. The experimental results demonstrate the effectiveness of our approach. <s> BIB002 </s> Crowded Scene Analysis: A Survey <s> VIII. CONCLUSIONS AND FUTURE DEVELOPMENTS <s> For reasons of public security, an intelligent surveillance system that can cover a large, crowded public area has become an urgent need. In this article, we propose a novel laser-based system that can simultaneously perform tracking, semantic scene learning, and abnormality detection in a fully online and unsupervised way. Furthermore, these three tasks cooperate with each other in one framework to improve their respective performances. The proposed system has the following key advantages over previous ones: (1) It can cover quite a large area (more than 60×35m), and simultaneously perform robust tracking, semantic scene learning, and abnormality detection in a high-density situation.
(2) The overall system can vary with time, incrementally learn the structure of the scene, and perform fully online abnormal activity detection and tracking. This feature makes our system suitable for real-time applications. (3) The surveillance tasks are carried out in a fully unsupervised manner, so that there is no need for manual labeling and the construction of huge training datasets. We successfully apply the proposed system to the JR subway station in Tokyo, and demonstrate that it can cover an area of 60×35m, robustly track more than 150 targets at the same time, and simultaneously perform online semantic scene learning and abnormality detection with no human intervention. <s> BIB003 </s> Crowded Scene Analysis: A Survey <s> VIII. CONCLUSIONS AND FUTURE DEVELOPMENTS <s> Cascaded classifiers have been widely used in pedestrian detection and achieved great success. These classifiers are trained sequentially without joint optimization. In this paper, we propose a new deep model that can jointly train multi-stage classifiers through several stages of backpropagation. It keeps the score map output by a classifier within a local region and uses it as contextual information to support the decision at the next stage. Through a specific design of the training strategy, this deep architecture is able to simulate the cascaded classifiers by mining hard samples to train the network stage-by-stage. Each classifier handles samples at a different difficulty level. Unsupervised pre-training and specifically designed stage-wise supervised training are used to regularize the optimization problem. Both theoretical analysis and experimental results show that the training strategy helps to avoid overfitting. Experimental results on three datasets (Caltech, ETH and TUD-Brussels) show that our approach outperforms the state-of-the-art approaches. <s> BIB004 </s> Crowded Scene Analysis: A Survey <s> VIII. CONCLUSIONS AND FUTURE DEVELOPMENTS <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB005
We have presented a review of the state-of-the-art techniques for crowded scene analysis across three key aspects: motion pattern segmentation, crowd behavior recognition and anomaly detection. These problems have become active research areas in recent decades because of their promising real-world applications. It can be seen that anomaly detection in crowded scenes has attracted a great deal of effort, which reflects its importance in applications. Alongside these subtopics, feature representation was treated as another important section deserving detailed description, since feature representation, as an indispensable basis, is highly correlated with each of the three subtopics. Furthermore, the available knowledge of crowds from areas such as crowd dynamics was summarized beforehand, as it can provide the fundamental crowd models for many scene analysis algorithms. Although a variety of representation approaches and models have been proposed, at the moment there is still no generally accepted solution to crowded scene analysis tasks. Looking back at the surveyed literature, we can see two common models that are promising for visually characterizing crowded scenes: the flow field model inspired by physics, and the generative topic model from machine learning. In the flow field model, the crowd is treated as a physical fluid of particles, and a high density crowd behaves like a complex dynamic system. Many dynamical crowd evolution models have been proposed, along with the concepts of motion field and dynamical potential borrowed from the fluid dynamics community. By treating the moving crowd as a time-dependent flow field which consists of regions with qualitatively different dynamics, the motion patterns emerging from the spatio-temporal interactions of the participants can be captured. For the topic model, the basic idea is that "a crowded scene with its various events is a document with a mixture of topics". It can be combined with different statistical assumptions. The topic model has the ability to automatically discover meaningful events or activities from the visual word co-occurrences. Moreover, it can flexibly utilize various features and other learning algorithms. Although a large amount of work has been done, many issues in crowded scene analysis are still open, and they deserve further research. In the following, we list some of the promising topics.

Multi-Sensor Information Fusion: Crowded scenes often contain severe clutter and object occlusions, which are quite challenging for current visual-based techniques. Fusing information from multiple sensors is always an effective way to reduce confusion and to improve accuracy. Visual surveillance of crowded scenes could greatly benefit from the use of multiple sensors, such as audio, radar and laser. Multi-camera contexts could also be explored, as revealed by a recent work on group activity analysis BIB002. By combining multi-sensor data, different forms of information can complement each other and help the system obtain an accurate and comprehensive understanding of the scene.

Tracking-Learning-Detection Framework: Many current video analysis systems perform tracking, learning and detection by simple integration, without considering the interactions between the functional modules. To fully utilize the hierarchical contextual information, it is better for crowded scene analysis systems to simultaneously perform tracking, model learning, and behavior detection in a fully online and unified way.
Some works on video surveillance have shown the advantages of such a unified framework BIB003, BIB001, and they deserve attention. The tracking module provides motion features, based on which crowd models are learned and behaviors or events can be detected. On the other hand, crowded scene knowledge can facilitate the accurate tracking of individuals; for example, a person in a particular crowd flow is always influenced by the global motion. Moreover, events or activities can be detected on the basis of the tracking results and the learned models, and they could in turn provide contextual knowledge for tracking and learning. A unified tracking-learning-detection framework can use all these contexts to improve its components simultaneously.

Deep Learning for Crowded Scene Analysis: Though various methods have been proposed for feature extraction and model learning in crowded scene analysis, there is still no publicly accepted crowded scene representation. In recent years, deep learning has achieved great success in several vision tasks related to visual surveillance and scene analysis BIB005, BIB004, and has demonstrated its ability to learn representations from multiple features. It could also be a promising solution for crowded scene analysis, given enough training data. How to design the framework and exploit the power of deep learning for the tasks of crowded scene analysis deserves our future efforts.

Real-Time Processing and Generalization: In an area driven by practical applications, real-time computation must be considered for the algorithms to work in real life. Current solutions usually target accurate scene understanding, without considering the computational cost. Furthermore, many studies in the literature typically evaluate on video data captured under specific conditions. Research on effective methods that can handle more general situations would also be valuable.
Supervised Classification: Quite a Brief Overview <s> The Bayes Classifier <s> A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms. <s> BIB001 </s> Supervised Classification: Quite a Brief Overview <s> The Bayes Classifier <s> The area under the ROC curve is widely used as a measure of performance of classification rules. However, it has recently been shown that the measure is fundamentally incoherent, in the sense that it treats the relative severities of misclassifications differently when different classifiers are used. To overcome this, Hand (2009) proposed the $H$ measure, which allows a given researcher to fix the distribution of relative severities to a classifier-independent setting on a given problem. This note extends the discussion, and proposes a modified standard distribution for the $H$ measure, which better matches the requirements of researchers, in particular those faced with heavily unbalanced datasets, the $Beta(\pi_1+1,\pi_0+1)$ distribution. [Preprint submitted at Pattern Recognition Letters] <s> BIB002
One should understand that if we know $p_{XY}$, we are done. In that case, we can construct an optimal classifier $C$ that attains the minimum risk $\varepsilon^*$. But how do we get to that classifier? Let $C^*$ refer to this optimal classifier, fix the feature vector that we want to label to $x$, and consider the corresponding term within the integral of Equation (1). As $C^*$ should assign $x$ to $+1$ or $-1$, we see that the choice that adds the least to the integral at this feature vector value is the assignment to that class for which $p_{XY}$ is largest. We reach an overall minimum if we stick to this optimal choice in every location. (Where $x$ is actually on the decision boundary, it does not matter what decision the classifier makes, as it will induce an equally large error.) In other words, we can define $C^* : \mathbb{R}^d \to \{-1, +1\}$ as follows:

$$C^*(x) = \operatorname*{argmax}_{y \in \{-1,+1\}} \, p_{XY}(x, y). \qquad (3)$$

Again using Iverson brackets, we could equally well write this as $C^*(x) = 2\,[\,p_{XY}(x,+1) > p_{XY}(x,-1)\,] - 1$. A possibly more instructive reformulation is obtained by considering the conditional probabilities $p_{Y|X}$, often referred to as the posterior probabilities or simply the posteriors, instead of the full probabilities. Equivalent to checking $p_{XY}(x, -1) > p_{XY}(x, +1)$, we can verify whether $p_{Y|X}(-1|x) > p_{Y|X}(+1|x)$ and, in the same vein as in Equation (3), decide to assign $x$ to $-1$ if this is indeed the case and assign it to $+1$ otherwise. The latter basically states that, given the observations made, one should assign the corresponding object to the class with the largest probability conditioned on those observations. Especially formulated like this, it seems like the obviously optimal assignment strategy. The theoretical constructs $\varepsilon^*$ and $C^*$ are referred to as the Bayes error rate and the Bayes classifier, respectively.

Expanding a bit further on footnote 4, we remark that one of the more important settings in which another performance measure, or another way of evaluating, may be appropriate is the case where the classification cost per class is different. Equation (1) tacitly assumes that predicting the wrong class incurs a cost of one, while predicting the right class comes at no cost. In many real-world settings, however, making the one error is not as costly as making the other. For instance, when building a rotten fruit detector, classifying a fresh piece of fruit as rotten could turn out less costly than classifying a bad piece of fruit as good. When building an actual classifier, life often is even worse, as one may not even know what the cost really is that will be incurred by a misclassification. This is one reason to resort to an analysis of the so-called receiver operating characteristic (ROC) curve and its related measure: the area under the ROC curve (AUC, an abbreviation mentioned already in the previous footnote). This curve and the related area provide, in some sense, tools to study the behavior of classifiers over all possible misclassification costs simultaneously. Another important classification setting is the one in which there is a strong imbalance in class sizes, e.g. where we expect the one class to occur significantly more often than the other class, a situation easily imagined in various applications. Also here, analyses through ROC, AUC, and related techniques are advisable. For more on this topic, the reader is kindly referred to BIB002 and related work BIB001. Of course, if we wish, we generally can decide otherwise on a set of measure 0 without doing any harm to the optimality of the classifier $C^*$.
The former gives a lower bound on the best error we could ever achieve on the problem at hand. The latter shows us how to make optimal decisions once $p_{XY}$ is known. But these quantities are merely of theoretical importance indeed. In reality, our only hope is to approximate them, as the exact $p_{XY}$ will never be available to us. The objects we can work with are the $N$ draws $(x_i, y_i)$ from that same distribution. Based on these examples, we aim to build a classifier that generalizes well to all of $p_{XY}$. In all that follows in this chapter, this is the setting considered.
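To make the Bayes decision rule tangible before moving on, here is a small self-contained Python sketch for a toy problem in which $p_{XY}$ is fully known. The equal priors and the two unit-variance Gaussian class-conditionals are arbitrary illustrative choices, not anything prescribed by the text.

```python
import numpy as np

def gauss_pdf(x, mean):
    """Density of a spherical unit-variance Gaussian (illustrative choice)."""
    diff = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.exp(-0.5 * diff @ diff) / (2 * np.pi) ** (len(diff) / 2))

# fully known toy joint model p_XY: class priors times class-conditionals
priors = {-1: 0.5, +1: 0.5}
means = {-1: [0.0, 0.0], +1: [2.0, 2.0]}

def bayes_classify(x):
    """Assign x to the class y with the largest joint probability
    p_XY(x, y) = p(y) * p(x | y), i.e. the Bayes classifier C*."""
    joint = {y: priors[y] * gauss_pdf(x, means[y]) for y in (-1, +1)}
    return max(joint, key=joint.get)

print(bayes_classify([0.2, -0.1]))  # -> -1
print(bayes_classify([1.9, 2.3]))   # -> +1
```

In practice, as just noted, $p_{XY}$ is unknown, and such a rule can only be approximated from the $N$ training pairs.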
Supervised Classification: Quite a Brief Overview <s> Discriminative Probabilistic Classifiers <s> A major goal of research on networks of neuron-like processing units is to discover efficient learning procedures that allow these networks to construct complex internal representations of their environment. The learning procedures must be capable of modifying the connection strengths in such a way that internal units which are not part of the input or output come to represent important features of the task domain. Several interesting gradient-descent procedures have recently been discovered. Each connection computes the derivative, with respect to the connection strength, of a global measure of the error in the performance of the network. The strength is then adjusted in the direction that decreases the error. These relatively simple, gradient-descent learning procedures work well for small tasks and the new challenge is to find ways of improving their convergence rate and their generalization abilities so that they can be applied to larger, more realistic tasks. <s> BIB001 </s> Supervised Classification: Quite a Brief Overview <s> Discriminative Probabilistic Classifiers <s> The goal of pattern classification can be approached from two points of view: informative - where the classifier learns the class densities, or discriminative - where the focus is on learning the class boundaries without regard to the underlying class densities. We review and synthesize the tradeoffs between these two approaches for simple classifiers, and extend the results to modern techniques such as Naive Bayes and Generalized Additive Models. Data mining applications often operate in the domain of high dimensional features where the tradeoffs between informative and discriminative classifiers are especially relevant. Experimental results are provided for simulated and real data. <s> BIB002 </s> Supervised Classification: Quite a Brief Overview <s> Discriminative Probabilistic Classifiers <s> Motivation: Microarray classification typically possesses two striking attributes: (1) classifier design and error estimation are based on remarkably small samples and (2) cross-validation error estimation is employed in the majority of the papers. Thus, it is necessary to have a quantifiable understanding of the behavior of cross-validation in the context of very small samples. ::: ::: Results: An extensive simulation study has been performed comparing cross-validation, resubstitution and bootstrap estimation for three popular classification rules---linear discriminant analysis, 3-nearest-neighbor and decision trees (CART)---using both synthetic and real breast-cancer patient data. Comparison is via the distribution of differences between the estimated and true errors. Various statistics for the deviation distribution have been computed: mean (for estimator bias), variance (for estimator precision), root-mean square error (for composition of bias and variance) and quartile ranges, including outlier behavior. In general, while cross-validation error estimation is much less biased than resubstitution, it displays excessive variance, which makes individual estimates unreliable for small samples. Bootstrap methods provide improved performance relative to variance, but at a high computational cost and often with increased bias (albeit, much less than with resubstitution). ::: ::: Availability and Supplementary information: A companion web site can be accessed at the URL http://ee.tamu.edu/~edward/cv_paper. 
The companion web site contains: (1) the complete set of tables and plots regarding the simulation study; (2) additional figures; (3) a compilation of references for microarray classification studies and (4) the source code used, with full documentation and examples. <s> BIB003 </s> Supervised Classification: Quite a Brief Overview <s> Discriminative Probabilistic Classifiers <s> Suppose you are given a dataset of pairs (x, c) where c is a class variable and x is a vector of features. Given a new x, you want to predict its class. The generative i.i.d. approach to this problem posits a model family p(x, c | θ) = p(x | c, λ)p(c | π) (1) and chooses the best parameters θ = {λ, π} by maximizing (or integrating over) the joint distribution (where D denotes the data): p(D, θ) = p(θ) ∏ <s> BIB004 </s> Supervised Classification: Quite a Brief Overview <s> Discriminative Probabilistic Classifiers <s> We give a basic introduction to Gaussian Process regression models. We focus on understanding the role of the stochastic process and how it is used to define a distribution over functions. We present the simple equations for incorporating training data and examine how to learn the hyperparameters using the marginal likelihood. We explain the practical advantages of Gaussian Process and end with conclusions and a look at the current trends in GP work. <s> BIB005
In the previous subsection, we decided to model $p_{XY}$, based on which we can then come to a decision on whether to assign $o_i$ to the $+$ or the $-$ class considering its corresponding feature vector $x_i$. Subsection 2.2, however, showed that we might as well use a model of $p_{Y|X}$ to reach a decision. Of course, from the full model $p_{XY}$, we can get to the conditional $p_{Y|X}$, while going in the other direction is not possible. For classification, however, we merely need to know $p_{Y|X}$, and so we can save ourselves the trouble of building a full model. In fact, if we are unsure about the true form of the underlying class-conditionals or the marginal $p_X$ that describes the feature vector distribution, directly modeling $p_{Y|X}$ may be wise, as we can avoid potential problems due to such model misspecification. On the other hand, if the full model is accurate enough, this may have a positive effect on the classifier's performance BIB002. Approaches that directly model $p_{Y|X}$ are called discriminative, as they aim to get straightaway to the information that matters to tell the one class apart from the other. The classical model in this setting, and in a sense a counterpart of LDA, is called logistic regression BIB003. One way to get to this model is to assume that the logarithm of the so-called posterior odds ratio takes on a linear form in $x$, i.e.,

$$\log \frac{p_{Y|X}(+1|x)}{p_{Y|X}(-1|x)} = w^\top x + w_\circ,$$

with $w \in \mathbb{R}^d$ and $w_\circ \in \mathbb{R}$. From this we derive that the posterior for the positive class takes on the following form:

$$p_{Y|X}(+1|x) = \frac{1}{1 + \exp(-(w^\top x + w_\circ))}.$$

The parameters $w$ and $w_\circ$ again are typically estimated by maximizing the log-likelihood. Formally, we have to consider the likelihood of the full model and not only of its posterior, but the choice of the necessary additional marginal model for $p_X$ is of no influence on the optimum of the parameters we are interested in BIB004, and so we may just consider

$$(\hat{w}, \hat{w}_\circ) = \operatorname*{argmax}_{w, w_\circ} \sum_{i=1}^{N} \log p_{Y|X}(y_i | x_i).$$

Note that, like LDA, this classifier is linear as well. Generally, the decision boundary is located at the $x$ for which $p_{XY}(x, +1) = p_{XY}(x, -1)$ or, similarly, for which $p_{Y|X}(+1|x) = p_{Y|X}(-1|x)$. But the latter case implies that the log-odds equals 0, and so the decision boundary takes on the form $\hat{w}^\top x + \hat{w}_\circ = 0$. As for generative models, discriminative probabilistic ones come in all different kinds of flavors. A particularly popular and fairly general form of turning linear classifiers into nonlinear ones is discussed in Subsection 3.1. These and more variations can be found, among others, in BIB001 BIB005.
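As an illustration of the above, the following minimal NumPy sketch fits logistic regression by gradient ascent on the conditional log-likelihood. The learning rate, iteration count, and toy data are arbitrary choices, and a production implementation would rather use a second-order or library solver.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Maximize the conditional log-likelihood sum_i log p(y_i | x_i)
    for labels y_i in {-1, +1} by plain gradient ascent on (w, w0)."""
    N, d = X.shape
    w, w0 = np.zeros(d), 0.0
    for _ in range(steps):
        margins = y * (X @ w + w0)   # y_i (w . x_i + w0)
        g = y * sigmoid(-margins)    # derivative of log sigmoid(margin)
        w += lr * (X.T @ g) / N
        w0 += lr * g.mean()
    return w, w0

def predict(X, w, w0):
    return np.where(X @ w + w0 > 0, 1, -1)   # sign of estimated log-odds

# toy usage on two Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, w0 = fit_logistic(X, y)
print(np.mean(predict(X, w, w0) == y))       # training accuracy
```

Note that the prediction step only evaluates the sign of the estimated log-odds, which is exactly the linear decision boundary derived above.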
Supervised Classification: Quite a Brief Overview <s> 0-1 Loss <s> It is well known that (McCulloch-Pitts) neurons are efficiently trainable to learn an unknown halfspace from examples, using linear-programming methods. We want to analyze how the learning performance degrades when the representational power of the neuron is overstrained, i.e., if more complex concepts than just halfspaces are allowed. We show that the problem of learning a probably almost optimal weight vector for a neuron is so difficult that the minimum error cannot even be approximated to within a constant factor in polynomial time (unless RP = NP); we obtain the same hardness result for several variants of this problem. We considerably strengthen these negative results for neurons with binary weights 0 or 1. We also show that neither heuristical learning nor learning by sigmoidal neurons with a constant reject rate is efficiently possible (unless RP = NP). <s> BIB001 </s> Supervised Classification: Quite a Brief Overview <s> 0-1 Loss <s> We address the computational complexity of learning in the agnostic framework. For a variety of common concept classes we prove that, unless P = NP, there is no polynomial time approximation scheme for finding a member in the class that approximately maximizes the agreement with a given training sample. In particular our results apply to the classes of monomials, axis-aligned hyper-rectangles, closed balls and monotone monomials. For each of these classes, we prove the NP-hardness of approximating maximal agreement to within some fixed constant (independent of the sample size and of the dimensionality of the sample space). For the class of half-spaces, we prove that, for any e > 0, it is NP-hard to approximately maximize agreements to within a factor of (418/415 - e), improving on the best previously known constant for this problem, and using a simpler proof. An interesting feature of our proofs is that, for each of the classes we discuss, we find patterns of training examples that, while being hard for approximating agreement within that concept class, allow efficient agreement maximization within other concept classes. These results bring up a new aspect of the model selection problem--they imply that the choice of hypothesis class for agnostic learning from among those considered in this paper can drastically effect the computational complexity of the learning process. <s> BIB002
In a way, the obvious choice is to take the loss that we are actually interested in: the fraction of misclassified observations. Equation (1) defines this fraction, i.e., the classification error or the 0-1 risk, under the true distribution. Considering our finite number $N$ of training data, the best we can do is just count the number of incorrectly assigned samples:

$$\hat{\varepsilon}(h) = \frac{1}{N} \sum_{i=1}^{N} \ell_{0\text{-}1}(h(x_i), y_i),$$

with $\ell_{0\text{-}1}(a, b) = [\,b \operatorname{sign}(a) \neq 1\,]$ being the 0-1 loss. A major problem with this loss is that finding the optimal $h^*$ is in many cases computationally very hard. Take for $H$, for example, all linear functions; then finding our $h^*$ turns out to be NP-hard, and even settling for approximate solutions does not necessarily help BIB002 BIB001.
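For concreteness, a tiny sketch of the empirical 0-1 risk, i.e. the fraction of misclassified training samples, is given below. The linear hypothesis and the toy data are made up for illustration, and note that the hardness results above concern minimizing this count over a hypothesis class, not merely evaluating it.

```python
import numpy as np

def empirical_01_risk(h, X, y):
    """Empirical 0-1 risk: fraction of the N samples that hypothesis h
    assigns to the wrong class (ties at h(x) = 0 count as errors here)."""
    return float(np.mean(np.sign(h(X)) != y))

# a toy linear hypothesis on 2D inputs (weights chosen arbitrarily);
# minimizing this count over all linear h is the NP-hard problem above
h = lambda X: X @ np.array([1.0, -1.0]) + 0.5
X = np.array([[0.0, 1.0], [2.0, 0.0], [-1.0, -1.0]])
y = np.array([-1, 1, -1])
print(empirical_01_risk(h, X, y))  # -> 0.333..., one sample misclassified
```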
Supervised Classification: Quite a Brief Overview <s> Particular Surrogate Losses <s> A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms. <s> BIB001 </s> Supervised Classification: Quite a Brief Overview <s> Particular Surrogate Losses <s> A comprehensive look at learning and generalization theory. The statistical theory of learning and generalization concerns the problem of choosing desired functions on the basis of empirical data. Highly applicable to a variety of computer science and robotics fields, this book offers lucid coverage of the theory as a whole. Presenting a method for determining the necessary and sufficient conditions for consistency of learning process, the author covers function estimates from small data pools, applying these estimations to real-life problems, and much more. <s> BIB002 </s> Supervised Classification: Quite a Brief Overview <s> Particular Surrogate Losses <s> The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition. <s> BIB003 </s> Supervised Classification: Quite a Brief Overview <s> Particular Surrogate Losses <s> In this letter we discuss a least squares version for support vector machine (SVM) classifiers. Due to equality type constraints in the formulation, the solution follows from solving a set of linear equations, instead of quadratic programming for classical SVMs. The approach is illustrated on a two-spiral benchmark classification problem. <s> BIB004 </s> Supervised Classification: Quite a Brief Overview <s> Particular Surrogate Losses <s> Learning is key to developing systems tailored to a broad range of data analysis and information extraction tasks. We outline the mathematical foundations of learning theory and describe a key algorithm of it. <s> BIB005
As we would decide on the label of a sample $x$ based on the output $h(x)$ that a trained classifier $h \in H$ provides, one typically only needs to consider what the loss function does for the value $y h(x)$. For instance, we can rewrite $\ell_{0\text{-}1}(a, b)$ as $\ell_{0\text{-}1}(a, b) = \ell_{0\text{-}1}(ba) = [\,b \operatorname{sign}(a) \neq 1\,]$ and achieve the same loss. In Figure 1 the shape of the 0-1 loss is plotted in these terms. The same figure shows various widely-used upper bounds for $\ell_{0\text{-}1}$. Maybe the first one to note is the logistic loss, which is defined as

$$\ell_{\text{logistic}}(a, b) = \log_2(1 + \exp(-ba)).$$

The figure displays it as the solid light gray curve. Using this loss, in combination with a linear hypothesis class, leads to standard logistic regression as introduced in Subsection 2.4. So in this case, we have both a probabilistic view of the resulting classifier as well as an interpretation of logistic regression as the minimizer of a specific surrogate loss. A second well-known classifier, or at least a basic form of it, is obtained by using the so-called hinge loss:

$$\ell_{\text{hinge}}(a, b) = \max(1 - ba, 0).$$

This loss is at the basis of the support vector machine (SVM) BIB001 BIB003 BIB002. A third classifier that fits the general formalism and is widely employed is obtained by using the squared loss function

$$\ell_{\text{squared}}(a, b) = (1 - ba)^2.$$

Using again the set of linear hypotheses, we get basically what is, among others, referred to as the linear regression classifier, the least squares classifier, the least squares support vector machine, the Fisher classifier, or Fisher's linear discriminant BIB005 BIB004. Indeed, this classifier is a reinterpretation of the classical decision function introduced by Fisher in the language of losses and hypotheses. Finally, other losses one may encounter in the literature are the exponential loss $\exp(-ba)$, the truncated squared loss $\max(1 - ba, 0)^2$, and the absolute loss $|1 - ba|$. In Subsection 3.1, we introduce ways of designing nonlinear classifiers, which often rely on the same formalism as presented in this subsection.
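The short sketch below tabulates these surrogate losses as functions of the margin value $m = yh(x)$, which makes it easy to check numerically that each of them upper-bounds the 0-1 loss; the base-2 logarithm for the logistic loss is assumed here precisely so that this bound holds at $m = 0$.

```python
import numpy as np

# common surrogate losses written as functions of the margin m = y * h(x);
# each upper-bounds the 0-1 loss (base-2 log assumed for the logistic loss)
losses = {
    "zero-one":    lambda m: (m <= 0).astype(float),
    "logistic":    lambda m: np.log2(1.0 + np.exp(-m)),
    "hinge":       lambda m: np.maximum(0.0, 1.0 - m),
    "squared":     lambda m: (1.0 - m) ** 2,
    "exponential": lambda m: np.exp(-m),
}

m = np.linspace(-2.0, 2.0, 9)
for name, loss in losses.items():
    print(f"{name:12s}", np.round(loss(m), 2))
```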
Supervised Classification: Quite a Brief Overview <s> Neural Networks <s> A major goal of research on networks of neuron-like processing units is to discover efficient learning procedures that allow these networks to construct complex internal representations of their environment. The learning procedures must be capable of modifying the connection strengths in such a way that internal units which are not part of the input or output come to represent important features of the task domain. Several interesting gradient-descent procedures have recently been discovered. Each connection computes the derivative, with respect to the connection strength, of a global measure of the error in the performance of the network. The strength is then adjusted in the direction that decreases the error. These relatively simple, gradient-descent learning procedures work well for small tasks and the new challenge is to find ways of improving their convergence rate and their generalization abilities so that they can be applied to larger, more realistic tasks. <s> BIB001 </s> Supervised Classification: Quite a Brief Overview <s> Neural Networks <s> The premise of this article is that learning procedures used to train artificial neural networks are inherently statistical techniques. It follows that statistical theory can provide considerable insight into the properties, advantages, and disadvantages of different network learning methods. We review concepts and analytical results from the literatures of mathematical statistics, econometrics, systems identification, and optimization theory relevant to the analysis of learning in artificial neural networks. Because of the considerable variety of available learning procedures and necessary limitations of space, we cannot provide a comprehensive treatment. Our focus is primarily on learning procedures for feedforward networks. However, many of the concepts and issues arising in this framework are also quite broadly relevant to other network learning paradigms. In addition to providing useful insights, the material reviewed here suggests some potentially useful new training methods for artificial neural ne... <s> BIB002 </s> Supervised Classification: Quite a Brief Overview <s> Neural Networks <s> Preface * Introduction * The Bayes Error * Inequalities and alternate distance measures * Linear discrimination * Nearest neighbor rules * Consistency * Slow rates of convergence * Error estimation * The regular histogram rule * Kernel rules * Consistency of the k-nearest neighbor rule * Vapnik-Chervonenkis theory * Combinatorial aspects of Vapnik-Chervonenkis theory * Lower bounds for empirical classifier selection * The maximum likelihood principle * Parametric classification * Generalized linear discrimination * Complexity regularization * Condensed and edited nearest neighbor rules * Tree classifiers * Data-dependent partitioning * Splitting the data * The resubstitution estimate * Deleted estimates of the error probability * Automatic kernel rules * Automatic nearest neighbor rules * Hypercubes and discrete spaces * Epsilon entropy and totally bounded sets * Uniform laws of large numbers * Neural networks * Other error estimates * Feature extraction * Appendix * Notation * References * Index <s> BIB003 </s> Supervised Classification: Quite a Brief Overview <s> Neural Networks <s> A comprehensive look at learning and generalization theory. The statistical theory of learning and generalization concerns the problem of choosing desired functions on the basis of empirical data.
Highly applicable to a variety of computer science and robotics fields, this book offers lucid coverage of the theory as a whole. Presenting a method for determining the necessary and sufficient conditions for consistency of learning process, the author covers function estimates from small data pools, applying these estimations to real-life problems, and much more. <s> BIB004 </s> Supervised Classification: Quite a Brief Overview <s> Neural Networks <s> Restricted Boltzmann machines were developed using binary stochastic hidden units. These can be generalized by replacing each binary unit by an infinite number of copies that all have the same weights but have progressively more negative biases. The learning and inference rules for these "Stepped Sigmoid Units" are unchanged. They can be approximated efficiently by noisy, rectified linear units. Compared with binary units, these units learn features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset. Unlike binary units, rectified linear units preserve information about relative intensities as information travels through multiple layers of feature detectors. <s> BIB005 </s> Supervised Classification: Quite a Brief Overview <s> Neural Networks <s> In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks. <s> BIB006
The use of artificial neural networks for supervised learning can be traced back at least to 1958. In that year, the perceptron was introduced, providing a linear classifier that could be trained using a basic iterative updating scheme for its parameters. Currently, neural networks are the dominant technique in many applications and application areas where massively sized data sets to train from are available. Even though the original formulation of the perceptron does not come with a direct interpretation as the minimizer of a particular loss over its hypothesis space of linear classifiers, such formulations are possible BIB003 . Neural networks that are employed nowadays readily follow the earlier sketched loss-hypothesis space paradigm. Possibly the most characteristic feature of neural networks is that the hypotheses considered are built up of relatively simple computational units called neurons or nodes. Such a unit is a function g : R^q → R that takes in q inputs and maps these to a single numerical output. Typically, g takes on the form of a linear mapping followed by a nonlinear transfer or activation function σ : R → R, i.e., g(x) = σ(w^⊤x + b), with w the weight vector and b the bias of the node. Often σ is taken to be of sigmoidal shape, like the logistic function in Equation (13), i.e., a smooth threshold function. (As an aside on the SVM mentioned before: together with the underlying theory, SVMs caused all the furore in the late 1990s and early 2000s. To many, the development of the SVM may still be one of the prime achievements of the mathematical field of statistical learning theory that started with Vapnik and Chervonenkis in the early 1970s. At least, SVMs are still among the most widely known and used classifiers within the fields of pattern recognition and machine learning. Possibly one of the main feats of statistical learning theory was that it broke with the statistical tradition of merely studying the asymptotic behavior of estimators. Statistical learning theory is also concerned with the finite sample setting and makes, for instance, statements on the expected performance on unseen data for classifiers that have been trained on a limited set of examples BIB003 BIB004 .) Other choices of σ are possible however. A choice popularized more recently, with a clearly different characteristic, is the so-called rectified linear unit, which is defined as σ(x) = max(0, x) BIB005 . As for various of the previously mentioned classifiers, the free parameters are tuned by measuring how well g fits the given training data. A widely used choice is the squared loss, but likelihood-based methods have been considered as well and links with probabilistic models have been studied BIB001 . One should realize that, whatever the choice of activation function, as long as it is monotonic, using g for classification will lead to a linear classifier. Nonlinear classifiers are constructed by combining various gs, both in parallel and in sequence. In this way, one can build arbitrarily large networks that can perform arbitrarily complex input-output mappings. This means that we are dealing with large and diverse hypothesis classes H. The general construction is that multiple nodes, connected in parallel, provide the inputs to subsequent nodes. Consider, for instance, the nonlinear extension where, instead of a single node g, as a first step, we have multiple nodes g_1, . . . , g_D that all receive the same feature vector x as input. In a second step, these D outputs are collected by yet another node, g : R^D → R, and transformed in a similar kind of way.
So, all in all, we get a function G of the form G(x) = g(g_1(x), . . . , g_D(x)). To fully specify a particular G, one needs to set all the parameters in all D + 1 nodes. Once these are set, we can again use it to classify any x to the sign of G(x). Of course, one does not have to stop at two steps. The network can have an arbitrary number of steps, or layers as they are typically referred to. Nowadays, so-called deep networks are being employed, with hundreds of layers and millions of parameters. In addition to this, there are generally many different variations to the basic scheme we have sketched here BIB006 . By making different choices for the transfer function, by using multiple transfer functions, by changing the structure of the network, the number of nodes per layer, etc., one basically changes the hypothesis class that is considered. In addition, where in Subsection 2.5 the choice of H and ℓ would typically be such that we end up with a convex optimization problem, using neural networks, we typically move away from optimization problems for which one can reasonably expect to find the global optimum. As a result, to fully define the classifier, we should not only specify the loss and the hypothesis class, but also the exact optimization procedure that is employed. There are many possible choices to carry out the optimization, but most approaches rely on gradient descent or variations on this basic scheme BIB001 BIB002 .
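A minimal numerical sketch of this two-step construction: D logistic nodes g_1, . . . , g_D in parallel, followed by a linear output node, trained with plain gradient descent on the squared loss. All sizes, the data, and the learning rate are illustrative assumptions, not choices made in the text.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigma(a):                    # logistic activation
        return 1.0 / (1.0 + np.exp(-a))

    d, D, N = 2, 8, 200              # input dim, parallel nodes, samples
    X = rng.normal(size=(N, d))
    y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)  # a nonlinear (XOR-like) target

    W1 = rng.normal(scale=0.5, size=(d, D)); b1 = np.zeros(D)
    w2 = rng.normal(scale=0.5, size=D);      b2 = 0.0

    eta = 0.5
    for _ in range(2000):
        H = sigma(X @ W1 + b1)       # outputs of g_1, ..., g_D
        G = H @ w2 + b2              # output node (kept linear for simplicity)
        err = G - y                  # derivative of the squared loss 0.5 * err**2
        grad_w2 = H.T @ err / N
        grad_b2 = err.mean()
        dH = np.outer(err, w2) * H * (1 - H)  # backpropagate through sigma
        grad_W1 = X.T @ dH / N
        grad_b1 = dH.mean(axis=0)
        W1 -= eta * grad_W1; b1 -= eta * grad_b1
        w2 -= eta * grad_w2; b2 -= eta * grad_b2

    pred = np.sign(sigma(X @ W1 + b1) @ w2 + b2)
    print("training accuracy:", (pred == y).mean())

The gradient computation in the loop is exactly the backpropagation step mentioned in the cited literature, written out for this one-hidden-layer case.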
Supervised Classification: Quite a Brief Overview <s> k Nearest Neighbors <s> The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points. This rule is independent of the underlying joint distribution on the sample points and their classifications, and hence the probability of error R of such a rule must be at least as great as the Bayes probability of error R*, the minimum probability of error over all decision rules taking underlying probability structure into account. However, in a large sample analysis, we will show in the M-category case that R* ≤ R ≤ R*(2 − MR*/(M − 1)), where these bounds are the tightest possible, for all suitably smooth underlying distributions. Thus for any number of categories, the probability of error of the nearest neighbor rule is bounded above by twice the Bayes probability of error. In this sense, it may be said that half the classification information in an infinite sample set is contained in the nearest neighbor. <s> BIB001 </s> Supervised Classification: Quite a Brief Overview <s> k Nearest Neighbors <s> Abstract: The discrimination problem (two population case) may be defined as follows: the random variable Z, of observed value z, is distributed over some space (say, p-dimensional) either according to distribution F, or according to distribution G. The problem is to decide, on the basis of z, which of the two distributions Z has. <s> BIB002 </s> Supervised Classification: Quite a Brief Overview <s> k Nearest Neighbors <s> Preface * Introduction * The Bayes Error * Inequalities and alternate distance measures * Linear discrimination * Nearest neighbor rules * Consistency * Slow rates of convergence * Error estimation * The regular histogram rule * Kernel rules * Consistency of the k-nearest neighbor rule * Vapnik-Chervonenkis theory * Combinatorial aspects of Vapnik-Chervonenkis theory * Lower bounds for empirical classifier selection * The maximum likelihood principle * Parametric classification * Generalized linear discrimination * Complexity regularization * Condensed and edited nearest neighbor rules * Tree classifiers * Data-dependent partitioning * Splitting the data * The resubstitution estimate * Deleted estimates of the error probability * Automatic kernel rules * Automatic nearest neighbor rules * Hypercubes and discrete spaces * Epsilon entropy and totally bounded sets * Uniform laws of large numbers * Neural networks * Other error estimates * Feature extraction * Appendix * Notation * References * Index <s> BIB003
The nearest neighbor rule BIB001 BIB002 is maybe the classifier with the most intuitive appeal. It is a widely used and classical decision rule and one of the earliest nonparametric classifiers proposed. In order to classify a new and unseen object, one simply determines the distances between its describing feature vector and the feature vectors in the training set and assigns the object to the same class as the closest feature vector in that training set. Most often, the Euclidean distance is used to determine the nearest neighbor in the training data set, but in principle any other, possibly even expert-designed or learned, distance measure can be employed. A direct, yet worthwhile extension is to not only consider the closest sample in the training set, i.e., the first nearest neighbor, but to consider the closest k and assign any unseen object to the class that occurs most often among these k nearest neighbors. The k nearest neighbor classifier has various nice and interesting properties BIB001 BIB003 BIB002 . One of the more interesting ones may be the result that roughly states that, with increasing amounts of training data, the k nearest neighbor classifier converges to the Bayes classifier C*, given that k increases at the appropriate rate.
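A minimal sketch of the k nearest neighbor rule with Euclidean distances; the toy data and the choice k = 3 are illustrative.

    import numpy as np

    def knn_classify(X_train, y_train, x, k=3):
        dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distance to all training points
        nearest = np.argsort(dists)[:k]              # indices of the k closest samples
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        return labels[np.argmax(counts)]             # majority vote among the k neighbors

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
    y = np.array([-1] * 20 + [1] * 20)
    print(knn_classify(X, y, np.array([2.5, 2.5]), k=3))

Note that nothing is trained here: all computation is deferred to classification time, which is why the rule is sometimes called a lazy learner.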
Supervised Classification: Quite a Brief Overview <s> Multiple Classifier Systems <s> Decision trees are attractive classifiers due to their high execution speed. But trees derived with traditional methods often cannot be grown to arbitrary complexity for possible loss of generalization accuracy on unseen data. The limitation on complexity usually means suboptimal accuracy on training data. Following the principles of stochastic modeling, we propose a method to construct tree-based classifiers whose capacity can be arbitrarily expanded for increases in accuracy for both training and unseen data. The essence of the method is to build multiple trees in randomly selected subspaces of the feature space. Trees in, different subspaces generalize their classification in complementary ways, and their combined classification can be monotonically improved. The validity of the method is demonstrated through experiments on the recognition of handwritten digits. <s> BIB001 </s> Supervised Classification: Quite a Brief Overview <s> Multiple Classifier Systems <s> Much of previous attention on decision trees focuses on the splitting criteria and optimization of tree sizes. The dilemma between overfitting and achieving maximum accuracy is seldom resolved. A method to construct a decision tree based classifier is proposed that maintains highest accuracy on training data and improves on generalization accuracy as it grows in complexity. The classifier consists of multiple trees constructed systematically by pseudorandomly selecting subsets of components of the feature vector, that is, trees constructed in randomly chosen subspaces. The subspace method is compared to single-tree classifiers and other forest construction methods by experiments on publicly available datasets, where the method's superiority is demonstrated. We also discuss independence between trees in a forest and relate that to the combined classification accuracy. <s> BIB002 </s> Supervised Classification: Quite a Brief Overview <s> Multiple Classifier Systems <s> When more than a single classifier has been trained for the same recognition problem the question arises how this set of classifiers may be combined into a final decision rule. Several fixed combining rules are used that depend on the output values of the base classifiers only. They are almost always suboptimal. Usually, however, training sets are available. They may be used to calibrate the base classifier outputs, as well as to build a trained combining classifier using these outputs as inputs. It depends on various circumstances whether this is useful, in particular whether the training set is used for the base classifiers as well and whether they are overtrained. We present an intuitive discussion on the use of trained combiners, relating the question of the choice of the combining classifier to a similar choice in the area of dissimilarity based pattern recognition. Some simple examples are used to illustrate the discussion. <s> BIB003 </s> Supervised Classification: Quite a Brief Overview <s> Multiple Classifier Systems <s> This paper deals with the concept of information unification and its application to the contextual pattern recognition task. The concept of the recognition and the rule-based algorithm with learning, based on the probabilistic model is presented. The machine learning algorithm based on statistical tests for the recognition of controlled Markov chains is shown. 
Idea of information unification via transforming the expert rules into the learning set is derived. <s> BIB004 </s> Supervised Classification: Quite a Brief Overview <s> Multiple Classifier Systems <s> Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy. The vital element is the instability of the prediction method. If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy. <s> BIB005 </s> Supervised Classification: Quite a Brief Overview <s> Multiple Classifier Systems <s> We describe a new sequential learning scheme called "stacked sequential learning". Stacked sequential learning is a meta-learning algorithm, in which an arbitrary base learner is augmented so as to make it aware of the labels of nearby examples. We evaluate the method on several "sequential partitioning problems", which are characterized by long runs of identical labels. We demonstrate that on these problems, sequential stacking consistently improves the performance of nonsequential base learners; that sequential stacking often improves performance of learners (such as CRFs) that are designed specifically for sequential tasks; and that a sequentially stacked maximum-entropy learner generally outperforms CRFs. <s> BIB006 </s> Supervised Classification: Quite a Brief Overview <s> Multiple Classifier Systems <s> This book, which is wholly devoted to the subject of model combination, is divided into ten chapters. In addition to the first two introductory chapters, the book covers some of the following topics: multiple classifier systems; combination methods when the base classifier outputs are 0/1; methods when the outputs are continuous, e.g., posterior probabilities; methods for classifier selection; bagging and boosting; the theory of fixed combination rules; and the concept of diversity. Overall, it is a very well-written monograph. It explains and analyzes different approaches comparatively so that the reader can see how they are similar and how they differ. The literature survey is extensive. The MATLAB code for many methods is given in chapter appendices allowing readers to play with the explained methods or apply them quickly to their own data. The book is a must-read for researchers and practitioners alike. <s> BIB007 </s> Supervised Classification: Quite a Brief Overview <s> Multiple Classifier Systems <s> We analyze the application of ensemble learning to recommender systems on the Netflix Prize dataset. For our analysis we use a set of diverse state-of-the-art collaborative filtering (CF) algorithms, which include: SVD, Neighborhood Based Approaches, Restricted Boltzmann Machine, Asymmetric Factor Model and Global Effects. We show that linearly combining (blending) a set of CF algorithms increases the accuracy and outperforms any single CF algorithm. Furthermore, we show how to use ensemble methods for blending predictors in order to outperform a single blending algorithm. 
The dataset and the source code for the ensemble blending are available online. <s> BIB008 </s> Supervised Classification: Quite a Brief Overview <s> Multiple Classifier Systems <s> Computer-aided detection (CAD) is increasingly used in clinical practice and for many applications a multitude of CAD systems have been developed. In practice, CAD systems have different strengths and weaknesses and it is therefore interesting to consider their combination. In this paper, we present generic methods to combine multiple CAD systems and investigate what kind of performance increase can be expected. Experimental results are presented using data from the ANODE09 and ROC09 online CAD challenges for the detection of pulmonary nodules in computed tomography scans and red lesions in retinal images, respectively. For both applications, combination results in a large and significant increase in performance when compared to the best individual CAD system. <s> BIB009 </s> Supervised Classification: Quite a Brief Overview <s> Multiple Classifier Systems <s> Occlusion boundaries contain rich perceptual information about the underlying scene structure. They also provide important cues in many visual perception tasks such as scene understanding, object recognition, and segmentation. In this paper, we improve occlusion boundary detection via enhanced exploration of contextual information (e.g., local structural boundary patterns, observations from surrounding regions, and temporal context), and in doing so develop a novel approach based on convolutional neural networks (CNNs) and conditional random fields (CRFs). Experimental results demonstrate that our detector significantly outperforms the state-of-the-art (e.g., improving the F-measure from 0.62 to 0.71 on the commonly used CMU benchmark). Last but not least, we empirically assess the roles of several important components of the proposed detector, so as to validate the rationale behind this approach. <s> BIB010 </s> Supervised Classification: Quite a Brief Overview <s> Multiple Classifier Systems <s> Existing methods for pixel-wise labelling tasks generally disregard the underlying structure of labellings, often leading to predictions that are visually implausible. While incorporating structure into the model should improve prediction quality, doing so is challenging - manually specifying the form of structural constraints may be impractical and inference often becomes intractable even if structural constraints are given. We sidestep this problem by reducing structured prediction to a sequence of unconstrained prediction problems and demonstrate that this approach is capable of automatically discovering priors on shape, contiguity of region predictions and smoothness of region contours from data without any a priori specification. On the instance segmentation task, this method outperforms the state-of-the-art, achieving a mean $\mathrm{AP}^{r}$ of 63.6% at 50% overlap and 43.3% at 70% overlap. <s> BIB011 </s> Supervised Classification: Quite a Brief Overview <s> Multiple Classifier Systems <s> The field of object detection has seen dramatic performance improvements in the last few years. Most of these gains are attributed to bottom-up, feedforward ConvNet frameworks. However, in case of humans, top-down information, context and feedback play an important role in doing object detection. This paper investigates how we can incorporate top-down information and feedback in the state-of-the-art Faster R-CNN framework. 
Specifically, we propose to: (a) augment Faster R-CNN with a semantic segmentation network; (b) use segmentation for top-down contextual priming; (c) use segmentation to provide top-down iterative feedback using two stage training. Our results indicate that all three contributions improve the performance on object detection, semantic segmentation and region proposal generation. <s> BIB012
The terms multiple classifier systems, classifier combining, and ensemble methods all refer to roughly the same idea: potentially more powerful classifiers can be built by combining two or more of them BIB007 . The latter are often referred to as base classifiers. So, these techniques are not classifiers as such, but ways to compile base classifiers into classifiers that in some sense fit the data better. There can be various reasons to combine classifiers. Sometimes a classifier turns out to be overly flexible and one may wish to stabilize the base classifier (see also Section 5). One way to do so is by a well-known combining technique called bagging BIB005 , which trains various classifiers based on bootstrap samples of the same data set and assigns any new sample based on the average output of this often large set of base classifiers (see the sketch following this paragraph). Another way to construct different base classifiers is to consider random subspaces by sampling a set of different features for every base learner. This technique has been extensively exploited in random forests and the like BIB001 BIB002 . Combining classifiers can also be exploited when dealing with a problem where, in some sense, essentially different sets of features play a role. For instance, in the analysis of patient data, one might want to use different classifiers for high-dimensional genetic measurements and low-dimensional clinical data, as these sets may behave rather differently from each other. Once the two or more specialized classifiers have been trained, various forms of so-called fixed and trained combiners can be applied to come to a final decision rule BIB003 BIB007 . At times, the base classifiers can already be quite complex, possibly being multiple classifier systems themselves. Nice examples are available from medical image analysis BIB009 and recommender systems BIB008 . In these cases, advanced systems have been developed independently from each other. As a result, there is a fair chance that every system has its own strengths and weaknesses and even the best performing individual system cannot be expected to perform best in every part of feature space. Hence, combining such systems can result in significantly improved overall performance. Another reason to employ classifier combining is to integrate contextual features into the classification process. Such approaches can be especially beneficial when integrating contextual information into image and signal analysis tasks BIB006 BIB004 . These techniques can be seen as a specific form of stacked generalization or stacking and are becoming relevant again these days in connection with deep learning (see, for instance, BIB010 BIB011 BIB012 ). Finally, we should mention boosting approaches to multiple classifier systems and in particular AdaBoost. Boosting was initially studied in a more theoretical setting to show that so-called weak learners, i.e., classifiers that perform barely better than an error rate equal to the a priori probability of the largest class, could be combined into a strong learner to significantly improve performance over the weak ones. This research culminated in the development of a combining technique that sequentially adds base classifiers to the ensemble that has already been constructed, where the next base classifier focuses especially on samples that previous base learners were unable to correctly classify. This last feature is the adaptive characteristic of this particular combining scheme that warrants the prefix ada-.
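As an illustration of bagging specifically, the sketch below trains base classifiers on bootstrap samples and averages their outputs. The use of scikit-learn decision trees as base learners and all sizes are our own illustrative assumptions.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def bagging_predict(X_train, y_train, X_test, n_estimators=25, seed=0):
        rng = np.random.default_rng(seed)
        votes = np.zeros(len(X_test))
        for _ in range(n_estimators):
            # bootstrap sample: draw N indices with replacement
            idx = rng.integers(0, len(X_train), size=len(X_train))
            tree = DecisionTreeClassifier(random_state=0).fit(X_train[idx], y_train[idx])
            votes += tree.predict(X_test)   # assumes labels in {-1, +1}
        return np.sign(votes)               # average output of the ensemble, thresholded

With an odd number of estimators and labels in {−1, +1}, ties cannot occur. A trained combiner would replace the fixed sign-of-average rule by a second-stage classifier trained on the base outputs.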
Supervised Classification: Quite a Brief Overview <s> The Kernel Trick <s> Foreword 1. Background 2. More splines 3. Equivalence and perpendicularity, or, what's so special about splines? 4. Estimating the smoothing parameter 5. 'Confidence intervals' 6. Partial spline models 7. Finite dimensional approximating subspaces 8. Fredholm integral equations of the first kind 9. Further nonlinear generalizations 10. Additive and interaction splines 11. Numerical methods 12. Special topics Bibliography Author index. <s> BIB001 </s> Supervised Classification: Quite a Brief Overview <s> The Kernel Trick <s> A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms. <s> BIB002 </s> Supervised Classification: Quite a Brief Overview <s> The Kernel Trick <s> The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that learn axis-parallel rectangles to solve the multiple instance problem. Algorithms that ignore the multiple instance problem perform very poorly. An algorithm that directly confronts the multiple instance problem (by attempting to identify which feature vectors are responsible for the observed classifications) performs best, giving 89% correct predictions on a musk odor prediction task. The paper also illustrates the use of artificial data to debug and compare these algorithms. <s> BIB003 </s> Supervised Classification: Quite a Brief Overview <s> The Kernel Trick <s> A comprehensive look at learning and generalization theory. The statistical theory of learning and generalization concerns the problem of choosing desired functions on the basis of empirical data. Highly applicable to a variety of computer science and robotics fields, this book offers lucid coverage of the theory as a whole. Presenting a method for determining the necessary and sufficient conditions for consistency of learning process, the author covers function estimates from small data pools, applying these estimations to real-life problems, and much more. <s> BIB004 </s> Supervised Classification: Quite a Brief Overview <s> The Kernel Trick <s> Wahba's classical representer theorem states that the solutions of certain risk minimization problems involving an empirical risk term and a quadratic regularizer can be written as expansions in terms of the training examples. We generalize the theorem to a larger class of regularizers and empirical risk terms, and give a self-contained proof utilizing the feature space associated with a kernel. 
The result shows that a wide range of problems have optimal solutions that live in the finite dimensional span of the training examples mapped into feature space, thus enabling us to carry out kernel algorithms independent of the (potentially infinite) dimensionality of the feature space. <s> BIB005 </s> Supervised Classification: Quite a Brief Overview <s> The Kernel Trick <s> We describe a new sequential learning scheme called "stacked sequential learning". Stacked sequential learning is a meta-learning algorithm, in which an arbitrary base learner is augmented so as to make it aware of the labels of nearby examples. We evaluate the method on several "sequential partitioning problems", which are characterized by long runs of identical labels. We demonstrate that on these problems, sequential stacking consistently improves the performance of nonsequential base learners; that sequential stacking often improves performance of learners (such as CRFs) that are designed specifically for sequential tasks; and that a sequentially stacked maximum-entropy learner generally outperforms CRFs. <s> BIB006
Not only did SVMs receive a lot of attention as a result of statistical learning theory; the SVM literature also introduced what has become widely known as the kernel trick or kernel method BIB002 , which has its roots in the 1960s. The kernel trick allows one to extend many inherently linear approaches to nonlinear ones in a computationally simple way. At its basis is the observation that, following the representer theorem BIB005 BIB001 , many solutions to the type of optimization problems for linear classifiers that we have considered in Subsection 2.5 can be expressed in terms of a weighted combination of inner products of training feature vectors and the x that is being classified, i.e., h(x) = Σ_{i=1}^{N} a_i x_i^⊤x with a_i ∈ R. Therefore, finding h* becomes equivalent to finding the optimal coefficients a_i. After mapping the original feature vectors with ϕ, we would be optimizing the equivalent in the D-dimensional space to get to a possibly nonlinear classifier: h(x) = Σ_{i=1}^{N} a_i ϕ(x_i)^⊤ϕ(x). It becomes apparent that the only thing that matters in these settings is that we know how to compute inner products k(z, x) := ϕ(z)^⊤ϕ(x) between any two mapped feature vectors x and z. The function k is also referred to as a kernel function or simply a kernel. Of course, once we have explicitly defined ϕ, we can always construct the corresponding kernel function, but the power of the kernel trick is that in many settings this can be avoided. This is interesting in at least two ways. The first one is that if one wants to construct highly nonlinear classifiers, the explicit expansion ϕ could grow inconveniently large. Take a simple expansion in which we consider all (unique) second degree monomials, the number of which equals d(d + 1)/2. So the dimensionality D of the feature space in which we have to take the inner product grows as O(d²). By a direct calculation, one can however show that the inner product in this larger space can be expressed in terms of a much simpler k. In this particular case, we have that k(z, x) = (z^⊤x)², which can be demonstrated by explicitly writing out both sides of the equation. We note that first degree monomials can also be included, either by explicitly adding an additional feature to the original feature vector that is constant, say c, or implicitly by defining the inner product as (z^⊤x + c²)². As one can imagine, moving to nonlinearities of even higher degree, the effect becomes more pronounced BIB006 . At some point, explicitly expanding the feature vector nonlinearly becomes prohibitive, while calculating the induced inner product may still be easy to do. An extreme example is the radial basis function or Gaussian kernel, defined by k(z, x) = exp(−‖z − x‖²/σ²), which corresponds to a mapping that takes the original d-dimensional space to an infinite dimensional expansion BIB004 . A second reason why the formulation in terms of inner products is of interest is that it, in principle, allows us to forget about an explicit feature representation altogether. Going back to our original objects o_i, if we can construct a function k(·, ·) → R_0^+ that takes in two objects and fulfils all the criteria of a kernel, we can directly use k(o_i, o) (with o the object that we want to classify) as a substitute for ϕ(x_i)^⊤ϕ(x) in the expansion above BIB003 . Once such a kernel function k has been constructed, whether through an explicit feature space or not, one can use it to build classifiers. All in all, kernel methods define a very general, powerful, and flexible formalism, which allows the design of problem-specific kernels. Research in this direction has spawned a massive number of publications on such approaches.
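The degree-two example can be checked numerically. The sketch below forms the explicit monomial expansion ϕ for d = 3, with the cross terms scaled by √2 so that the inner products match, and verifies that ϕ(z)^⊤ϕ(x) = (z^⊤x)²; the vectors are arbitrary illustrative values.

    import numpy as np

    def phi(v):
        # all unique second degree monomials of a 3-dimensional vector; the
        # cross terms carry a factor sqrt(2) so that phi(z).phi(x) = (z.x)**2
        return np.array([v[0]**2, v[1]**2, v[2]**2,
                         np.sqrt(2) * v[0] * v[1],
                         np.sqrt(2) * v[0] * v[2],
                         np.sqrt(2) * v[1] * v[2]])

    def k(z, x):
        return float(z @ x) ** 2

    z = np.array([1.0, 2.0, -1.0]); x = np.array([0.5, -1.0, 3.0])
    print(phi(z) @ phi(x), k(z, x))   # both print 20.25

The right-hand side costs O(d) operations, the left-hand side O(d²); for higher polynomial degrees the gap widens further, which is precisely the computational point made above.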
Supervised Classification: Quite a Brief Overview <s> Dissimilarity Representation <s> For many types of machine learning algorithms, one can compute the statistically "optimal" way to select training data. In this paper, we review how optimal data selection techniques have been used with feedforward neural networks. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are computationally expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate. Empirically, we observe that the optimality criterion sharply decreases the number of training examples the learner needs in order to achieve good performance. <s> BIB001 </s> Supervised Classification: Quite a Brief Overview <s> Dissimilarity Representation <s> The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that learn axis-parallel rectangles to solve the multiple instance problem. Algorithms that ignore the multiple instance problem perform very poorly. An algorithm that directly confronts the multiple instance problem (by attempting to identify which feature vectors are responsible for the observed classifications) performs best, giving 89% correct predictions on a musk odor prediction task. The paper also illustrates the use of artificial data to debug and compare these algorithms. <s> BIB002 </s> Supervised Classification: Quite a Brief Overview <s> Dissimilarity Representation <s> Usually, objects to be classified are represented by features. In this paper, we discuss an alternative object representation based on dissimilarity values. If such distances separate the classes well, the nearest neighbor method offers a good solution. However, dissimilarities used in practice are usually far from ideal and the performance of the nearest neighbor rule suffers from its sensitivity to noisy examples. We show that other, more global classification techniques are preferable to the nearest neighbor rule, in such cases.For classification purposes, two different ways of using generalized dissimilarity kernels are considered. In the first one, distances are isometrically embedded in a pseudo-Euclidean space and the classification task is performed there. In the second approach, classifiers are built directly on distance kernels. Both approaches are described theoretically and then compared using experiments with different dissimilarity measures and datasets including degraded data simulating the problem of missing values. <s> BIB003 </s> Supervised Classification: Quite a Brief Overview <s> Dissimilarity Representation <s> We describe a new sequential learning scheme called "stacked sequential learning". Stacked sequential learning is a meta-learning algorithm, in which an arbitrary base learner is augmented so as to make it aware of the labels of nearby examples. We evaluate the method on several "sequential partitioning problems", which are characterized by long runs of identical labels. 
We demonstrate that on these problems, sequential stacking consistently improves the performance of nonsequential base learners; that sequential stacking often improves performance of learners (such as CRFs) that are designed specifically for sequential tasks; and that a sequentially stacked maximum-entropy learner generally outperforms CRFs. <s> BIB004 </s> Supervised Classification: Quite a Brief Overview <s> Dissimilarity Representation <s> # Spaces # Characterization of Dissimilarities # Learning Approaches # Dissimilarity Measures # Visualization # Further Data Exploration # One-Class Classifiers # Classification # Combining # Representation Review and Recommendations # Conclusions and Open Problems <s> BIB005
Any kernel k provides, in a way, a similarity measure between two feature observations x and z (or possibly directly between two objects): the larger the value is, the more similar the two observations are. As k has to act like an inner product that, at least implicitly, corresponds to some underlying feature space, limitations apply. In many settings, one might actually have an idea of a proper way to measure the similarity or the, in some sense equivalent, dissimilarity between two objects BIB001 . Possibly, such a measure is provided by an expert working in the field for which you are asked to build your classifier. It therefore may be expected to be a well thought-through quantity that captures the essential resemblance of or difference between two objects. Depending on the requirements one imposes upon dissimilarities (or proximities, distances, etc.), similarities s can be turned into dissimilarities δ, for instance by taking δ = 1/s or δ = −s; next to these very basic transforms, there are various more advanced possibilities to construct such conversions BIB005 . The dissimilarity approach BIB005 BIB003 allows one to build classifiers similar to kernel-based classifiers, but without some of the restrictions. One of the core ideas is that every object can be represented, not by what one can see as absolute measurements that can be performed on every individual object, but rather by relative measurements that tell us how (dis)similar the object of interest is to a set of D representative objects. These representative objects are also referred to as the prototypes. In particular, having such a set of prototypes p_i with i ∈ {1, . . . , D}, and having our favorite dissimilarity measure δ, every object o can be represented by the D-dimensional dissimilarity vector (δ(o, p_1), . . . , δ(o, p_D))^⊤. Training, for instance, a linear classifier in this space leads to a hypothesis of the form h(o) = Σ_{i=1}^{D} a_i δ(o, p_i) + a_0, which should be compared to the kernel expansion in the previous subsection. The linear classifier is just one example of a classifier one can use in these D dimensions. In this dissimilarity space, one can of course use the full range of classifiers that have been introduced in this chapter.
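A minimal sketch of the dissimilarity approach: every object is mapped to its Euclidean dissimilarities to D randomly chosen prototypes, and an ordinary linear classifier is trained in that D-dimensional space. The data, the random prototype choice, and logistic regression as the linear classifier are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])
    y = np.array([0] * 30 + [1] * 30)

    prototypes = X[rng.choice(len(X), size=6, replace=False)]  # D = 6 prototypes

    def dissim(X, prototypes):
        # Euclidean dissimilarity of every object to every prototype
        return np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)

    D_train = dissim(X, prototypes)   # N x D dissimilarity representation
    clf = LogisticRegression(max_iter=1000).fit(D_train, y)
    print("training accuracy:", clf.score(D_train, y))

Replacing the Euclidean norm in dissim by any expert-designed measure, metric or not, leaves the rest of the procedure unchanged, which is the flexibility argued for above.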
Supervised Classification: Quite a Brief Overview <s> Feature Curves and the Curse of Dimensionality <s> Training classifiers on large databases is computationally demanding. It is desirable to develop efficient procedures for a reliable prediction of a classifier's suitability for implementing a given task, so that resources can be assigned to the most promising candidates or freed for exploring new classifier candidates. We propose such a practical and principled predictive method. Practical because it avoids the costly procedure of training poor classifiers on the whole training set, and principled because of its theoretical foundation. The effectiveness of the proposed procedure is demonstrated for both single- and multi-layer networks. <s> BIB001 </s> Supervised Classification: Quite a Brief Overview <s> Feature Curves and the Curse of Dimensionality <s> The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network techniques and methods imported from statistical learning theory have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field. <s> BIB002
Measuring more and more features on every object seems to imply that we gather more and more useful information about them. The worst that can happen, so it seems, is that we measure features that are partly or completely redundant, e.g., measuring the density while we have already measured the mass and the volume. But once the information is present in the features, it cannot vanish anymore. In a sense this is indeed true, but the question is whether we can still extract the relevant information when the number of features grows. All classifiers rely on some form of estimation, which is basically what we do when we train a classifier, but estimation typically becomes less and less reliable as the dimensionality of the space in which we carry it out grows BIB001 . The net result is that, while we typically would get improved performance with every additional feature in the very beginning, this effect gradually wears off and in the long run even leads to a deterioration in performance, as soon as the estimates become unreliable enough. This behavior is what is often referred to as the curse of dimensionality BIB002 . A curve that plots the performance of a classifier against an increasing number of features is called a feature curve. It can be used as a simple analytic tool to get an idea of how sensitive our classifier is to the number of measurements that each object is described with. Possibly of equal importance is that such curves can be used to compare two or more classifiers with each other. The form feature curves take on depends heavily on the specific problem that we are dealing with, on the complexity of the classification method, on the way this complexity relates to the specific problem, and on the number N of training samples we have to train our classifier. Exact mathematical quantification of these dependencies, as far as it is possible at all, is a real challenge. Very roughly, one can state that the more complex a classifier is, the quicker its performance starts deteriorating with an increasing number of features. On the other hand, the more training data that is available, the later the deterioration in performance sets in. Also, one classification technique is more complex than another if the decision boundaries the former can model are more flexible or, similarly, less smooth. Another way to think about this is that the hypothesis class of the former classification method is larger than that of the latter.
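A feature curve can be produced with a few lines of code: train the same classifier on a growing number of features of a fixed training set and record the holdout performance. In the sketch below only the first five features carry class information, the remainder being noise; the data, the classifier, and all sizes are illustrative assumptions.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    N, d = 60, 40
    X = rng.normal(size=(N, d))
    X[: N // 2, :5] += 1.0            # only the first 5 features are informative
    y = np.array([0] * (N // 2) + [1] * (N // 2))
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    for n_feat in [1, 2, 5, 10, 20, 40]:
        clf = LinearDiscriminantAnalysis().fit(X_tr[:, :n_feat], y_tr)
        print(n_feat, "features:", round(clf.score(X_te[:, :n_feat], y_te), 2))

With a training set this small, the holdout accuracy typically rises up to around five features and then degrades as noise features are added, which is the peaking behavior the curse of dimensionality predicts.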
Supervised Classification: Quite a Brief Overview <s> Feature Extraction and Selection <s> A large number of algorithms have been proposed for feature subset selection. Our experimental results show that the sequential forward floating selection algorithm, proposed by Pudil et al. (1994), dominates the other algorithms tested. We study the problem of choosing an optimal feature set for land use classification based on SAR satellite images using four different texture models. Pooling features derived from different texture models, followed by a feature selection results in a substantial improvement in the classification accuracy. We also illustrate the dangers of using feature selection in small sample size situations. <s> BIB001 </s> Supervised Classification: Quite a Brief Overview <s> Feature Extraction and Selection <s> Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. These areas include text processing of internet documents, gene expression array analysis, and combinatorial chemistry. The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data. The contributions of this special issue cover a wide range of aspects of such problems: providing a better definition of the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods. <s> BIB002 </s> Supervised Classification: Quite a Brief Overview <s> Feature Extraction and Selection <s> A conventional way to discriminate between objects represented by dissimilarities is the nearest neighbor method. A more efficient and sometimes a more accurate solution is offered by other dissimilarity-based classifiers. They construct a decision rule based on the entire training set, but they need just a small set of prototypes, the so-called representation set, as a reference for classifying new objects. Such alternative approaches may be especially advantageous for non-Euclidean or even non-metric dissimilarities. The choice of a proper representation set for dissimilarity-based classifiers is not yet fully investigated. It appears that a random selection may work well. In this paper, a number of experiments has been conducted on various metric and non-metric dissimilarity representations and prototype selection methods. Several procedures, like traditional feature selection methods (here effectively searching for prototypes), mode seeking and linear programming are compared to the random selection. In general, we find out that systematic approaches lead to better results than the random selection, especially for a small number of prototypes. Although there is no single winner as it depends on data characteristics, the k-centres works well, in general. For two-class problems, an important observation is that our dissimilarity-based discrimination functions relying on significantly reduced prototype sets (3-10% of the training objects) offer a similar or much better classification accuracy than the best k-NN rule on the entire training set. This may be reached for multi-class data as well, however such problems are more difficult. <s> BIB003
The curse of dimensionality indicates that in particular cases it can be beneficial for the performance of our classifier to lower the feature dimensionality. This may be applicable, for instance, if one has little insight into the classification problem at hand, in which case one tends to define lots of potentially useful features and/or dissimilarities in the hope that at least some of them pick up what is important to discriminate between the two classes. Carrying out a more or less systematic reduction of the dimensionality after defining such a large set of features can lead to acceptable classification results. Roughly speaking, there are two main approaches BIB002 BIB001 . The first one is feature selection and the second one is feature extraction. The former reduces the dimensionality by picking a subset from the original feature set, while the latter allows the combination of two or more features into fewer new features. This combination is often restricted to linear transformations, i.e., weighted sums of original features, meaning that one considers linear subspaces of the original feature space. In principle, however, feature extraction also encompasses nonlinear dimensionality reductions. Feature selection is, by construction, linear, with the possible subspaces even further limited to those that are parallel to the feature space axes. Lowering the feature dimensionality by feature selection can also aid in interpreting classification results. At least it can shed some light on which features seem to matter most, and possibly we can gain some insight into their interdependencies, for instance, by studying the coefficients of a trained linear classifier. Aiming for a more interpretable classifier, we might even sacrifice some performance for the sake of a really limited feature set size. Feature selection can also be used to select the right prototypes when employing the dissimilarity approach BIB003 , as in this case every feature corresponds to the dissimilarity to one particular prototype.
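A minimal sketch of greedy (sequential) forward selection, one of the simplest systematic selection schemes: repeatedly add the single feature that most improves cross-validated accuracy. The base classifier and the scoring are illustrative choices, not prescriptions from the text.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def forward_selection(X, y, n_select):
        selected, remaining = [], list(range(X.shape[1]))
        for _ in range(n_select):
            # score every candidate feature when added to the current subset
            scores = [(cross_val_score(LogisticRegression(max_iter=1000),
                                       X[:, selected + [j]], y, cv=5).mean(), j)
                      for j in remaining]
            best_score, best_j = max(scores)
            selected.append(best_j)
            remaining.remove(best_j)
        return selected

Being greedy, the procedure does not guarantee the best subset of a given size, but it only evaluates O(d · n_select) candidate subsets rather than all of them.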
Supervised Classification: Quite a Brief Overview <s> Apparent Error and Holdout Set <s> The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points. This rule is independent of the underlying joint distribution on the sample points and their classifications, and hence the probability of error R of such a rule must be at least as great as the Bayes probability of error R*, the minimum probability of error over all decision rules taking underlying probability structure into account. However, in a large sample analysis, we will show in the M-category case that R* ≤ R ≤ R*(2 − MR*/(M − 1)), where these bounds are the tightest possible, for all suitably smooth underlying distributions. Thus for any number of categories, the probability of error of the nearest neighbor rule is bounded above by twice the Bayes probability of error. In this sense, it may be said that half the classification information in an infinite sample set is contained in the nearest neighbor. <s> BIB001 </s> Supervised Classification: Quite a Brief Overview <s> Apparent Error and Holdout Set <s> Preface * Introduction * The Bayes Error * Inequalities and alternate distance measures * Linear discrimination * Nearest neighbor rules * Consistency * Slow rates of convergence * Error estimation * The regular histogram rule * Kernel rules * Consistency of the k-nearest neighbor rule * Vapnik-Chervonenkis theory * Combinatorial aspects of Vapnik-Chervonenkis theory * Lower bounds for empirical classifier selection * The maximum likelihood principle * Parametric classification * Generalized linear discrimination * Complexity regularization * Condensed and edited nearest neighbor rules * Tree classifiers * Data-dependent partitioning * Splitting the data * The resubstitution estimate * Deleted estimates of the error probability * Automatic kernel rules * Automatic nearest neighbor rules * Hypercubes and discrete spaces * Epsilon entropy and totally bounded sets * Uniform laws of large numbers * Neural networks * Other error estimates * Feature extraction * Appendix * Notation * References * Index <s> BIB002
A major mistake, which is still being made among users and practitioners of pattern recognition and machine learning, is that one simply uses all available samples to build a classifier and then estimates the error on these same N samples. This estimate is called the resubstitution or apparent error, denoted ε_A. The problem with this approach is that one gets an overly optimistic estimate. The classifier has been adapted to these specific points with these specific labels and therefore performs particularly well on this set. To more faithfully estimate the actual generalization performance of a classifier, one would need a training set to train the classifier and a completely independent so-called test set BIB001 to estimate its performance. The latter is also referred to as the holdout set. In reality, we often have only a single set at our disposal, in which case we can construct a training and a holdout set by splitting the initial set in two. But how do we decide on the sizes of these two sets? We are dealing with two conflicting goals here. We would like to train on as much data as possible, as this would typically give us the best performing classifier BIB002 . So the final classifier we would deliver, say, to a client, would be trained on the full initial set available. But to get an idea of the true performance of this classifier (a possible selling point if the error is low), we need at least some independent samples. The smaller we take this test set, however, the less trustworthy the estimate will be. In the extreme case of a single test sample, for instance, the estimated error rate will always be equal to 0 or 1. But adding data to the test set reduces the amount of data in the training set, which removes us further from the setting in which we train our final classifier on the full set. The following approaches, relying on resampling the training data, provide a resolution.
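The optimism of the apparent error is easy to demonstrate: the 1 nearest neighbor rule attains an apparent error of exactly 0, since every training sample is its own nearest neighbor, while its error on an independent holdout set can be considerably larger. The data below are illustrative.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    X = rng.normal(size=(200, 10))
    y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)  # noisy labels
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    clf = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
    print("apparent error:", 1 - clf.score(X_tr, y_tr))  # 0 by construction
    print("holdout error: ", 1 - clf.score(X_te, y_te))  # typically clearly larger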
Supervised Classification: Quite a Brief Overview <s> Leave-one-out and k-Fold Cross Validation <s> Several methods of estimating error rates in Discriminant Analysis are evaluated by sampling methods. Multivariate normal samples are generated on a computer which have various true probabilities of misclassification for different combinations of sample sizes and different numbers of parameters. The two methods in most common use are found to be significantly poorer than some new methods that are proposed. <s> BIB001 </s> Supervised Classification: Quite a Brief Overview <s> Leave-one-out and k-Fold Cross Validation <s> The Jackknife Estimate of Bias The Jackknife Estimate of Variance Bias of the Jackknife Variance Estimate The Bootstrap The Infinitesimal Jackknife The Delta Method and the Influence Function Cross-Validation, Jackknife and Bootstrap Balanced Repeated Replications (Half-Sampling) Random Subsampling Nonparametric Confidence Intervals. <s> BIB002
Cross validation is an evaluation technique that offers the best of both worlds and allows us to both train and test on large data sets. Moreover, when it comes to estimation accuracy, so-called leave-one-out cross validation is probably one of the best options we have. The latter approach loops through the whole data set for all i ∈ {1, . . . , N}. At step i, the pair (x_i, y_i) is used to evaluate the classifier that has been trained on all examples from the full set except for that single sample (x_i, y_i). So we have a training set of size N − 1 and a test set of size 1. Given that we want at least some data to test on, this is the best training set size we can have. The test set size is almost the worst we can have, but this is just for this single step in our loop. Going through all available data, every single sample will at some point act as a test set, giving us estimated error rates ε_i (all of value 0 or 1), which we can subsequently average to get to a better overall estimate ε_loo = (1/N) Σ_{i=1}^{N} ε_i. This procedure is called leave-one-out cross validation and its resulting estimate the leave-one-out estimate BIB002 BIB001 . For computational reasons, e.g., when dealing with rather large data sets or classifiers that take long to train, one can consider settling for so-called k-fold cross validation instead of its leave-one-out variant. In that case, the original data set is split into k, preferably equally sized, sets or folds. After this, the procedure is basically the same as with leave-one-out: we loop over the k folds, which we consecutively leave out during training and then test on. Leave-one-out is then the same as N-fold cross validation.
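A minimal sketch of k-fold cross validation written out by hand; setting k = N gives the leave-one-out estimate. The choice of logistic regression is illustrative, and libraries such as scikit-learn provide the same functionality readily (e.g., cross_val_score).

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def k_fold_error(X, y, k=10, seed=0):
        idx = np.random.default_rng(seed).permutation(len(X))
        folds = np.array_split(idx, k)          # k (nearly) equal sized folds
        errors = []
        for i in range(k):
            test = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
            errors.append(1 - clf.score(X[test], y[test]))
        return np.mean(errors)                  # averaged fold error rates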
Supervised Classification: Quite a Brief Overview <s> Bootstrap Estimators <s> Abstract We construct a prediction rule on the basis of some data, and then wish to estimate the error rate of this rule in classifying future observations. Cross-validation provides a nearly unbiased estimate, using only the original data. Cross-validation turns out to be related closely to the bootstrap estimate of the error rate. This article has two purposes: to understand better the theoretical basis of the prediction problem, and to investigate some related estimators, which seem to offer considerably improved estimation in small samples. <s> BIB001 </s> Supervised Classification: Quite a Brief Overview <s> Bootstrap Estimators <s> The Jackknife Estimate of Bias The Jackknife Estimate of Variance Bias of the Jackknife Variance Estimate The Bootstrap The Infinitesimal Jackknife The Delta Method and the Influence Function Cross-Validation, Jackknife and Bootstrap Balanced Repeated Replications (Half-Sampling) Random Subsampling Nonparametric Confidence Intervals. <s> BIB002 </s> Supervised Classification: Quite a Brief Overview <s> Bootstrap Estimators <s> Abstract A training set of data has been used to construct a rule for predicting future responses. What is the error rate of this rule? This is an important question both for comparing models and for assessing a final selected model. The traditional answer to this question is given by cross-validation. The cross-validation estimate of prediction error is nearly unbiased but can be highly variable. Here we discuss bootstrap estimates of prediction error, which can be thought of as smoothed versions of cross-validation. We show that a particular bootstrap method, the .632+ rule, substantially outperforms cross-validation in a catalog of 24 simulation experiments. Besides providing point estimates, we also consider estimating the variability of an error rate estimate. All of the results here are nonparametric and apply to any possible prediction rule; however, we study only classification problems with 0–1 loss in detail. Our simulations include “smooth” prediction rules like Fisher's linear discriminant fun... <s> BIB003
Bootstrapping is a common resampling technique in statistics, the basic version of which samples from the observed empirical distribution with replacement. Various bootstrap estimators of the error rate aim to correct the bias, that is, the overoptimism, in the apparent error. One of the simpler approaches proceeds as follows BIB002 BIB001. From our data set of N training samples, we generate M bootstrap samples of size N and calculate the M corresponding apparent error rates $\varepsilon_A^i$ for our particular choice of classifier. Using every time that same classifier, we also calculate the error rate $\varepsilon_T^i$ on the total data set. An estimate of the bias is now given by their averaged difference,

$$\beta = \frac{1}{M} \sum_{i=1}^{M} \left( \varepsilon_A^i - \varepsilon_T^i \right).$$

The bias-corrected version of the apparent error, and as such an improved estimate of the true error, is now given by $\varepsilon_A - \beta$. Various improvements upon and alternatives to this scheme have been suggested and studied BIB002 BIB001 BIB003. Possibly the best-known is the .632 estimator

$$\varepsilon_{.632} = 0.368\, \varepsilon_A + 0.632\, \varepsilon_O ,$$

with the first term on the right-hand side based on the apparent error and the second term on the out-of-bootstrap error. The latter is determined by counting all the samples from the original data set that are misclassified and that are not part of the current bootstrap sample based on which the classifier is built. Adding up all these mistakes over all M rounds and dividing this number by the total number of out-of-bootstrap samples gives us $\varepsilon_O$.
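The following sketch, under the same assumptions as before (scikit-learn, synthetic data, illustrative parameter choices), computes the out-of-bootstrap error and the .632 estimate directly from the definitions above:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=100, n_features=5, random_state=0)
N, M = len(y), 200
clf = KNeighborsClassifier(n_neighbors=3)

# Apparent error: train and test on the full data set.
eps_A = 1 - clf.fit(X, y).score(X, y)

mistakes, oob_total = 0, 0
for _ in range(M):
    idx = rng.integers(0, N, size=N)       # bootstrap sample, with replacement
    oob = np.setdiff1d(np.arange(N), idx)  # samples not in the bootstrap sample
    if len(oob) == 0:
        continue
    clf.fit(X[idx], y[idx])
    mistakes += np.sum(clf.predict(X[oob]) != y[oob])
    oob_total += len(oob)

eps_O = mistakes / oob_total             # out-of-bootstrap error
eps_632 = 0.368 * eps_A + 0.632 * eps_O  # the .632 estimator
print(f".632 estimate of the true error: {eps_632:.3f}")
```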
Learning Curves and the Single Best Classifier
In Subsection 3.3, we briefly introduced feature curves, which give us an impression of how the error rate evolves with an increasing number of features. We discussed the curse of dimensionality in this context. It should be clear by now that these feature curves can, like the error rate itself, only be estimated and, to do so, one would typically apply the estimation techniques described in the foregoing. Another, maybe more important curve that provides us with insight into the behavior of a classification method is the so-called learning curve BIB001. The learning curve plots the (estimated) true error rate against the number of training samples. To complete the picture, one typically also plots the apparent error in the same figure. Figure 2 displays stylized learning curves for two classifiers of different complexity.

There are various characteristics of interest that we can observe in these plots and that reflect the typical behavior of many a classifier on many classification problems. To start with, with growing sample size, the classifier is expected to perform better in terms of the error rate BIB004. In addition, for the apparent error we would typically observe the opposite behavior: the curve increases as it becomes more and more difficult to solve the classification problem for the growing training set BIB003. In the limit of an infinite amount of data points, both curves come together BIB002: the more training data one has, the better it describes the general data that we may encounter at test time and the closer true error and apparent error get to each other. In fact, the gap that we see is an indication that the trained classifier focuses too much on specifics that are in the training set but not in the test set. This is called overtraining or overfitting. The larger the gap between true and apparent error, the more overtraining has occurred.

From the way that both learning curves for one classifier come together, one can also glean some insight. Classifiers that are less complex typically drop off more quickly, but also level out earlier than more complex ones. In addition, the former converge to an error rate that is most often above the limiting error rate of the latter: given enough data, one can get closer to the Bayes error when employing a more complex classifier [27]. As a result, it often is the case that one classifier is not uniformly better than another, even if we consider the same classification problem. It really matters what training set size we are dealing with and, when benchmarking one classification method against another, this should really be taken into account. Generally, the smaller the training data set is, the better it is to stick with simple classifiers, e.g., using a linear hypothesis class and few features.

The fact that the best choice of classifier may depend not only on the type of classification problem we need to tackle, but also on the number of training samples that we have at our disposal, may lead one to wonder what generally can be said about the superiority of one classifier over another. Wolpert (see also [27]) made this question mathematically precise and demonstrated that for this and several variations of this question the answer is that, maybe surprisingly, no such distinctions between learning algorithms exist.
This so-called no free lunch theorem states, very roughly, that averaged over all possible classification problems, there is no one classification method that outperforms any other. Though the result is certainly gripping, one should interpret it with some care. It should be realized, for instance, that among all possible classification problems that one can construct, there probably are many that do not reflect any kind of realistic setting. What we can say, nevertheless, is that generally there is no single best classifier. Finally, a learning curve may give us an idea of whether gathering more training data may improve performance. In Figure 2, the classifier corresponding to the black curves can hardly be improved, even if we add enormous amounts of additional data. The other classifier, the gray curves, can probably improve a bit, reaching a slightly lower error rate when enlarging the training set.
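Learning curves such as those in Figure 2 are straightforward to estimate in practice. Below is a minimal sketch, assuming scikit-learn and using two classifiers of clearly different complexity on synthetic data; all parameter choices are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Apparent error = 1 - training accuracy; estimated true error = 1 - test accuracy.
for name, clf in [("logistic regression (simple)", LogisticRegression(max_iter=1000)),
                  ("1-nearest neighbor (complex)", KNeighborsClassifier(n_neighbors=1))]:
    sizes, train_scores, test_scores = learning_curve(
        clf, X, y, train_sizes=np.linspace(0.1, 1.0, 5), cv=5)
    print(name)
    for n, app, est in zip(sizes, 1 - train_scores.mean(axis=1),
                           1 - test_scores.mean(axis=1)):
        print(f"  n={n:3d}  apparent={app:.3f}  estimated true={est:.3f}")
```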
Some Words about More Realistic Scenarios
In real-world applications, designing and building a full classifier system will often be a process in which one may consider many feature representations, in which one will try various feature reduction schemes, and in which one will compare many different types of classifiers. On top of all that, there might be all kinds of preprocessing steps that are applied to the data (and that are not explicitly covered in this chapter). Working with images or signals, for instance, one can perform various types of enhancement, smoothing, and normalization techniques that may have a positive or negative effect on the performance of our final classifier. A real problem in all this is that it is difficult to dispose of a truly independent test set. Unless one has a massive amount of labeled training data, one easily gets into the situation that data that is also going to be used for evaluation leaks into the training phase. The estimated test errors are therefore overly optimistic, and more so for complex classifiers than for simple ones. In the end, the result of this is that we may end up with a wrongly trained classifier, together with an overly optimistic estimate of its performance. Let us consider some examples where things go wrong.

• A very simple instance is where one has decided, at some point, to use the k nearest neighbor classifier. The only thing that remains to be done is finding the best value for k, and one decides to determine it on the basis of the performance for every k on the test set (see the sketch after this list). It may seem like a minor offense, but often there are many such choices: the best number of features, the number of nodes in a layer of a neural network, the free parameters in some of the kernels, etc. (cf. BIB006 and, in particular, point 7 in the list).

• Here is an example where it is maybe more difficult to see that one may have gone wrong. We decide to set up everything in a seemingly clean way. We fix beforehand all classifiers that we want to study and all the feature selection schemes that we want to try, decide in advance on all the kernels we want to consider, and on all classifier combining schemes that we may want to employ. This gives a finite number of different classification schemes, which we then compare based on cross validation. In the end, we pick the scheme that provides the best performance. Even though this approach is actually fairly standard, again something does go wrong here. If the number of different classification schemes that we try out in this way gets out of hand, and it easily does, we still run the risk that we pick an overtrained solution with a badly biased estimate of its true error rate, especially when dealing with small training sets (cf. BIB001 BIB004 BIB002).

• Even more complicated issues arise when multiple groups work on a large and challenging classification task. Nowadays, there are various research initiatives in which labeled data is provided publicly by a third party, on which researchers can work simultaneously and collaboratively, but also in competition with each other. The flow of information and, in particular, the possibly indirect leakage of test data becomes difficult to oversee, let alone that we can easily correct for it when providing error estimates and corresponding confidence intervals or the like. How does one, for instance, correct for the fact that one's own method is inspired by some of the results of another group that one has read about in the research literature?
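The first pitfall above is easy to demonstrate. In the sketch below (scikit-learn assumed; the data set and parameter grid are illustrative), the flawed protocol selects k by its cross-validated score and reports that same score, while the cleaner alternative nests the selection inside an outer cross validation loop:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=20, n_informative=2,
                           random_state=0)
param_grid = {"n_neighbors": [1, 3, 5, 7, 9, 11]}

# Flawed: tune k and report the score of the winning k on the very same
# splits; the selection step biases the estimate optimistically.
tuned = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5).fit(X, y)
print("biased error estimate :", 1 - tuned.best_score_)

# Cleaner: nest the tuning inside an outer loop, so the data used for the
# final evaluation never influences the choice of k.
nested = cross_val_score(
    GridSearchCV(KNeighborsClassifier(), param_grid, cv=5), X, y, cv=5)
print("nested CV error       :", 1 - nested.mean())
```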
Though some statistical approaches are available that can alleviate particular problems BIB008 BIB009, it is safe to say that there currently is no generally applicable solution, if such a solution exists at all. Now, the above primarily pertains to evaluation. In real scenarios, we of course also have to worry about the reproducibility and replicability of our findings. Otherwise, what kind of science would this be? Clearly, these are all issues that in one way or another also play a significant role in other areas of research. In general, it turns out, however, that it is difficult to control all of these aspects and that mistakes are made, mostly unwittingly but in some cases possibly even knowingly. For some potential, more or less dramatic consequences, we refer to the following good reads: BIB007 BIB005 BIB010 BIB003 BIB011.
Regularization
Regularization is actually a rather important yet relatively advanced topic in supervised learning BIB009 BIB007 BIB002 and unfortunately we are going to be fairly brief about it here. The main idea of regularization is to have a means of performing complexity control. As we have seen already, classifier complexity can be controlled by the number of features that are used or through the complexity of the hypothesis class and, in a way, regularization is related to both of these. One of the well-known ways of regularizing a linear classifier is by constraining the already limited hypothesis space further. This is typically done by restricting the admissible weights w of the linear classifier to a sphere with radius t > 0 around the origin of the hypothesis space, which means we solve the constrained optimization problem

$$\min_{w} L(w) \quad \text{subject to} \quad \|w\| \le t , \tag{33}$$

with L(w) the training loss of the linear classifier with weight vector w. A formulation that is essentially equivalent is constructed by including the constraint directly into the objective function:

$$\min_{w} L(w) + \lambda \|w\|^2 , \tag{34}$$

where λ > 0 is known as the regularization parameter. The regularization is stronger with larger λ. This procedure is the same as the one used in classical ridge regression BIB008 and effectively stabilizes the solution that is obtained. The effect of regularization is that the bias of our classification method increases, as we cannot reach certain linear classifiers anymore due to the added constraint. At the same time, the variance in our classifier estimates decreases due to the constraint (which is another way of saying that the classifier becomes more stable). On average, with a small to moderate parameter λ, the worsening in performance we may get because of the increased bias is amply compensated by an improvement in performance due to the reduced variance, in which case regularization will lead to an improved classifier. If, however, we regularize too strongly, the bias will start to dominate and pull our model too far away from any reasonable solution, at which point the true error rate will start to increase again. A basic explanation of the effects of this so-called bias-variance tradeoff can already be found in the earlier mentioned work of Hoerl and Kennard BIB008. The phenomenon can be seen in various guises and its importance has been acknowledged early on in statistics and data analysis BIB001 BIB002. A more explicit dissection of the bias-variance tradeoff, in the context of learning methods, was published in BIB003. The more complex a classifier is, the higher the variance we are faced with when training such a model, and the more important some form of regularization becomes. Equations (33) and (34) only consider the most basic form of regularization. There are many more variations on this theme. Among others, there are regularizers with built-in feature selectors BIB004 and regularizers that have deep connections to our earlier discussed kernels BIB005 BIB006.
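A small illustration of the bias-variance tradeoff behind Equation (34), assuming scikit-learn; note that its LogisticRegression parametrizes the penalty through C, the inverse of λ, and that the sample and feature counts are chosen so that variance is the dominant problem:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Few samples, many features: without regularization the variance is large.
X, y = make_classification(n_samples=60, n_features=100, n_informative=5,
                           random_state=0)

for C in [100.0, 1.0, 0.01]:  # lambda = 1/C: weak, moderate, strong
    clf = LogisticRegression(C=C, max_iter=1000)
    err = 1 - cross_val_score(clf, X, y, cv=5).mean()
    print(f"lambda = {1 / C:>6}: estimated true error {err:.3f}")
```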
Multiple Instance Learning
In particular settings, it is more appropriate, or simply easier, to describe every object o_i not with a single feature vector x_i, but with a set of such feature vectors. This approach is, for example, common in various image analysis tasks, in which a set of so-called descriptors, i.e., feature vectors that capture the local image content at various locations in the image, acts as the representation of that image. Every image, in both the training and the test set, is represented by such a set of descriptors and the goal is to construct a classifier for such sets. The research area that studies approaches applicable to this setting, in which every object can be described with sets of feature vectors of different sizes, but where the feature vectors are from the same measurement space, is called multiple instance learning. A large number of classification routines have been developed for this specific problem, which range from basic extensions of classifiers from the supervised classification domain by means of combining techniques, via dissimilarity-based approaches, to approaches specifically designed for the purpose of set classification BIB005 BIB003 BIB004 BIB001. The classical reference, in which the initial problem was formalized, is BIB002.
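As a flavor of the simplest kind of approach, the sketch below (a naive supervised reduction on synthetic bags, not one of the dedicated MIL methods cited above) propagates each bag label to its instances, trains an ordinary classifier, and combines the instance posteriors per bag:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_bag(label):
    """A bag of 10 five-dimensional instances; positive bags contain
    a few 'concept' instances shifted away from the origin."""
    inst = rng.normal(size=(10, 5))
    if label == 1:
        inst[:3] += 2.0
    return inst

labels = rng.integers(0, 2, size=100)
bags = [make_bag(l) for l in labels]

# Propagate bag labels to instances and train a standard classifier.
X = np.vstack(bags)
y = np.repeat(labels, 10)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Classify a new bag by averaging its instance posteriors (taking the
# maximum instead is another common combining rule).
test_bag = make_bag(1)
print("bag posterior for class 1:",
      clf.predict_proba(test_bag)[:, 1].mean().round(3))
```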
One-class Classification, Outliers, and Reject Options
There are various problems where it is difficult to find sufficient examples of one of the classes, because they are very difficult to find or simply occur very seldom. In that case, one-class classification might be of use. Instead of trying to solve the two-class problem straightaway, it aims to model the distribution or support of the oft-occurring class accurately and, based on that, decides which points really do not belong to that class and, therefore, will be assigned to the class of which little is known BIB004 BIB003. Such techniques have direct relations to approaches that perform outlier or novelty detection in data and data streams BIB006 BIB005, in which one aims to identify objects that are, in some sense, far away from the bulk of the data. The more a test data point is an outlier, the less training data will be present in its vicinity and, therefore, the less certain a classifier will be in assigning the corresponding object to one or the other class. Consequently, outlier detection and related techniques are also used to implement so-called reject options BIB001. These aim to identify points for which, say, the marginal density p_X is small and any automated decision by the classifier at hand is probably unreliable. In such a case, the ultimate decision may be better left to a human expert. We might, for instance, be dealing with a sample from a third class, something that our classifier never saw examples of. This kind of rejection is also referred to as the distance reject option BIB002. A second option is ambiguity rejection, in which case the classifier rather looks at p_{Y|X} and leaves the final decision to a human expert if (in the two-class case) the two posteriors are very close to each other, i.e., both approximately 1/2 BIB002. For ambiguity and distance rejection, one should realize that both an erroneous decision by the classifier and deploying a human expert come with their own costs. One of the main challenges in the use of a reject option is then to trade off these two costs in an optimal way.
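A minimal sketch of a one-class classifier used as a distance reject option, assuming scikit-learn; the Gaussian data and the parameter ν are arbitrary:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_class = rng.normal(size=(200, 2))  # the well-sampled class

# Model the support of that class; nu bounds the fraction of training
# points treated as outliers.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05).fit(normal_class)

test = np.array([[0.1, -0.2],   # close to the bulk of the data
                 [4.0, 4.0]])   # far away: candidate for distance rejection
print(ocsvm.predict(test))      # +1 = accept, -1 = reject as outlier/novelty
```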
Contextual Classification
Contextual classification has already been mentioned in Subsection 2.7.3 on multiple classifier systems. In these contextual approaches, samples are not classified in isolation, but may have various types of neighborhood relations that can be exploited to improve the overall performance. The classical approach to this employs Markov random fields BIB001 and specific variations on those techniques, like conditional random fields BIB002. The earlier mentioned methods using classifier combining techniques BIB004 BIB003 are often more easily applicable and can leverage the full potential of more general classification methodologies. As already indicated in Subsection 2.7.3, the latter class of techniques seems to become relevant again in the context of today's deep learning approaches.
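A much simplified sketch of the stacked idea on a toy sequence task (synthetic data; in a careful implementation the first-stage predictions would come from cross validation rather than from refitting on the same data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
y = np.repeat(rng.integers(0, 2, size=40), 25)            # long runs of labels
X = y[:, None] + rng.normal(scale=2.0, size=(len(y), 1))  # one noisy feature

# Stage 1: a plain base learner that ignores the sequence structure.
base = LogisticRegression().fit(X, y)
p = base.predict_proba(X)[:, 1]

# Stage 2: append the posteriors of the left and right neighbors as extra
# features, so the second learner can exploit the label continuity.
left, right = np.r_[p[:1], p[:-1]], np.r_[p[1:], p[-1:]]
X2 = np.column_stack([X[:, 0], left, right])
stacked = LogisticRegression().fit(X2, y)

print("base accuracy   :", base.score(X, y))
print("stacked accuracy:", stacked.score(X2, y))
```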
Missing Data and Semi-supervised Learning
In many real-world settings, missing data is a considerable and recurring problem. In the classification setting, this means that particular features and/or class labels have not been observed. Missing features can occur because of the failure of a measurement apparatus, because of human non-response, or because the data was not recorded or was accidentally erased. There are various ways to deal with such deletions, a topic thoroughly studied in statistics . Missing labels can have additional causes: it may simply have been too expensive to label more data, or additional input data may have been collected afterwards to extend the already available data while the collector is not a specialist who can provide the necessary annotation. Learning from data with missing labels is known within pattern recognition and machine learning as semi-supervised learning . Also for this problem, which has been studied for over 50 years already, many different techniques have been developed. Still, though perhaps more in a theoretical sense, no completely satisfactory and practicable solution to the problem exists . One of the major issues is the question to what extent one can guarantee that a supervised classifier can indeed be improved by taking all unlabeled data into account as well BIB004 BIB002 BIB001 BIB003 .
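To make the flavor of such constrained semi-supervised estimation concrete, the following minimal Python sketch implements a simple mean-shift constraint in the spirit of the semi-supervised nearest mean classifier of BIB001: the labeled class means are shifted so that their prior-weighted average matches the overall mean of all data, labeled and unlabeled. The exact update is an illustrative assumption, not the precise estimator from that work.

```python
import numpy as np

def semi_supervised_nearest_mean(X_lab, y_lab, X_unl):
    """Nearest mean classifier whose class means are constrained by
    unlabeled data: the prior-weighted average of the class means is
    shifted to coincide with the mean of all available data.
    Illustrative sketch only."""
    classes = np.unique(y_lab)
    priors = np.array([np.mean(y_lab == c) for c in classes])
    means = np.array([X_lab[y_lab == c].mean(axis=0) for c in classes])

    # Overall mean over labeled *and* unlabeled data.
    total_mean = np.vstack([X_lab, X_unl]).mean(axis=0)
    # Shift all class means by one common correction so that
    # sum_k priors[k] * means[k] equals total_mean.
    correction = total_mean - priors @ means
    means = means + correction

    def predict(X):
        dists = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        return classes[np.argmin(dists, axis=1)]
    return predict

# Tiny usage example with synthetic (hypothetical) data.
rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(3, 1, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unl = rng.normal(1.5, 2, (100, 2))
clf = semi_supervised_nearest_mean(X_lab, y_lab, X_unl)
print(clf(X_unl[:5]))
```

The point of the constraint is that the unlabeled data only anchors the labeled estimates rather than receiving imputed labels, which is why such schemes tend not to deteriorate as more unlabeled data arrives BIB001 .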
Supervised Classification: Quite a Brief Overview <s> Active Learning <s> A ground detecting device for a vehicle, craft, or the like, having a storage tank mounted on the vehicle, or the like, equipment for selectively controlling the pumping of fluid into and out of the storage tank, comprises a transformer having first, second and third windings, the third winding being electrically connected to ground. A jack is electrically connected in shunt with the third winding. A prong is electrically connected to the vehicle, craft, or the like, and cooperates with the jack, when inserted therein, to short-circuit the third winding to ground. The short-circuit has a short-circuit impedance resulting in a voltage differential at the second winding. A comparator amplifier has a sensing input electrically connected to the second winding, a reference input, and an output. An astable multivibrator has an output electrically connected to the reference input of the comparator amplifier and to the first winding and supplies a reference input signal thereto. A relay has a relay energizing winding electrically connected to the output of the comparator amplifier and controls the equipment for selectively controlling the pumping of fluid into and out of the storage tank. When the vehicle, craft, or the like, is short-circuited, so that static electricity is dissipated from the vehicle, craft, or the like, the short-circuit is reflected at the primary winding of the transformer and produces a voltage drop between the reference input signal and the sensing input of the comparator amplifier. The voltage drop causes saturation of the amplifier resulting in energization of the relay energizing winding and operation of the equipment. <s> BIB001 </s> Supervised Classification: Quite a Brief Overview <s> Active Learning <s> For many types of machine learning algorithms, one can compute the statistically "optimal" way to select training data. In this paper, we review how optimal data selection techniques have been used with feedforward neural networks. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are computationally expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate. Empirically, we observe that the optimality criterion sharply decreases the number of training examples the learner needs in order to achieve good performance. <s> BIB002 </s> Supervised Classification: Quite a Brief Overview <s> Active Learning <s> We present a practical and statistically consistent scheme for actively learning binary classifiers under general loss functions. Our algorithm uses importance weighting to correct sampling bias, and by controlling the variance, we are able to give rigorous label complexity bounds for the learning process. Experiments on passively labeled data show that this approach reduces the label complexity required to achieve good predictive performance on many learning problems. <s> BIB003 </s> Supervised Classification: Quite a Brief Overview <s> Active Learning <s> Abstract Logistic regression is by far the most widely used classifier in real-world applications. In this paper, we benchmark the state-of-the-art active learning methods for logistic regression and discuss and illustrate their underlying characteristics. 
Experiments are carried out on three synthetic datasets and 44 real-world datasets, providing insight into the behaviors of these active learning methods with respect to the area of the learning curve (which plots classification accuracy as a function of the number of queried examples) and their computational costs. Surprisingly, one of the earliest and simplest suggested active learning methods, i.e., uncertainty sampling, performs exceptionally well overall. Another remarkable finding is that random sampling, which is the rudimentary baseline to improve upon, is not overwhelmed by individual active learning techniques in many cases. <s> BIB004 </s> Supervised Classification: Quite a Brief Overview <s> Active Learning <s> In active learning, one aims to acquire labeled samples that are particularly useful for training a classifier. In sequential active learning, this sample selection is done in a one-at-a-time manner where the choice of sample t + 1 may depend on the current state of the classifier and the t labeled data points already available. In their deviation from standard random sampling, current active learning schemes typically introduce severe sampling bias. Even though this fact has been acknowledged in the more theoretical contributions covering active learning, the more popular approaches largely ignore this bias. This work empirically investigates the consequences of their actions and sets out to identify the pros and cons of this way of dealing with the problem of active learning. Even though current techniques can provide excellent approaches to learning, we conclude that they provide inconsistent solutions and therefore, in a strict sense, do not solve the problem of active learning. <s> BIB005
The final variation on supervised classification is actually concerned with regular supervised classification. The difference with the main setting discussed throughout this chapter, however, is that active learning sets out to improve the data collection process. It tries to answer various related questions, one of which is the following: given a large number of unlabeled samples and a budget to label N of them, which instances should we select for labeling so as to train a better classifier than we could by relying on random sampling ? In other words, can we collect data to be labeled in a more systematic way, such that we arrive at a well-trained classifier more quickly? The problem formulation has direct relations to sequential analysis and optimal experimental design BIB001 . Overviews of current techniques can be found in BIB002 , , and BIB004 . One of the major issues in active learning is that the systematic collection of labeled training data typically introduces a systematic bias as well. Correcting for this seems essential BIB003 (see also BIB005 ). In a way, it points to a problem one will encounter more generally in practical settings and which directly relates to some of the issues indicated in Subsection 6.5: one of the key assumptions in supervised classification is that the training and test sets consist of i.i.d. samples from the same underlying problem defined by the density p_XY. In reality, this assumption is most probably violated and care should be taken.
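As BIB004 observes, plain uncertainty sampling is among the simplest and strongest baselines. The following sketch shows a pool-based uncertainty sampling loop with scikit-learn's logistic regression; the dataset and the `oracle` labeling function are stand-ins for whatever annotation process is actually available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(X_pool, oracle, n_seed=10, budget=50):
    """Pool-based active learning with (least-confident) uncertainty
    sampling. `oracle(i)` returns the true label of pool item i, e.g.,
    a human annotator; here it is a placeholder. Assumes the random
    seed set happens to contain at least two classes."""
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X_pool), n_seed, replace=False))
    labels = {i: oracle(i) for i in labeled}
    clf = LogisticRegression(max_iter=1000)

    for _ in range(budget):
        clf.fit(X_pool[labeled], [labels[i] for i in labeled])
        unlabeled = [i for i in range(len(X_pool)) if i not in labels]
        proba = clf.predict_proba(X_pool[unlabeled])
        # Least-confident criterion: query the sample whose most
        # probable class has the lowest posterior probability.
        query = unlabeled[int(np.argmin(proba.max(axis=1)))]
        labels[query] = oracle(query)
        labeled.append(query)
    return clf
```

Note that the loop queries samples in a decidedly non-random fashion, which is exactly the source of the sampling bias discussed above.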
Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Introduction <s> Despite the increasing usage of cloud computing, there are still issues unsolved due to inherent problems of cloud computing such as unreliable latency, lack of mobility support and location-awareness. Fog computing can address those problems by providing elastic resources and services to end users at the edge of network, while cloud computing are more about providing resources distributed in the core network. This survey discusses the definition of fog computing and similar concepts, introduces representative application scenarios, and identifies various aspects of issues we may encounter when designing and implementing fog computing systems. It also highlights some opportunities and challenges, as direction of potential future work, in related techniques that need to be considered in the context of fog computing. <s> BIB001 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Introduction <s> Quality of service (QoS) guarantee is an important component of service recommendation. Generally, some QoS values of a service are unknown to its users who has never invoked it before, and therefore the accurate prediction of unknown QoS values is significant for the successful deployment of web service-based applications. Collaborative filtering is an important method for predicting missing values, and has thus been widely adopted in the prediction of unknown QoS values. However, collaborative filtering originated from the processing of subjective data, such as movie scores. The QoS data of web services are usually objective, meaning that existing collaborative filtering-based approaches are not always applicable for unknown QoS values. Based on real world web service QoS data and a number of experiments, in this paper, we determine some important characteristics of objective QoS datasets that have never been found before. We propose a prediction algorithm to realize these characteristics, allowing the unknown QoS values to be predicted accurately. Experimental results show that the proposed algorithm predicts unknown web service QoS values more accurately than other existing approaches. <s> BIB002
With the rapid advance of SOA, there is a growing number of self-contained, self-describing, loosely coupled, and modular component services on the Internet. To implement sophisticated business applications, one or more component services are combined into a value-added, coarse-grained service-oriented system, that is, a composite service. Nowadays, a growing number of enterprises employ composite services to shorten the software development cycle, reduce development costs, and ultimately implement their business processes BIB002 . However, faults are prone to occur during the execution of a composite service. That is because a large proportion of component services are deployed in the best-effort and unreliable Internet, especially in the Mobile Fog Computing environment. Mobile Fog Computing was put forward to enable computing directly at the edge of the network and can thereby deliver new services for the future of the Internet. However, there are many resource-poor devices in the Mobile Fog Computing environment, for example, routers, switches, and base stations. Composite services are more prone to faults if their component services are deployed on such resource-poor devices BIB001 . Therefore, fault tolerance has become a crucial necessity for building reliable composite services. In recent years, many scholars and organizations have engaged in research on fault tolerance and have put forward various fault tolerant strategies. In this paper, an overview of key fault tolerant strategies for composite services is presented. We categorize the fault tolerant strategies according to the phase of their adoption. When a fault tolerance strategy is employed in the design phase of a composite service, it is referred to as a static fault tolerant strategy. When it is adopted during the execution phase, the strategy is referred to as a dynamic fault tolerant strategy . There are various implementation schemes for static and dynamic fault tolerance strategies, so an overview of the main literature about them is presented in this paper. The rest of this paper is organized as follows. The next section presents the categories of fault tolerance. Static fault tolerance strategies are analyzed in Section 3. Dynamic fault tolerance strategies are discussed in Section 4. A brief conclusion about the challenges of fault tolerance strategies is given in Section 5. The last section concludes the paper.
Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Static Fault Tolerance Strategies <s> Along with the standardization of Web services composition language and the widespread acceptance of composition technologies, Web services composition is becoming an efficient and cost-effective way to develop modern business applications. As Web services are inherently unreliable, how to deliver reliable Web services composition over unreliable Web services is a significant and challenging problem. In this paper, we propose FACTS, a framework for fault-tolerant composition of transactional Web services. We identify a set of high-level exception handling strategies and a new taxonomy of transactional Web services to devise a fault-tolerant mechanism that combines exception handling and transaction techniques. We also devise a specification module and a verification module to assist service designers to construct fault-handling logic conveniently and correctly. Furthermore, we design an implementation module to automatically implement fault-handling logic in WS-BPEL. A case study demonstrates the viability of our framework and experimental results show that FACTS can improve fault tolerance of composite services with acceptable overheads. <s> BIB001 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Static Fault Tolerance Strategies <s> Cloud computing is becoming a mainstream aspect of information technology. More and more enterprises deploy their software systems in the cloud environment. The cloud applications are usually large scale and include a lot of distributed cloud components. Building highly reliable cloud applications is a challenging and critical research problem. To attack this challenge, we propose a component ranking framework, named FTCloud, for building fault-tolerant cloud applications. FTCloud includes two ranking algorithms. The first algorithm employs component invocation structures and invocation frequencies for making significant component ranking. The second ranking algorithm systematically fuses the system structure information as well as the application designers' wisdom to identify the significant components in a cloud application. After the component ranking phase, an algorithm is proposed to automatically determine an optimal fault-tolerance strategy for the significant cloud components. The experimental results show that by tolerating faults of a small part of the most significant components, the reliability of cloud applications can be greatly improved. <s> BIB002 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Static Fault Tolerance Strategies <s> A great interest in vehicular ad-hoc networks has been noticed by the research community. General goals of vehicular networks are to enhance safety on the road and to ensure the convenience of passengers by continuously providing them, in real time, with information and entertainment options such as routes to destinations, traffic conditions, facilities' information, and multimedia/Internet access. Indeed, time efficient systems that have high connectivity and low bandwidth usage are most needed to cope with realistic traffic mobility conditions. One foundation of such a system is the design of an efficient gateway discovery protocol that guarantees robust connectivity between vehicles, while assuring Internet access. 
Little work has been performed on how to concurrently integrate load balancing, quality of service QoS, and fault tolerant mechanisms into these protocols. In this paper, we propose a reliable QoS-aware and location aided gateway discovery protocol for vehicular networks by the name of fault tolerant location-based gateway advertisement and discovery. One of the features of this protocol is its ability to tolerate gateway routers and/or road vehicle failure. Moreover, this protocol takes into consideration the aspects of the QoS requirements specified by the gateway requesters; furthermore, the protocol insures load balancing on the gateways as well as on the routes between gateways and gateway clients. We discuss its implementation and report on its performance in contrast with similar protocols through extensive simulation experiments using the ns-2 simulator. Copyright © 2013 John Wiley & Sons, Ltd. <s> BIB003 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Static Fault Tolerance Strategies <s> Byzantine fault is the fault that can make the components behave arbitrary and may cause disastrous results. With the increasing malicious attacks and software errors, Byzantine fault tolerance has begun to draw more attention it deserves. Previous Byzantine fault tolerant algorithms have strong assumption that all the replicas is synchronous and do not support replicated calling services, which make them not practical and not suit for new computing model such as SOA. This paper proposes a new Byzantine fault tolerant algorithm based on well-known Byzantine fault tolerant algorithm CLBFT (Castro Liskov Byzantine Fault Tolerance) for replicated services in the calling endpoint. The algorithm works in asynchronous environments and support replicated calling services. To make the algorithm more practical, we incorporates important optimization-Window mechanism, which can make the replica batch process the message that reduce the response time much more than previous algorithms. Besides non-faulty process of the algorithm, we provide the faulty handling process to make the algorithm more robust. <s> BIB004 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Static Fault Tolerance Strategies <s> We construct a Web service collaboration network.We propose a collaboration reputation concept.We present a trustworthy Web service selection method. Traditional trustworthy service selection approaches focus the overall reputation maximization of all selected services in social networks. However, the selected services barely interact with each other in history, which leads to the trustworthiness among services being very low. Hence, to enhance the trustworthiness of Web service selection, a novel concept, collaboration reputation is proposed in this paper. The collaboration reputation is built on a Web service collaboration network consisting of two metrics. One metric, invoking reputation, can be calculated according to other service's recommendation. The other metric, invoked reputation, can be assessed by the interaction frequency among Web services. Finally, based on the collaboration reputation, we present a trustworthy Web service selection method to not only solve the simple Web service selection but also the complex selection. Experimental results show that compared with other methods, the efficiency of our method and the solution's trustworthiness are both greatly increased. 
<s> BIB005 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Static Fault Tolerance Strategies <s> An increasing number of companies are beginning to deploy services/applications in the cloud computing environment. Enhancing the reliability of cloud service has become a critical and challenging research problem. In the cloud computing environment, all resources are commercialized. Therefore, a reliability enhancement approach should not consume too much resource. However, existing approaches cannot achieve the optimal effect because of checkpoint image-sharing neglect, and checkpoint image inaccessibility caused by node crashing. To address this problem, we propose a cloud service reliability enhancement approach for minimizing network and storage resource usage in a cloud data center. In our proposed approach, the identical parts of all virtual machines that provide the same service are checkpointed once as the service checkpoint image, which can be shared by those virtual machines to reduce the storage resource consumption. Then, the remaining checkpoint images only save the modified page. To persistently store the checkpoint image, the checkpoint image storage problem is modeled as an optimization problem. Finally, we present an efficient heuristic algorithm to solve the problem. The algorithm exploits the data center network architecture characteristics and the node failure predicator to minimize network resource usage. To verify the effectiveness of the proposed approach, we extend the renowned cloud simulator Cloudsim and conduct experiments on it. Experimental results based on the extended Cloudsim show that the proposed approach not only guarantees cloud service reliability, but also consumes fewer network and storage resources than other approaches. <s> BIB006 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Static Fault Tolerance Strategies <s> The service-oriented paradigm is emerging as a new approach to heterogeneous distributed software systems composed of services accessed locally or remotely by middleware technology. How to select the optimal composited service from a set of functionally equivalent services with different quality of service (QoS) attributes has become an active focus of research in the service community. However, existing middleware solutions or approaches are inefficient as they search all solution spaces. More importantly, they inherently neglect QoS uncertainty owing to the dynamic network environment. In this paper, based on a service composition middleware framework, we propose an efficient and reliable service selection approach that attempts to select the best reliable composited service by filtering low-reliability services through the computation of QoS uncertainty. The approach first employs information theory and probability theory to abandon high-QoS-uncertainty services and downsize the solution space. A reliability fitness function is then designed to select the best reliable service for composited services. We experimented with real-world and synthetic datasets and compared our approach with other approaches. Our results show that our approach is not only fast, but also finds more reliable composited services. 
Design a service composition middleware for heterogeneous distributed systems.An efficient and reliable service selection approach based on information and variance theory is proposed.Experiments with real-world dataset show that the proposed technique is superior to other existing approaches. <s> BIB007
To construct a reliable and trustworthy composite service, static fault tolerant strategies are adopted at the design stage. The purpose of a static fault tolerance strategy is to select reliable and trustworthy component services for the composite service. Static fault tolerance strategies are usually carried out during the service selection phase BIB003 . There are various static fault tolerant strategies, for example, high-certainty component selection BIB007 , high-trustworthiness component selection BIB005 , high-reliability component selection [8, BIB006 ], fault tolerance based on exception handling and transaction techniques BIB001 , and component service ranking BIB002 . The above-mentioned strategies can only handle traditional faults of composite services, but they cannot handle Byzantine faults. A Byzantine fault poses a serious threat to the composite service by sending conflicting information to other component services. To mask this type of fault, a Byzantine fault tolerance strategy must be adopted BIB004 . Hence, researchers continue to explore and work on this topic.
Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Traditional Static Fault Tolerance Strategies. <s> Along with the standardization of Web services composition language and the widespread acceptance of composition technologies, Web services composition is becoming an efficient and cost-effective way to develop modern business applications. As Web services are inherently unreliable, how to deliver reliable Web services composition over unreliable Web services is a significant and challenging problem. In this paper, we propose FACTS, a framework for fault-tolerant composition of transactional Web services. We identify a set of high-level exception handling strategies and a new taxonomy of transactional Web services to devise a fault-tolerant mechanism that combines exception handling and transaction techniques. We also devise a specification module and a verification module to assist service designers to construct fault-handling logic conveniently and correctly. Furthermore, we design an implementation module to automatically implement fault-handling logic in WS-BPEL. A case study demonstrates the viability of our framework and experimental results show that FACTS can improve fault tolerance of composite services with acceptable overheads. <s> BIB001 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Traditional Static Fault Tolerance Strategies. <s> Cloud computing is becoming a mainstream aspect of information technology. More and more enterprises deploy their software systems in the cloud environment. The cloud applications are usually large scale and include a lot of distributed cloud components. Building highly reliable cloud applications is a challenging and critical research problem. To attack this challenge, we propose a component ranking framework, named FTCloud, for building fault-tolerant cloud applications. FTCloud includes two ranking algorithms. The first algorithm employs component invocation structures and invocation frequencies for making significant component ranking. The second ranking algorithm systematically fuses the system structure information as well as the application designers' wisdom to identify the significant components in a cloud application. After the component ranking phase, an algorithm is proposed to automatically determine an optimal fault-tolerance strategy for the significant cloud components. The experimental results show that by tolerating faults of a small part of the most significant components, the reliability of cloud applications can be greatly improved. <s> BIB002 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Traditional Static Fault Tolerance Strategies. <s> Web service recommendation systems can help service users to locate the right service from the large number of available web services. Avoiding recommending dishonest or unsatisfactory services is a fundamental research problem in the design of web service recommendation systems. Reputation of web services is a widely-employed metric that determines whether the service should be recommended to a user. The service reputation score is usually calculated using feedback ratings provided by users. Although the reputation measurement of web service has been studied in the recent literature, existing malicious and subjective user feedback ratings often lead to a bias that degrades the performance of the service recommendation system. 
In this paper, we propose a novel reputation measurement approach for web service recommendations. We first detect malicious feedback ratings by adopting the cumulative sum control chart, and then we reduce the effect of subjective user feedback preferences employing the Pearson Correlation Coefficient. Moreover, in order to defend malicious feedback ratings, we propose a malicious feedback rating prevention scheme employing Bloom filtering to enhance the recommendation performance. Extensive experiments are conducted by employing a real feedback rating data set with 1.5 million web service invocation records. The experimental results show that our proposed measurement approach can reduce the deviation of the reputation measurement and enhance the success ratio of the web service recommendation. <s> BIB003 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Traditional Static Fault Tolerance Strategies. <s> Reputation plays an important role for users in choosing or paying for multimedia applications or services. Some efficient multimedia reputation-measurement approaches have been proposed to achieve accurate reputation measurement based on feedback ratings that users give to a multimedia service after invoking. However, the implementation of these approaches suffers from the problems of wide abuse and low utilization of user context. In this article, we study the relationship between user context and feedback ratings according to which one user often gives different feedback ratings to the same multimedia service in different user contexts. We further propose an enhanced user context-aware reputation-measurement approach for multimedia services that is accurate in two senses: (1) Each multimedia service has three reputation values with three different user context levels when its feedback ratings are sufficient and (2) the reputation of a multimedia service with different user context levels is found using user context sensitivity and user similarity when its feedback ratings are limited or not available. Experimental results based on a real-world dataset show that our approach outperforms other approaches in terms of accuracy. <s> BIB004 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Traditional Static Fault Tolerance Strategies. <s> We construct a Web service collaboration network.We propose a collaboration reputation concept.We present a trustworthy Web service selection method. Traditional trustworthy service selection approaches focus the overall reputation maximization of all selected services in social networks. However, the selected services barely interact with each other in history, which leads to the trustworthiness among services being very low. Hence, to enhance the trustworthiness of Web service selection, a novel concept, collaboration reputation is proposed in this paper. The collaboration reputation is built on a Web service collaboration network consisting of two metrics. One metric, invoking reputation, can be calculated according to other service's recommendation. The other metric, invoked reputation, can be assessed by the interaction frequency among Web services. Finally, based on the collaboration reputation, we present a trustworthy Web service selection method to not only solve the simple Web service selection but also the complex selection. 
Experimental results show that compared with other methods, the efficiency of our method and the solution's trustworthiness are both greatly increased. <s> BIB005 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Traditional Static Fault Tolerance Strategies. <s> The service-oriented paradigm is emerging as a new approach to heterogeneous distributed software systems composed of services accessed locally or remotely by middleware technology. How to select the optimal composited service from a set of functionally equivalent services with different quality of service (QoS) attributes has become an active focus of research in the service community. However, existing middleware solutions or approaches are inefficient as they search all solution spaces. More importantly, they inherently neglect QoS uncertainty owing to the dynamic network environment. In this paper, based on a service composition middleware framework, we propose an efficient and reliable service selection approach that attempts to select the best reliable composited service by filtering low-reliability services through the computation of QoS uncertainty. The approach first employs information theory and probability theory to abandon high-QoS-uncertainty services and downsize the solution space. A reliability fitness function is then designed to select the best reliable service for composited services. We experimented with real-world and synthetic datasets and compared our approach with other approaches. Our results show that our approach is not only fast, but also finds more reliable composited services. Design a service composition middleware for heterogeneous distributed systems.An efficient and reliable service selection approach based on information and variance theory is proposed.Experiments with real-world dataset show that the proposed technique is superior to other existing approaches. <s> BIB006 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Traditional Static Fault Tolerance Strategies. <s> With rapid adoption of the cloud computing model, many enterprises have begun deploying cloud-based services. Failures of virtual machines (VMs) in clouds have caused serious quality assurance issues for those services. VM replication is a commonly used technique for enhancing the reliability of cloud services. However, when determining the VM redundancy strategy for a specific service, many state-of-the-art methods ignore the huge network resource consumption issue that could be experienced when the service is in failure recovery mode. This paper proposes a redundant VM placement optimization approach to enhancing the reliability of cloud services. The approach employs three algorithms. The first algorithm selects an appropriate set of VM-hosting servers from a potentially large set of candidate host servers based upon the network topology. The second algorithm determines an optimal strategy to place the primary and backup VMs on the selected host servers with k-fault-tolerance assurance. Lastly, a heuristic is used to address the task-to-VM reassignment optimization problem, which is formulated as finding a maximum weight matching in bipartite graphs. The evaluation results show that the proposed approach outperforms four other representative methods in network resource consumption in the service recovery stage. 
<s> BIB007 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Traditional Static Fault Tolerance Strategies. <s> Abstract Due to stochasticity and uncertainty of malicious Web services over the Internet, it becomes difficult to select reliable services while meeting non-functional requirements in service-oriented systems. To avoid the unreliable real-world process of obtaining services, this paper proposes a novel service selection approach via two-phase decisions for enhancing the reliability of service-oriented systems. In the first-phase decision, we define the problem of finding reliable service candidates as a multiple criteria decision making (MCDM) problem. Then, we construct a decision model to address the problem. In the second-phase decision, we define the problem of selecting services based on non-functional requirements as an optimization problem. Finally, we propose a convex hull based approach for solving the optimization problem. Large-scale and real-world experiments are conducted to show the advantages of the proposed approach. The evaluation results confirm that our approach achieves higher success rate and less computation time to guarantee the reliability when compared to the other state-of-the-art approaches. <s> BIB008
Besides functional requirements, nonfunctional requirements (or QoS constraints, e.g., total execution time should be less than 10 s) should be satisfied in a composite service design. However, component service providers only provide average QoS values, or even incorrect values, to improve utilization, which can lead to the violation of QoS constraints, that is, to a fault. To avoid this situation, component services with high certainty and high reputation should be chosen in the selection phase BIB004 BIB003 . To select the component services with the highest certainty for a composite service, a reliable and efficient approach is put forward in BIB006 . Firstly, the approach adopts probability theory and information theory to filter out component services with low certainty. Then a reliability fitness function is devised using 0-1 integer programming. Finally, the component services with the highest certainty are selected based on the fitness function. Based on collaboration reputation, a service selection approach is proposed in BIB005 to select trustworthy component services. The collaboration reputation is built on a component service collaboration network that includes two metrics. One metric is invoking reputation, which can be calculated from the recommendations of other component services. The other metric is invoked reputation, which can be calculated according to the interaction frequency among component services. Finally, a trustworthy component service selection algorithm is put forward based on collaboration reputation. To improve the fault tolerance of composite services, a novel service selection approach is proposed in BIB008 . The approach consists of two decision phases. In the first decision phase, finding reliable component services is defined as a multiple criteria decision-making problem, and a decision model is constructed to address it. In the second decision phase, the service selection problem is formulated as an optimization problem based on QoS requirements, and a convex hull approach is presented to solve it. In BIB001 , a fault-tolerant framework referred to as FACTS is proposed for composite services. To design a fault tolerant mechanism that combines exception handling and transaction techniques, the authors identify a set of high-level exception handling strategies and present a new taxonomy of transactional component services. Moreover, two modules (a specification module and a verification module) are designed to assist service designers in constructing fault-handling logic conveniently and correctly. Component service ranking is another approach to fault tolerance. In BIB002 , FTCloud, a component service ranking framework, is put forward. The framework employs two ranking algorithms: the first adopts the invocation structures and invocation frequencies of component services to rank significant components, while the second recognizes the significant component services by fusing the system structure information with the application designer's wisdom. After the component service ranking phase, a selection algorithm is proposed that automatically supplies an optimal fault tolerance strategy for the significant components.
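The following Python sketch illustrates the general flavor of such certainty-aware selection (not the exact algorithm of BIB006): each candidate's QoS uncertainty is scored by the sample variance of its observed response times, high-uncertainty candidates are filtered out, and among the rest the candidate with the best mean QoS satisfying the constraint is selected. The quantile threshold and the variance-based score are illustrative assumptions.

```python
import numpy as np

def select_component(candidates, qos_limit, uncertainty_quantile=0.5):
    """Certainty-aware service selection sketch.
    `candidates` maps a service id to an array of observed response
    times (seconds). Services in the most uncertain half (by variance)
    are discarded; among the rest, the one with the lowest mean
    response time that satisfies `qos_limit` is chosen."""
    variances = {s: np.var(q) for s, q in candidates.items()}
    cutoff = np.quantile(list(variances.values()), uncertainty_quantile)
    reliable = [s for s, v in variances.items() if v <= cutoff]

    feasible = [s for s in reliable if np.mean(candidates[s]) <= qos_limit]
    if not feasible:
        return None  # no candidate satisfies the QoS constraint
    return min(feasible, key=lambda s: np.mean(candidates[s]))

# Hypothetical QoS observations for three functionally equivalent services.
candidates = {
    "ws-a": np.array([1.1, 1.0, 1.2, 1.1]),   # fast and stable
    "ws-b": np.array([0.6, 3.5, 0.4, 2.9]),   # fast on average, erratic
    "ws-c": np.array([1.8, 1.9, 1.7, 1.8]),   # slow but stable
}
print(select_component(candidates, qos_limit=2.0))  # -> "ws-a"
```

A 0-1 integer programming formulation, as in BIB006 , would replace the final greedy choice when several component slots must be filled jointly under a global QoS constraint.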
Traditional static fault tolerant strategies are usually employed in the design phase of a composite service, so their key research issue is not reducing execution time but improving accuracy BIB007 . Meanwhile, since the aforementioned strategies are only adopted in the design phase, their effectiveness during execution is another key research issue. To our knowledge, there are few strategies that consider both accuracy at design time and effectiveness during execution.
Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Byzantine Fault Tolerance Strategies. <s> Many Web services are expected to run with high degree of security and dependability. To achieve this goal, it is essential to use a Web-services compatible framework that tolerates not only crash faults, but Byzantine faults as well, due to the untrusted communication environment in which the Web services operate. In this paper, we describe the design and implementation of such a framework, called BFT-WS. BFT-WS is designed to operate on top of the standard SOAP messaging framework for maximum interoperability. It is implemented as a pluggable module within the Axis2 architecture, as such, it requires minimum changes to the Web applications. The core fault tolerance mechanisms used in BFT-WS are based on the well-known Castro and Liskov's BFT algorithm for optimal efficiency. Our performance measurements confirm that BFT-WS incurs only moderate runtime overhead considering the complexity of the mechanisms. <s> BIB001 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Byzantine Fault Tolerance Strategies. <s> Mission-critical services must be replicated to guarantee correctness and high availability in spite of arbitrary (Byzantine) faults. Traditional Byzantine fault tolerance protocols suffer from several major limitations. Some protocols do not support interoperability between replicated services. Other protocols provide poor fault isolation between services leading to cascading failures across organizational and application boundaries. Moreover, traditional protocols are unsuitable for applications with tiered architectures, long-running threads of computation, or asynchronous interaction between services. We present Perpetual, a protocol that supports Byzantine fault-tolerant execution of replicated services while enforcing strict fault isolation. Perpetual enables interaction between replicated services that may invoke and process remote requests asynchronously in long-running threads of computation. We present a modular implementation, an Axis2 Web Services extension, and experimental results that demonstrate only a moderate overhead due to replication. <s> BIB002 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Byzantine Fault Tolerance Strategies. <s> We present a lightweight Byzantine fault tolerance (BFT) algorithm, which can be used to render the coordination of web services business activities (WS-BA) more trustworthy. The lightweight design of the BFT algorithm is the result of a comprehensive study of the threats to the WS-BA coordination services and a careful analysis of the state model of WS-BA. The lightweight BFT algorithm uses source ordering, rather than total ordering, of incoming requests to achieve Byzantine fault tolerant, state-machine replication of the WS-BA coordination services. We have implemented the lightweight BFT algorithm, and incorporated it into the open-source Kandula framework, which implements the WS-BA specification with the WS-BA-I extension. Performance evaluation results obtained from the prototype implementation confirm the efficiency and effectiveness of our lightweight BFT algorithm, compared to traditional BFT techniques. <s> BIB003 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Byzantine Fault Tolerance Strategies. 
<s> Detection and elimination of Byzantine faults in the Web services environment by applying the features of SOAP handlers is the principle objective of this work. The Web services may sometimes be infused with suspicious modules intentionally for them to behave in an abnormal manner. By introducing faulty aspects into the service deployment on the fly makes the Web service to generate Byzantine faults. The application servers too shall be infected with malicious aspect codes for generating Byzantine faults and therefore all the deployed services may be-come victim for dispatching erroneous response. In the proposed work, SOAP handlers are induced into the Web application server for manipulating the clients request message for detecting the occurence of Classes that behave in an abnormal manner in both the application servers and in service deployments. The service that is being affected by the presence of malicious classes is deactivated, till the faulty service is replaced or repaired by the original service provider. When these type of SOAP handlers are induced into the Web application servers, the reliability of the service is increased and thereby the clients are guaranteed with error free response. It is also observed that when SOAP handlers are introduced in detecting Byzantine faults, there was no performance degradation either in server level or at service level. <s> BIB004
During the execution of a composite service, a failed component service may send conflicting information to other component services, which threatens the consistency of the composite service. This type of fault is known as a Byzantine fault . To mask Byzantine faults during the execution phase, the composite service must employ a suitable fault tolerance strategy in the design phase BIB004 . In recent years, some scholars have engaged in studying Byzantine fault tolerance strategies. To tolerate Byzantine faults of composite services, a framework, BFT-WS, is designed and implemented in BIB001 . BFT-WS adopts the standard messaging technology of composite services (i.e., SOAP) to construct a Byzantine fault tolerant service; employing standard technology ensures the interoperability of component services. BFT-WS is designed as a pluggable module, so its deployment requires minimal changes to the composite service. The core fault tolerance mechanisms employed in BFT-WS are based on the well-known Castro-Liskov Byzantine fault tolerance algorithm. A practical algorithm, Perpetual, is proposed in BIB002 . Perpetual can tolerate Byzantine faults of deterministic n-tier composite services, and it allows interaction between services with different numbers of replicas. In addition, Perpetual supports not only long-running active threads of computation but also asynchronous invocation and processing. Therefore, Perpetual improves performance and flexibility over other protocols. To make the coordination of Web Services Business Activities (WS-BA) more trustworthy, a lightweight Byzantine fault tolerance algorithm is put forward in BIB003 . Based on a careful study of the threats to the WS-BA coordination services and a comprehensive analysis of their state model, the algorithm has a lightweight design. To implement Byzantine fault tolerant state machine replication of the WS-BA coordination services, the algorithm uses source ordering rather than total ordering. To orchestrate the delivery of reliable composite services, a hybrid asynchronous Byzantine fault tolerant protocol, GEMINI, is proposed in . GEMINI decouples composite services' abstract workflows from their implementations, which sustains dynamic component provisioning. It guarantees the reliability of service delivery modules via a lightweight Byzantine fault tolerant protocol, invokes multiple component services concurrently to realize component service redundancy, and employs a single-leader Byzantine fault tolerance technique to optimize the current Byzantine fault tolerant protocol. To handle Byzantine faults, group communication among the component service replicas is obligatory. However, if the traffic between the replicas of a component service is heavy, the response time of the component service may increase remarkably, because component services are usually distributed over the Internet. Hence, a key research issue of Byzantine fault tolerance is reducing the response time of component services. Meanwhile, component service replicas are usually provided by different service providers, so guaranteeing seamless communication between replicas is another key research issue.
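In Castro-Liskov-style replication, a group of 3f+1 replicas tolerates up to f Byzantine replicas, and a client accepts a result once it receives f+1 matching replies from different replicas: since at most f replicas are faulty, f+1 identical replies must include at least one correct replica. The sketch below shows only this client-side voting rule, with replica invocation stubbed out; it is a simplified illustration, not a full BFT protocol (agreement among the replicas themselves is omitted).

```python
from collections import Counter

def bft_invoke(replicas, request, f):
    """Client-side voting for Byzantine fault tolerant invocation.
    `replicas` is a list of callables standing in for the 3f + 1
    service replicas; a reply is accepted once f + 1 replicas return
    the same value. Replica-side agreement (the actual BFT protocol)
    is intentionally omitted from this sketch."""
    assert len(replicas) >= 3 * f + 1, "need at least 3f + 1 replicas"
    votes = Counter()
    for replica in replicas:
        try:
            reply = replica(request)
        except Exception:
            continue  # crashed or unreachable replica
        votes[reply] += 1
        if votes[reply] >= f + 1:
            return reply  # f + 1 matching replies: result is trustworthy
    raise RuntimeError("no reply received f + 1 matching votes")

# Hypothetical replicas: three correct, one Byzantine (f = 1).
correct = lambda req: req.upper()
byzantine = lambda req: "garbage"
print(bft_invoke([correct, byzantine, correct, correct], "ok", f=1))  # -> "OK"
```

The group communication cost mentioned above shows up here as one round trip per replica, which is why reducing replica-to-replica and client-to-replica traffic is a central performance concern.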
Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Dynamic Fault Tolerance Strategies <s> Web services are gaining acceptance as a standards-based approach for integrating loosely coupled services often distributed over a network. Hence, achieving high levels of reliability and availability in spite of service or infrastructure failures poses unique set of challenges. However, current Web services middleware provide limited constructs for specifying faults detection and recovery actions. Additionally, faults-handling logic often gets scattered and tangled with the service logic. Consequently, this negatively impacts maintainability and adaptability. To address these requirements for reliable and fault tolerant Web services execution, we propose a set extensible recovery policies to declaratively specify how to handle and recover from typical faults in Web services composition. The identified constructs were incorporated into a lightweight service management middleware named MASC (Manageable and Adaptive Service Composition) to transparently enact the fault management policies and facilitate the monitoring, configuration and control of managed services. Several experimental results with a service based supply chain management system illustrate the effectiveness of our approach to providing reliable and uninterrupted services. <s> BIB001 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Dynamic Fault Tolerance Strategies <s> Abstract Failures during the execution of Transactional Composite Web Services (TCWSs) can be repaired by forward or back–ward recovery processes, according to the component WSs transactional properties. In previous works, we presented TCWS fault tolerant execution approaches relying on WSs replacement, on a compensation protocol, and on unrolling processes of Colored Petri-Nets (CPNs) to support forward and backward recovery. We represent a TCWS and its corresponding backward recovery process by CPNs. Even though these recovery processes ensure system consistency, backward recovery means that users do not get the desired answer to their queries and forward recovery could imply long waiting time for users to finally get the desired response. In this paper, we present an alternative fault tolerant approach in which, in case of failures, the unrolling process of the CPN controlling the execution of a TCWS is check–pointed and the execution flow goes on as much as it is possible. In this way, users can have partial responses as soon as they are received and can re-submit the checkpointed CPN to re-start its execution from an advanced point of execution (checkpoint). We present the checkpointing algorithm integrated to our previous work. <s> BIB002 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Dynamic Fault Tolerance Strategies <s> During the execution of Composite Web Services (CWS), a component Web Service (WS) can fail and can be repaired with strategies such WS retry, substitution, compensation, roll-back, replication, or checkpointing. Each strategy behaves differently on different scenarios, impacting the CWS QoS. We propose a non intrusive dynamic fault tolerant model that analyses several levels of information: environment state, execution state, and QoS criteria, to dynamically decide the best recovery strategy when a failure occurs. 
We present an experimental study to evaluate the model and determine the impact on QoS parameters of different recovery strategies; and evaluate the intrusiveness of our strategy during the normal execution of CWSs. <s> BIB003 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Dynamic Fault Tolerance Strategies <s> Owing to the increasing number of vehicles in vehicular cyber-physical systems (VCPSs) and the growing popularity of various services or applications for vehicles, cellular networks are being severely overloaded. Offloading mobile data traffic through Wi-Fi or a vehicular ad hoc network (VANET) is a promising solution for partially solving this problem because it involves almost no?monetary?cost. We propose combination optimization to facilitate mobile data traffic offloading in emerging VCPSs to reduce the amount of mobile data traffic for the QoS-aware service provision. We investigate mobile data traffic offloading models for Wi-Fi and VANET. In particular, we model mobile data traffic offloading as a multi-objective optimization problem for the simultaneous minimization of mobile data traffic and QoS-aware service provision; we use mixed-integer programming to obtain the optimal solutions with the global QoS guarantee. Our simulation results confirm that our scheme can offload mobile data traffic by up to 84.3% while satisfying the global QoS guarantee by more than 70% for cellular networks in VCPSs. A Wi-Fi and VANET offloading model is established and the offloading capacity is quantified.A multi-objective combinatorial problem for mobile data traffic offloading is formulated and optimized.The proposed offloading approach of Vehicular Cyber-Physical Systems (VCPSs) supports global QoS guarantee and service provisioning. <s> BIB004 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Dynamic Fault Tolerance Strategies <s> We construct a Web service collaboration network.We propose a collaboration reputation concept.We present a trustworthy Web service selection method. Traditional trustworthy service selection approaches focus the overall reputation maximization of all selected services in social networks. However, the selected services barely interact with each other in history, which leads to the trustworthiness among services being very low. Hence, to enhance the trustworthiness of Web service selection, a novel concept, collaboration reputation is proposed in this paper. The collaboration reputation is built on a Web service collaboration network consisting of two metrics. One metric, invoking reputation, can be calculated according to other service's recommendation. The other metric, invoked reputation, can be assessed by the interaction frequency among Web services. Finally, based on the collaboration reputation, we present a trustworthy Web service selection method to not only solve the simple Web service selection but also the complex selection. Experimental results show that compared with other methods, the efficiency of our method and the solution's trustworthiness are both greatly increased. <s> BIB005 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Dynamic Fault Tolerance Strategies <s> Composition of web services has emerged as a fast growing field of research since an atomic service in its entirety is not capable to perform a specific task. 
Composition of web services is a process where a set of web services, heterogeneous in nature, are clubbed together in order to perform a specific task. Individually, Component web services may be performing well as far as Quality of Service (QoS) is concerned but the core issue is that while composing, do they satisfy Users requirements in terms of QoS? Computation of QoS while composing web services appears to be a big challenge. A lot of research work in this regard, has already been undertaken to come out with new, innovative and credible solutions for the same. This Paper presents a thorough review-study of different frameworks, architectures, methodologies and algorithms suggested by different researchers in their efforts to compute the overall QoS while composing web services. Moreover, Effectiveness of different methods in terms of QoS while composing is also presented. <s> BIB006 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Dynamic Fault Tolerance Strategies <s> The evolution of business software technologies is constant and is becoming increasingly complex which leads to a great probability of software/hardware failures. Business processes are built based on web services as they allow the creation of complex business functionalities. To attack the problem of failures presented by the use of web services, organizations are extrapolating the autonomic computing paradigm to their business processes as it enables them to detect, diagnose, and repair problems improving dependability. Sophisticated solutions that increase system dependability exist, however, those approaches have drawbacks; for example, they affect system performance, have high implementation costs, and or they may jeopardize the scalability of the system. To facilitate evolution to self-management, systems must implement the monitoring, analyzing, planning, and execution (MAPE) control loop. An open challenge for MAPE loop is to carry out in an efficient manner the diagnosis and decision-making processes, recollecting data from which the system can detect, diagnose, and repair potential problems. Also, dealt by systems dependability, specifically as fault tolerant mechanisms. One useful tool for this purpose is the communication induced checkpointing (CiC). We use CiC in attacking the dependability problem of using web services in a distributed and efficient manner. First, we present an approach for web services compositions that supports fault tolerance based on the CiC mechanism. Second, we present an algorithm aimed at web services compositions based on an autonomic computing and checkpointing mechanism. Experimental results support the feasibility of this concept proposal. <s> BIB007
A component service may fail during the execution of a composite service. The fault must be repaired via dynamic fault tolerance strategies; otherwise, it will lead to the failure of the whole composite service. Current dynamic fault tolerance strategies include forward recovery, backward recovery, and checkpointing, which are illustrated in Figure 2 . To keep the whole composite service in a consistent state even when a fault occurs, it is necessary to provide component services with the "all or nothing" transactional property: every component service of the composite service must either execute successfully or have no effect whatsoever. Backward recovery and forward recovery are two basic fault tolerance strategies supported by the component services' transactional properties. If the faulty component service can be retried BIB001 , replicated BIB007 , or substituted BIB006 , forward recovery is applicable. If the effects produced by the faulty component service need to be compensated BIB004 , backward recovery is applicable BIB003 . However, users may need to wait a long time to get the desired response when forward recovery is adopted, and users are unable to get the desired answer to their queries when backward recovery is adopted BIB002 . Taking a checkpoint is another dynamic fault tolerance strategy: the current execution state and partial results are saved as a snapshot, which is returned to the user when a fault occurs. The checkpointed composite service can then be restarted from the latest saved state, and the aggregated transactional attributes are not affected BIB003 . Recent research on dynamic fault tolerance is discussed in the following sections. Different dynamic fault tolerance strategies need to be adopted for the different faults that occur during the execution of a composite service, and some scholars have specifically studied dynamic fault tolerance strategy selection BIB005 . The main literature on this topic is presented in Section 4.4.
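Before examining each strategy in detail, a minimal Python sketch may help fix the three families of dynamic fault tolerance; all class names, flags, and functions below are illustrative assumptions, not interfaces from the surveyed frameworks.

from dataclasses import dataclass

@dataclass
class Component:
    # Hypothetical transactional-property flags of a component service.
    name: str
    retriable: bool = False
    substitutable: bool = False
    compensatable: bool = False

def forward_recover(comp):
    # Retry or substitute the faulty component; execution keeps going.
    return f"forward: retried/substituted {comp.name}"

def backward_recover(completed):
    # Undo the effects of already-finished components in reverse order.
    return "backward: compensated " + ", ".join(c.name for c in reversed(completed))

def take_checkpoint(completed):
    # Snapshot partial results; the user gets "something" now and can resume later.
    return {"resume_from": len(completed), "partial": [c.name for c in completed]}

def handle_fault(faulty, completed):
    # Dispatch among the three strategy families based on transactional properties.
    if faulty.retriable or faulty.substitutable:
        return forward_recover(faulty)
    if all(c.compensatable for c in completed):
        return backward_recover(completed)
    return take_checkpoint(completed)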
Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Forward Recovery. <s> This paper proposes a solution based on forward error recovery, oriented towards providing dependability of composite Web services. While exploiting their possible support for fault tolerance (e.g., transactional support at the level of each service), the proposed solution has no impact on the autonomy of the individual Web services; our solution lies in system structuring in terms of co-operative atomic actions that have a well-defined behavior, both in the absence and in the presence of service failures. More specifically, we define the notion of Web Service Composition Action (WSCA), based on the Coordinated Atomic Action concept, which allows structuring composite Web services in terms of dependable actions. Fault tolerance can then be obtained as an emergent property of the aggregation of several potentially non-dependable services. We further introduce a framework enabling the development of composite Web services based on WSCAs, consisting of an XML-based language for the specification of WSCAs. <s> BIB001 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Forward Recovery. <s> Due to the rapid acceptance of web services and their fast spreading, a number of mission-critical systems will be deployed as web services in the coming years. The availability of those systems must be guaranteed in case of failures and network disconnections. Examples of web services for which availability will be a crucial issue are those belonging to the coordination web service infrastructure, such as web services for transactional coordination (e.g., WS-CAF and WS-Transaction). These services should remain available despite site and connectivity failures to enable business interactions on a 24x7 basis. Some of the common techniques for attaining availability consist in the use of a clustering approach. However, in an Internet setting a domain can get partitioned from the network due to a link overload or some other connectivity problems. The unavailability of a coordination service impacts the availability of all the partners in the business process. That is, coordination services are an example of critical components that need higher provisions for availability. In this paper, we address this problem by providing an infrastructure, WS-Replication, for WAN replication of web services. The infrastructure is based on a group communication web service, WS-Multicast, that respects the web service autonomy. The transport of WS-Multicast is based on SOAP and relies exclusively on web service technology for interaction across organizations. We have replicated WS-CAF using our WS-Replication framework and evaluated its performance. <s> BIB002 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Forward Recovery. <s> Redundancy-based fault tolerance strategies are proposed for building reliable Service-Oriented Architectures/Applications (SOA), which are usually developed on unpredictable remote Web services. This paper proposes and implements a distributed replication strategy evaluation and selection framework for fault-tolerant Web services. Based on this framework, we provide a systematic comparison of various replication strategies by theoretical formula and real-world experiments. Moreover, a user-participated strategy selection algorithm is designed and verified.
Experiments are conducted to illustrate the advantage of this framework. In these experiments, users from six different locations all over the world perform evaluation of Web services distributed in six countries. Over 1,000,000 test cases are executed in a collaborative manner and detailed results are also provided. <s> BIB003 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Forward Recovery. <s> Businesses offer complex services to the users, which can't be provided by a single Web Service. A Composite Web Service provides more complicated functionality by composing multiple Web services. A composite service is more susceptible to failure than an atomic service. During the execution of a Composite Web Service, if one Component Service fails or becomes unavailable, the whole Composite Web Service fails. A middle agent (broker) simplifies the interaction between service providers and service requesters and fulfills the user's need. The broker composes a desired value-added service and orchestrates the execution of Web Services. A replacement policy has been proposed in this paper that replaces the subset of Web Services that contains the failed Web Service with another equivalent subset. During the execution, if a failure occurs, subsets containing the failed Web Service are identified. Subsequently the subsets equivalent to the failed one are identified. These equivalent subsets are ranked as per the policy and the best subset is selected. The old subset is replaced with the new equivalent subset in the Composite Web Service. <s> BIB004 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Forward Recovery. <s> Many data-intensive services (e.g., planet analysis, gene analysis, and so on) are becoming increasingly reliant on national cloud data centers (NCDCs) because of growing scientific collaboration among countries. In NCDCs, tens of thousands of virtual machines (VMs) are assigned to physical servers to provide data-intensive services with a quality-of-service (QoS) guarantee, and consume a massive amount of energy in the process. Although many VM placement schemes have been proposed to solve this problem of energy consumption, most of these assume that all the physical servers are homogeneous. However, the physical server configurations of NCDCs often differ significantly, which leads to varying energy consumption characteristics. In this paper, we explore an alternative VM placement approach to minimize energy consumption during the provision of data-intensive services with a global QoS guarantee in NCDCs. We use an improved particle swarm optimization algorithm to develop an optimal VM placement approach involving a tradeoff between energy consumption and global QoS guarantee for data-intensive services. Experimental results show that our approach significantly outperforms other approaches to energy optimization and global QoS guarantee in NCDCs. <s> BIB005 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Forward Recovery. <s> Scalable computing resources are provided via the Internet in the cloud computing environment. A growing number of application providers have begun to deploy their applications in the cloud to save infrastructure maintenance costs. The probability of node failures is nontrivial due to the great quantity of nodes in the cloud data center.
To address this problem, the virtual machine replication technique is extensively adopted in the cloud system to enhance application/service reliability. K-fault tolerance is a typical replication strategy employed in the cloud. However, currently proposed K-fault tolerance replication strategies cannot achieve the best effect because they ignore switch failures. In this paper, we design an (m, n)-fault tolerance virtual machine placement algorithm to solve the problem. Firstly, we formulate the problem as an integer linear programming problem, and prove that the problem is NP-hard. Secondly, we employ the differential evolution (DE) algorithm to solve the integer linear programming problem. Finally, experiments are conducted to study the effectiveness of our algorithm, and the simulation results demonstrate that our algorithm outperforms other algorithms in reliability enhancement. <s> BIB006 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Forward Recovery. <s> With the broad adoption of service-oriented architecture, many software systems have been developed by composing loosely-coupled Web services. Service discovery, a critical step of building service-based systems (SBSs), aims to find a set of candidate services for each functional task to be performed by an SBS. The keyword-based search technology adopted by existing service registries is insufficient to retrieve semantically similar services for queries. Although many semantics-aware service discovery approaches have been proposed, they are hard to apply in practice due to the difficulties in ontology construction and semantic annotation. This paper aims to help service requesters (e.g., SBS designers) obtain relevant services accurately with a keyword query by exploiting domain knowledge about service functionalities (i.e., service goals) mined from textual descriptions of services. We firstly extract service goals from services' textual descriptions using an NLP-based method and cluster service goals by measuring their semantic similarities. A query expansion approach is then proposed to help service requesters refine initial queries by recommending similar service goals. Finally, we develop a hybrid service discovery approach by integrating goal-based matching with two practical approaches: keyword-based and topic model-based. Experiments conducted on a real-world dataset show the effectiveness of our approach. <s> BIB007 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Forward Recovery. <s> Recently, mobile applications have become increasingly computation-intensive. However, the energy and the computing capabilities of mobile devices, such as smartphones and tablets, are limited. Mobile cloud computing is becoming a powerful way to tackle this challenge. Offloading computation-intensive tasks to nearby cloudlets can significantly save energy and enhance the computation capabilities of mobile devices. However, determining how to assign task requests to cloudlets while minimising the response time remains a challenging issue. The traditional approach cannot achieve the optimal effect since it ignores the task characteristics and the communication characteristic between the cloudlets. To address this challenge, in this paper, we provide an efficient algorithm for task request assignment that shortens the response time and reduces the network resource consumption.
We evaluate the performance of the proposed task request assignment algorithm through experimental simulations. Simulation results demonstrate that the proposed algorithm is promising. <s> BIB008
For forward recovery, the composite service tries to fix the fault without stopping execution. Retry, replication, and substitution can be used for forward recovery BIB007 . A solution based on forward recovery is proposed in BIB001 BIB006 . Faults can be repaired by substitution: a substitution policy is proposed in BIB004 , which substitutes a subset of component services (including the failed component service) with another equivalent subset. When a fault occurs, all subsets containing the failed component service are identified. Then the subsets that are equivalent to the failed one are determined. Finally, the equivalent subsets are ranked, and the failed subset is substituted by the best equivalent subset. Replication creates redundant component services (replicas) for the composite service. When a request from the user is assigned to all replicas, the technique is called active replication. Otherwise, only one replica acts as the primary one that responds to the request, and a backup replica takes over only after the primary one fails; this technique is called passive replication . WS-Replication, a framework for seamless replication of composite services, is proposed in BIB002 . To increase service availability, the framework permits the deployment of a component service on a set of sites. One of the standout features of WS-Replication is that replication is done while respecting component service autonomy, and only SOAP is used to interact across sites. Moreover, WS-Multicast (one of the major components of WS-Replication) can also be used as a self-governed component for reliable multicast in a component service setting BIB008 . In BIB003 , a distributed replication strategy evaluation and selection framework for fault-tolerant composite services is proposed. Based on the proposed framework, various replication strategies are compared using a theoretical formula and experimental results. Moreover, a strategy selection algorithm based on both objective performance information and subjective requirements of users is proposed. Each of the aforementioned strategies has its own advantages and disadvantages and is employed for specific fault tolerance scenarios. The composite service developer should first analyze the requirements of the user and the possible fault scenario, and then select an appropriate strategy BIB005 .
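As a concrete illustration of the retry-then-substitute flavor of forward recovery, the following minimal Python sketch wraps a service invocation; it assumes each service is a plain callable and that equivalent substitutes are known in advance (all names are illustrative, not taken from the cited frameworks).

import time

def invoke_with_forward_recovery(primary, substitutes, request,
                                 max_retries=2, backoff_s=0.5):
    """Try the primary service; on failure retry it, then fall back to substitutes."""
    for service in [primary] + list(substitutes):
        for attempt in range(max_retries):
            try:
                return service(request)  # success: the composition keeps executing
            except Exception:
                # Assumed transient fault: wait briefly and retry the same service.
                time.sleep(backoff_s * (attempt + 1))
        # Retries exhausted: substitute the faulty service with the next equivalent one.
    raise RuntimeError("forward recovery failed: no candidate service succeeded")

Active replication would instead dispatch the request to all replicas at once and keep the first successful response, while passive replication would promote a backup replica only after the primary fails.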
Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Backward Recovery. <s> Along with the standardization of Web services composition language and the widespread acceptance of composition technologies, Web services composition is becoming an efficient and cost-effective way to develop modern business applications. As Web services are inherently unreliable, how to deliver reliable Web services composition over unreliable Web services is a significant and challenging problem. In this paper, we propose FACTS, a framework for fault-tolerant composition of transactional Web services. We identify a set of high-level exception handling strategies and a new taxonomy of transactional Web services to devise a fault-tolerant mechanism that combines exception handling and transaction techniques. We also devise a specification module and a verification module to assist service designers in constructing fault-handling logic conveniently and correctly. Furthermore, we design an implementation module to automatically implement fault-handling logic in WS-BPEL. A case study demonstrates the viability of our framework and experimental results show that FACTS can improve fault tolerance of composite services with acceptable overheads. <s> BIB001 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Backward Recovery. <s> Web Services (WSs) that provide transactional properties are useful to guarantee reliable Composite WSs (CWSs) execution. In this paper, we propose a framework for efficient, fault tolerant, and correct distributed execution of Transactional CWSs (TCWSs). Our framework relies on WSs replacement and on a compensation protocol to support forward and backward recovery. We represent a TCWS and its corresponding backward recovery process by Colored Petri-Nets (CPNs) and, to ensure correct execution and compensation flows, unfolding processes of the CPNs are followed. We formalize the TCWS execution and recovery processes based on CPN properties. We also present the framework architecture and execution and recovery algorithms. <s> BIB002 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Backward Recovery. <s> In this paper, we propose a service-oriented reliability model that dynamically calculates the reliability of composite web services with rollback recovery based on the real-time reliabilities of the atomic web services of the composition. Our model is a hybrid reliability model based on both path-based and state-based models. Many reliability models assume that failure or error arrival times are exponentially distributed. This is inappropriate for web services, as error arrival times are dependent on the operating state, including the workload of the servers where the web service resides. In this manuscript, we modify our previous model (for software based on the Doubly Stochastic Model and Renewal Processes) to evaluate the reliability of atomic web services. To fix ideas, we developed the case of one simple web service which contains two states, i.e., idle and active states. In real-world applications, where web services could contain quite a large number of atomic services, the calculus as well as the computing complexity increases greatly. To limit our computing efforts and calculus, we chose bounded set techniques that we apply using the previously developed stochastic model.
As a first type of system combination, we proposed to study a scheme based on combining web services into parallel and serial configurations with centralized coordination. In this case, the broker has an acceptance testing mechanism that examines the results returned from a particular web service. If the result is acceptable, then the computation continues to the next web service. Otherwise, it involves rollback and invokes another web service already specified by a checkpoint algorithm. Finally, the acceptance test is conducted using the broker. The broker can be considered a single point of failure. To increase the reliability of the broker introduced in our systems and mask out errors at the broker level, we suggest a modified general scheme based on triple modular redundancy and N-version programming. To imitate a real scenario where errors could happen at any stage of our application and to improve the quality of service (QoS) of the proposed model, we introduce fault-tolerance techniques using an adaptation of the recovery block technique. <s> BIB003 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Backward Recovery. <s> In distributed software contexts, Web Services (WSs) that provide transactional properties are useful to guarantee reliable Transactional Composite WSs (TCWSs) execution and to ensure a consistent whole-system state even in the presence of failures. Failures during the execution of a TCWS can be repaired by forward or backward recovery processes, according to the component WSs' transactional properties. In this paper, we present the architecture and an implementation of a framework, called FaCETa, for efficient, fault tolerant, and correct distributed execution of TCWSs. FaCETa relies on WSs replacement, on a compensation protocol, and on unrolling processes of Colored Petri-Nets to support failures. We implemented FaCETa in a Message Passing Interface (MPI) cluster of PCs in order to analyze and compare the behavior of the recovery techniques and the intrusiveness of the framework. <s> BIB004 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Backward Recovery. <s> One of the key interests in web services is the ability to compose them in order to build more powerful and complex ones running in an interoperable and distributed setting. Several languages, like BPEL, that describe such services have been proposed. Similar to the usual complex systems, web service compositions may exhibit inappropriate behaviors in the presence of failures. Compensation mechanisms are available to express running services recovery in case of failures. This paper addresses the problem of the correct design of web service compositions in case of failures. It presents a novel correct-by-construction formal approach based on refinement using the Event-B method. The proposed approach defines a compensation mechanism to repair failed services at runtime. It addresses not only behavioral aspects but also functional ones through the introduction of repairing invariants whose persistence is enforced during compensation at runtime. Different compensation scenarios and modes are addressed. A formal model for equivalent, degraded and upgraded service compensations relying on the Event-B formalization is defined. The proposal is illustrated on a case study. <s> BIB005
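The acceptance-test-and-rollback scheme summarized above ( BIB003 ) follows the classic recovery block pattern; the Python sketch below is a simplified illustration under the assumption that the request state is a plain dictionary (all names are invented).

def recovery_block(alternates, acceptance_test, request):
    """Run alternates in order; accept the first result passing the acceptance test.
    Rejected or failed attempts are rolled back before trying the next alternate."""
    for alternate in alternates:
        checkpoint = request.copy()        # save state before the attempt
        try:
            result = alternate(request)
            if acceptance_test(result):    # broker-side acceptance test
                return result
        except Exception:
            pass                           # treat a crash like a failed test
        request = checkpoint               # rollback: restore the saved state
    raise RuntimeError("all alternates rejected by the acceptance test")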
When a fault occurs, backward recovery should be adopted if the produced effects need to be compensated BIB005 . Some scholars employed exception handling strategies to realize backward recovery. For example, Liu et al. BIB001 present a framework named FACTS for fault tolerance of transactional composite services. FACTS combines exception handling and transaction techniques to improve the fault tolerance of composite services. Firstly, the framework identifies a set of high-level exception handling strategies. Then, a specification module is designed to help service designers construct correct fault-handling logic. Finally, a module is devised to automatically implement the fault-handling logic in WS-BPEL. An efficient framework for fault tolerance of transactional composite services is proposed in BIB002 . For recovery from faults, the framework realizes a backward recovery method based on unfolding processes of Coloured Petri-Nets, and it can be realized in distributed/shared-memory systems. According to the transactional properties of component services, a framework called FaCETa is proposed in BIB004 . FaCETa employs service replacement and the unrolling processes of Coloured Petri-Nets to tolerate faults. Besides, experimental results show that FaCETa efficiently realizes fault tolerance strategies for transactional composite services with small overhead. An approach that dynamically calculates the composite service's reliability to improve the performance of backward recovery is proposed in BIB003 . Firstly, a model of reliability is presented according to the doubly stochastic model and renewal processes. Then, to help the calculation for complex composite services, a bounded set strategy is briefly presented. Finally, a fault tolerance model is constructed via backward recovery block techniques. Guillaume et al. BIB005 focus on checking the correctness of compensation via invariant preservation. They put forward a correct-by-construction approach, based on refinement and proof, which uses the Event-B method to deal with runtime compensation. The approach can be used as a foundational module for the compensation of run-time composite services. Meanwhile, a formal model is defined for equivalent, degraded, and upgraded service compensations. Backward recovery needs to go back to a consistent state to repair the fault correctly. Therefore, one key issue is how to save the execution state of the composite service; another is how to find an alternative execution path from that consistent state.
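To make compensation concrete, the following saga-style Python sketch assumes that every component registers an inverse ("undo") action as it completes; the names are illustrative and are not the FACTS or FaCETa APIs.

def run_with_compensation(steps):
    """steps: list of (do, undo) pairs of callables; undo cancels a completed do."""
    compensations = []
    results = []
    try:
        for do, undo in steps:
            results.append(do())
            compensations.append(undo)  # remember how to cancel this effect
        return results
    except Exception:
        # Fault: compensate every completed step in reverse order (backward
        # recovery), returning the composition to a consistent state.
        for undo in reversed(compensations):
            undo()
        raise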
Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Checkpoint. <s> This paper proposes a solution for strong mobility of composed Web services. In fact, strong mobility enables a running BPEL process to be migrated from one host to another and to be resumed on the destination host starting from a previous execution state, called a checkpoint, which avoids the high overhead of restarting the composed Web service in case of interruption of the BPEL process. The proposed solution makes use of Aspect-Oriented Programming (AOP) in order to enable dynamic capture and recovery of a BPEL process state. This enables the choice, at runtime, of the instant of the checkpoint and the technique for enacting it. Thus, the proposed approach may be used for self-healing and self-adaptivity of composed Web services, acting in case of failure or QoS violation. An experiment has been performed on a travel agency case study deployed on the AO4BPEL engine. It shows the efficiency and the usability of our approach. <s> BIB001 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Checkpoint. <s> The global transactional property of a Transactional Composite Web Service (TCWS) allows recovery processes if a Web Service (WS) fails during the execution process. The following actions can be performed if a WS fails: retry the faulty WS, substitute the faulty WS, or compensate the executed WSs. In consequence, these fault-tolerance mechanisms ensure the atomicity property of a TCWS with an all-or-nothing endeavor. In this paper, we present a formal definition of a checkpointing approach based on Colored Petri-Net (CPN) properties, in which the execution process and the actions performed in case of failures rely on unrolling processes of CPNs. Our checkpointing approach allows relaxing the atomic transactional property of a TCWS in case of failures. The all-or-nothing transactional property becomes the something-to-all property. A snapshot of the most advanced possible partial result is taken in case of failures and returned to the user (the user gets something), providing the possibility of restarting the TCWS from an advanced execution state to complete the result (the user gets all later), without affecting its original transactional property. We present the execution algorithms with the additional capacity of taking snapshots in case of failures, and experimental results to show the reception of partial outputs due to the relaxation of the all-or-nothing property. <s> BIB002 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Checkpoint. <s> Failures during the execution of Transactional Composite Web Services (TCWSs) can be repaired by forward or backward recovery processes, according to the component WSs' transactional properties. In previous works, we presented TCWS fault-tolerant execution approaches relying on WSs replacement, on a compensation protocol, and on unrolling processes of Colored Petri-Nets (CPNs) to support forward and backward recovery. We represent a TCWS and its corresponding backward recovery process by CPNs. Even though these recovery processes ensure system consistency, backward recovery means that users do not get the desired answer to their queries, and forward recovery could imply long waiting times for users to finally get the desired response.
In this paper, we present an alternative fault-tolerant approach in which, in case of failures, the unrolling process of the CPN controlling the execution of a TCWS is checkpointed and the execution flow goes on as much as possible. In this way, users can have partial responses as soon as they are received and can re-submit the checkpointed CPN to restart its execution from an advanced point of execution (checkpoint). We present the checkpointing algorithm integrated into our previous work. <s> BIB003 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Checkpoint. <s> The ACID (Atomicity, Consistency, Isolation, and Durability) model has played a cornerstone role in service composition to guarantee that Composite Services (CSs) have transactional support and consistent outcomes (the "all-or-nothing" property). However, the classical "all-or-nothing" model is too restrictive for loosely coupled and distributed environments such as the Internet. Some approaches have been proposed to relax atomicity based on transactional properties of services, using compensation mechanisms or providing checkpointing techniques. In this article, we propose a model that measures the fuzzy atomicity of a composite service based on transactional properties and on the checkpointing mechanism, relaxing the "all-or-nothing" property into a new fuzzy "all-something-or-(almost) nothing" property. The proposed measure takes into account the acceptable fuzzy atomicity expressed in the user requirements (i.e., the minimum result that the user can accept), but also the state of the composite service execution. As far as we know, no such model exists. <s> BIB004
A checkpoint consists of execution states of the composite service gathered by the orchestration at a certain time, so that the composite service can return to a previous specific state for fault tolerance . Marzouk et al. BIB001 propose a flexible approach for composite service execution. The approach synchronizes all flow branches of the composite service; then a recovery state that permits saving a consistent checkpoint is constructed. When a fault or a QoS violation occurs, the failed process or a subset of running instances may be migrated to another server and restarted according to the checkpoint image. The traditional "all-or-nothing" property is too restrictive for composite services. Checkpoint techniques can relax the atomicity based on the transactional properties of component services. Based on checkpoints and transactional properties, a model that measures the fuzzy atomicity of a composite service is presented in BIB004 . The "all-or-nothing" attribute is relaxed into a fuzzy "all-something-or-almost-nothing" attribute. Based on Coloured Petri-Nets, a checkpoint approach is proposed in BIB002 . If a fault occurs, the approach relaxes the all-or-nothing attribute by executing a transactional composite Web service as much as possible and taking a snapshot of the faulted state. In other words, the approach returns partial answers to the user as soon as possible. According to the snapshot, the user can resume the composite service without dropping the work previously done. In BIB003 , the unfolding processes of the Coloured Petri-Nets that control the execution of a transactional composite Web service are checkpointed if a fault occurs. In this way, users can first get partial responses as soon as they are obtained, and the composite service can be restarted from an advanced point of execution.
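The "something now, all later" behavior described above can be sketched minimally in Python, assuming the composition is a list of named steps and the snapshot is a plain JSON-serializable dictionary (all names invented for illustration):

import json

def run_with_checkpoints(steps, snapshot_path="checkpoint.json", resume=None):
    """Execute named steps in order; on a fault, persist completed results and stop."""
    state = resume or {"done": [], "results": {}}
    for name, step in steps:
        if name in state["done"]:
            continue                   # already executed before the last fault
        try:
            state["results"][name] = step()
            state["done"].append(name)
        except Exception:
            with open(snapshot_path, "w") as f:
                json.dump(state, f)    # snapshot: the user gets "something" now
            return state               # re-submitting with resume=state gets "all" later
    return state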
Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Dynamic Fault Tolerance Strategy Selection. <s> Based on the framework of service-oriented architecture (SOA), complex distributed systems can be dynamically and automatically composed by integrating distributed Web services provided by different organizations, making dependability of the distributed SOA systems a big challenge. In this paper, we propose a QoS-aware fault-tolerant middleware to attack this critical problem. Our middleware includes a user-collaborated QoS model, various fault tolerance strategies, and a context-aware algorithm for determining the optimal fault tolerance strategy for both stateless and stateful Web services. The benefits of the proposed middleware are demonstrated by experiments, and the performance of the optimal fault tolerance strategy selection algorithm is investigated extensively. As illustrated by the experimental results, fault tolerance for distributed SOA systems can be made efficient, effective and optimized by the proposed middleware. <s> BIB001 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Dynamic Fault Tolerance Strategy Selection. <s> An increasing amount of today's software systems is developed by dynamically composing available atomic services to form a single service that responds to consumers' demand. These composite services are distributed across the network, adapted dynamically during run-time, and still required to work correctly and be available on demand. The development of these kinds of modern services requires new modeling and analysis methods and techniques to enable service reliability during run-time. In this paper, we define the required phases of composite service design and execution to achieve reliable composite service. These phases are described in the form of a framework. We perform a literature survey of existing methods and approaches for reliable composite services to find out how they match the criteria of our framework. The contribution of the work is to reveal the current status of the research field of reliable composite service engineering. <s> BIB002 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Dynamic Fault Tolerance Strategy Selection. <s> Service-oriented computing and cloud computing are playing critical roles in supporting business collaboration over the Internet. Thanks to the latest developments in computing technologies, various large-scale, evolving, and rapidly growing service ecosystems have emerged. However, service failures greatly hamper the usability and reputation of service ecosystems. In previous work, service failure has not been adequately studied from an ecosystem's perspective. To address this gap, we propose a service failure analysis framework based on a complex network model of a service ecosystem. This framework comprises a feature model of failed services and several service failure impact indicators. By applying the framework, empirical analysis of failed service features and failure impact assessment can be implemented more easily and precisely. Moreover, to provide failure tolerance strategies for service ecosystems, a novel composition-based service substitution method is designed to replace failed services with functionally similar ones, such that the service systems are more robust when a failure occurs.
As the new substitution method requires fewer structural data of services, it is more convenient to apply in the present RESTful (Representational State Transfer) service environment. Both the framework and the service substitution method are tested on a real-world data set, and their usability and efficiency are demonstrated. <s> BIB003 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Dynamic Fault Tolerance Strategy Selection. <s> Functionally equivalent web services can be composed to form more reliable service-oriented systems. However, the choice of fault tolerance strategy can have a significant effect on the quality-of-service (QoS) of the resulting service-oriented systems. In this paper, we investigate the problem of selecting an optimal fault tolerance strategy for building reliable service-oriented systems. We formulate the user requirements as local and global constraints and model the selection of fault tolerance strategy as an optimization problem. A heuristic algorithm is proposed to efficiently solve the optimization problem. Fault tolerance strategy selection for semantically related tasks is also investigated in this paper. Large-scale real-world experiments are conducted to illustrate the benefits of the proposed approach. The experimental results show that our problem modeling approach and the proposed selection algorithm make it feasible to manage the fault tolerance of complex service-oriented systems both efficiently and effectively. <s> BIB004 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Dynamic Fault Tolerance Strategy Selection. <s> Web services have attracted much attention from distributed application designers and developers because of their roles in abstraction and interoperability among heterogeneous software systems, and a growing number of distributed software applications have been published as Web services on the Internet. Faced with the increasing numbers of Web services and service users, researchers in the services computing field have attempted to address a challenging issue, i.e., how to quickly find the suitable ones according to user queries. Many previous studies have been reported towards this direction. In this paper, a novel Web service discovery approach based on topic models is presented. The proposed approach mines common topic groups from the service-topic distribution matrix generated by topic modeling, and the extracted common topic groups can then be leveraged to match user queries to relevant Web services, so as to make a better trade-off between the accuracy of service discovery and the number of candidate Web services. Experiment results conducted on two publicly-available data sets demonstrate that, compared with several widely used approaches, the proposed approach can maintain the performance of service discovery at an elevated level by greatly decreasing the number of candidate Web services, thus leading to faster response time. <s> BIB005 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Dynamic Fault Tolerance Strategy Selection. <s> Reliability is critical for choosing, ranking and composing Web services. However, some common situations, such as fault-tolerant strategies and the dynamic operational profile, are not considered in existing reliability analysis.
To solve these problems, a tree-based composition structure model is proposed, which is called the Fault-tolerant Composite Web Services Tree (FCWS-T). We separate the nodes in FCWS-T into two types, namely control nodes and service nodes, so that various composition structures can be represented explicitly. Then, a reliability simulation method is proposed based on FCWS-T, and it can effectively analyze the reliability of a complex Web service. Experiments on a financial management service show the effectiveness of our approach for fault-tolerant Web service compositions. <s> BIB006 </s> Overview on Fault Tolerance Strategies of Composite Service in Service Computing <s> Dynamic Fault Tolerance Strategy Selection. <s> A service-oriented architecture platform consists of distributed web services (WSes). A system needs dynamic composition of WSes. Reliability of a composition process requires detection of faults and tolerance to the faults. A composition process can end abruptly due to the occurrence of faults. The present paper proposes a quality of service (QoS) based fault-detection and fault-tolerance approach using dynamic orchestration. The approach also considers the QoS of WSes and user preferences at runtime. The approach uses reliability in two phases. Firstly, a trust-based web-service filtering mechanism is used. This achieves reliability at the component level before a fault occurs. Next, whenever a fault is detected in a process, a decision for dynamic recovery is taken, based on the optimum QoS ranking of the WSes. The steps in the proposed approach provide reliability at the component as well as the composition level. They work for all orchestration-based composition models. An implementation followed by an experimental study shows that the proposed approach produces timely as well as optimal results in the presence of faults. <s> BIB007
Different types of faults may happen during the execution of a composite service; therefore, different fault tolerance strategies should be employed to recover from them BIB007 . Several studies investigate how to select the most appropriate fault tolerance strategy BIB003 . The fault tolerance strategy selection has a significant effect on the QoS of the composite service BIB005 . Therefore, Zheng et al. BIB004 investigated the problem of selecting an optimal fault tolerance strategy for building reliable composite services. They formulated the user's requirements as local constraints and global constraints and modelled the fault tolerance strategy selection as an optimization problem; a heuristic algorithm is presented to solve the optimization problem efficiently. In BIB001 , a QoS-aware fault-tolerant middleware is proposed to ensure the dependability of composite services. The middleware includes a user-collaborated QoS model, a set of fault tolerance strategies, and a context-aware algorithm that dynamically and automatically determines the optimal fault tolerance strategy for both stateful and stateless composite services. To maintain the required QoS even in the presence of faults, a novel approach is proposed in BIB002 . This approach builds on top of the execution system of the composite service and carries out QoS monitoring; the result of the QoS monitoring determines the selection of the fault tolerance strategy in case of a fault. To select an appropriate fault tolerance strategy, Shu et al. BIB006 considered that the reliability of composite services must be analyzed. They proposed a tree-based composition structure model called the Fault-Tolerant Composite Web Service Tree (FCWS-T). Firstly, nodes in FCWS-T are separated into two types, control nodes and service nodes. Then, a reliability simulation method is put forward based on FCWS-T, which can efficiently analyze the reliability of a complex composite service. Finally, an appropriate fault tolerance strategy is selected according to the reliability. Using a priority selector and a fault handler, an approach of fault tolerance for service-oriented architecture is put forward in . Firstly, the approach quickly selects the first-priority-level scheme when a fault has been detected. If the fault cannot be handled, the second-priority-level scheme is selected by the fault handler for average performance. If the fault still cannot be handled, the lowest-priority-level scheme is employed to handle it.
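The constrained-optimization view of strategy selection in BIB004 can be illustrated with a small brute-force Python sketch; the candidate tuples and the latency/reliability trade-off below are invented for illustration, and a real system would use the cited heuristics instead of exhaustive enumeration.

from itertools import product

def select_strategies(tasks, candidates, max_latency):
    """candidates[t]: list of (strategy_name, latency, reliability) tuples.
    Pick one fault tolerance strategy per task that maximizes end-to-end
    reliability subject to a global latency constraint."""
    best, best_rel = None, -1.0
    for combo in product(*(candidates[t] for t in tasks)):
        total_latency = sum(latency for _, latency, _ in combo)
        reliability = 1.0
        for _, _, rel in combo:
            reliability *= rel  # sequential composition: reliabilities multiply
        if total_latency <= max_latency and reliability > best_rel:
            best = {t: name for t, (name, _, _) in zip(tasks, combo)}
            best_rel = reliability
    return best, best_rel

# Hypothetical usage: select_strategies(["pay", "ship"],
#     {"pay": [("retry", 2, 0.99), ("active-replication", 1, 0.999)],
#      "ship": [("compensation", 1, 0.95)]}, max_latency=3)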
Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> INTRODUCTION <s> For 100 years, there has been no change in the basic structure of the electrical power grid. Experiences have shown that the hierarchical, centrally controlled grid of the 20th Century is ill-suited to the needs of the 21st Century. To address the challenges of the existing power grid, the new concept of smart grid has emerged. The smart grid can be considered as a modern electric power grid infrastructure for enhanced efficiency and reliability through automated control, high-power converters, modern communications infrastructure, sensing and metering technologies, and modern energy management techniques based on the optimization of demand, energy and network availability, and so on. While current power systems are based on a solid information and communication infrastructure, the new smart grid needs a different and much more complex one, as its dimension is much larger. This paper addresses critical issues on smart grid technologies primarily in terms of information and communication technology (ICT) issues and opportunities. The main objective of this paper is to provide a contemporary look at the current state of the art in smart grid communications as well as to discuss the still-open research issues in this field. It is expected that this paper will provide a better understanding of the technologies, potential advantages and research challenges of the smart grid and provoke interest among the research community to further explore this promising research area. <s> BIB001 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> INTRODUCTION <s> Information and communication technologies (ICT) represent a fundamental element in the growth and performance of smart grids. A sophisticated, reliable and fast communication infrastructure is, in fact, necessary for the connection among the huge amount of distributed elements, such as generators, substations, energy storage systems and users, enabling a real time exchange of data and information necessary for the management of the system and for ensuring improvements in terms of efficiency, reliability, flexibility and investment return for all those involved in a smart grid: producers, operators and customers. This paper overviews the issues related to the smart grid architecture from the perspective of potential applications and the communications requirements needed for ensuring performance, flexible operation, reliability and economics. <s> BIB002 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> INTRODUCTION <s> The smart grid is widely considered to be the informationization of the power grid. As an essential characteristic of the smart grid, demand response can reschedule the users' energy consumption to reduce the operating expense from expensive generators, and further to defer the capacity addition in the long run. This survey comprehensively explores four major aspects: 1) programs; 2) issues; 3) approaches; and 4) future extensions of demand response. Specifically, we first introduce the means/tariffs that the power utility takes to incentivize users to reschedule their energy usage patterns. Then we survey the existing mathematical models and problems in the previous and current literatures, followed by the state-of-the-art approaches and solutions to address these issues. 
Finally, based on the above overview, we also outline the potential challenges and future research directions in the context of demand response. <s> BIB003 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> INTRODUCTION <s> The current globalization is faced with the challenge of meeting the continuously growing worldwide demand for capital and consumer goods while simultaneously ensuring a sustainable evolvement of human existence in its social, environmental and economic dimensions. In order to cope with this challenge, industrial value creation must be geared towards sustainability. Currently, the industrial value creation in the early industrialized countries is shaped by the development towards the fourth stage of industrialization, the so-called Industry 4.0. This development provides immense opportunities for the realization of sustainable manufacturing. This paper will present a state-of-the-art review of Industry 4.0 based on recent developments in research and practice. Subsequently, an overview of different opportunities for sustainable manufacturing in Industry 4.0 will be presented. A use case for the retrofitting of manufacturing equipment as a specific opportunity for sustainable manufacturing in Industry 4.0 will be exemplarily outlined. <s> BIB004 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> INTRODUCTION <s> There is a large number of European Union (EU) projects that deal with Smart Grids research and deployment. Overall, they provide a substantial amount of knowledge that can be mined to gain useful insights for future projects and on-going roll-outs of Smart Grid related utilities. In the current paper, we focus on Smart Meters and we evaluate different communication-related architectural styles within an Advanced Metering Infrastructure (AMI). In particular, we derive from the Joint Research Centre (JRC) Smart Grids projects review three different layouts for Smart Meters two-way communication: i) mobile Peer-to-Peer (P2P), ii) data concentrator-supported, and iii) gateway-supported. After the discussion about the architectural styles, we look at how common such choices are within EU projects deployments. Overall, we found a predominance of both gateway / data concentrator architectures over P2P mobile communication layouts. The main outcome of the paper is a mapping of the three architectural styles to the deployments within the selected EU projects. Based on the map, we debate about the implications of such deployments within the current and future Smart Grids context. <s> BIB005 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> INTRODUCTION <s> The successful transformation of conventional power grids into Smart Grids (SG) will require robust and scalable communication network infrastructure. The SGs will facilitate bidirectional electricity flow, advanced load management, a self-healing protection mechanism and advanced monitoring capabilities to make the power system more energy efficient and reliable. In this paper SG communication network architectures, standardization efforts and details of potential SG applications are identified. The future deployment of real-time or near-real-time SG applications is dependent on the introduction of a SG compatible communication system that includes a communication protocol for cross-domain traffic flows within the SG.
This paper identifies the challenges within the cross-functional domains of the power and communication systems that current research aims to overcome. The status of SG-related machine-to-machine communication system design is described and recommendations are provided for diverse new and innovative traffic features. <s> BIB006 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> INTRODUCTION <s> Smart grids (SGs) have a central role in the development of the global power sector. Cost-benefit analyses and environmental impact assessments are used to support policy on the deployment of SG systems and technologies. However, the conflicting and widely varying estimates of costs, benefits, greenhouse gas (GHG) emission reduction, and energy savings in the literature leave policy makers struggling with how to advise regarding SG deployment. Identifying the causes for the wide variation of individual estimates in the literature is crucial if evaluations are to be used in decision-making. This paper (i) summarizes and compares the methodologies used for economic and environmental evaluation of SGs, (ii) identifies the sources of variation in estimates across studies, and (iii) points to gaps in research on economic and environmental analyses of SG systems. Seventeen studies (nine articles and eight reports published between 2000 and 2015) addressing the economic costs versus benefits, energy efficiency, and GHG emissions of SGs were systematically searched, located, selected, and reviewed. Their methods and data were subsequently extracted and analysed. The results show that no standardized method currently exists for assessing the economic and environmental impacts of SG systems. The costs varied between 0.03 and 1143 M€/yr, while the benefits ranged from 0.04 to 804 M€/yr, suggesting that SG systems do not result in cost savings. The primary energy savings ranged from 0.03 to 0.95 MJ/kWh, whereas the GHG emission reduction ranged from 10 to 180 g CO2/kWh, depending on the country grid mix and the system boundary of the SG system considered. The findings demonstrate that although SG systems are energy efficient and reduce GHG emissions, investments in SG systems may not yield any benefits. Standardizing some methodologies and assumptions, such as discount rates and time horizon, and scrutinizing some key input data will result in more consistent estimates of costs and benefits, GHG emission reduction, and energy savings. <s> BIB007 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> INTRODUCTION <s> The increasing penetration of renewable energy on the transmission and distribution power network is driving the adoption of the two-way power flow control, data and communications needed to meet the dependency of balancing generation and load. This creates an environment where power and information flow seamlessly in real time to enable reliable and economically viable energy delivery, leading to the advent of the Internet of Energy (IoE) as well as the rise of Internet of Things (IoT) based smart systems. The evolution of IT to IoT has shown that an information network can be connected in an autonomous way via routers from operating system (OS) based computers and devices to build a highly intelligent eco-system.
Conceptually, we are applying the same methodology to the IoE concept so that Energy Operating System (EOS) based assets and devices can be developed into a distributed energy network via an energy gateway and self-organized into a smart energy eco-system. This paper introduces a laboratory-based, IIoT-driven software and controls platform developed on the NICE Nano-grid as part of a NICE smart system initiative for the Shenhua group. The goal of this effort is to develop an open-architecture-based Industrial Smart Energy Consortium (ISEC) to attract industrial partners, academic universities, module suppliers, equipment vendors and related stakeholders to explore and contribute to a test-bed-centric open laboratory template and platform for next-generation energy-oriented smart industry applications. In the meantime, ISEC will play an important role in driving interoperability standards for the mining industry so that the era of unmanned underground mining operation can become a reality, as well as increasing safety regulation enforcement. <s> BIB008 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> INTRODUCTION <s> In today's ecosystem of energy management, the contribution of the Internet of Things (IoT) to smart grids has acquired immense potential due to its multi-faceted advantages in various fields. IoT paves a way to associate and virtually control everything in almost every domain of society. Conversely, the smart grid framework has attracted the attention of the universal research community, and the idea of merging IoT with the smart grid demonstrates enormous growth potential. This review paper highlights the most significant research works that focus on applying IoT to smart grids. This work also addresses many innovative approaches used in IoT and smart grids along with their respective applications in various fields. The objective of this work is to benefit scientists and new entrants in the field of IoT and smart grids and to open up awareness of new interdisciplinary research. <s> BIB009
The fourth industrial revolution, known as Industry 4.0, has received massive attention and extensive discussion from researchers and manufacturers. It is based on the use of smart production processes and near-full automation. It aims to enhance the manufacturing process, enable rapid growth of industry and provide supply and demand integration. The Internet of Things (IoT), big data analytics, Cyber-Physical Systems (CPS) and automation are the major components of Industry 4.0. People and resources are now moving away from a centralized approach toward decentralized production BIB008 . With the constant increase of electricity prices, the changing climate and the exhaustion of energy resources, the traditional power grid does not have the capacity to meet the increased demand for power needed to support advanced technology and industrial innovation BIB001 . The smart grid (SG) emerged as a suitable solution for addressing these challenges; smart grids enable the fourth stage of the industrial revolution, known as Smart Grid Industry 4.0 (SGI 4.0). In SGI 4.0, Information and Communications Technologies (ICT) can play a major role in increasing reliability, stability and efficiency compared to the traditional grid . The concept of Industry 4.0 is not limited to the factory but also encompasses the entire life cycle of the product, from production and supplier to the end user. The automation of this life cycle can be achieved by the utilization of ICTs such as IoT, cloud computing, machine learning and CPS. Smart grids play a role in every step of this product life cycle. Figure 1 shows the specific function of the smart grid in supplying energy for the smart factory throughout the life cycle of the product, between elements of different systems. The smart meter keeps track of all the relevant information about the flow of electricity in the plant. Data provided by smart meters and consumption statistics can be used via machine learning to provide advanced decision making. Figure 1 . Product life cycle BIB004 There are currently several smart grid projects, in the European Union BIB005 , China, the US, and many other countries, all at varying levels of integration and ranging in scale. It is expected that more countries will move in that direction, since SGs can decrease carbon dioxide (CO2) and other greenhouse gas emissions. The global movement away from fossil fuels toward renewable, environmentally friendly energy resources (green energy) such as wind, vibration, solar, etc. is enabled by SG systems BIB007 . SGs are also capable of reducing waste in energy production while increasing reliability, by providing energy levels that closely match demand instead of producing more than required, causing unnecessary waste, or less than needed, causing shortages BIB003 . With the utilization of machine learning, SGs are able to make important decisions based on the demand for energy, such as real-time pricing, automated maintenance, scheduling of power usage and optimization of energy consumption. This allows smart grids to improve efficiency and enables power systems to operate independently with less human interference. Thus, in a smart grid system there is a real-time interconnection between worker, customer and supplier with information exchange, building a highly flexible power generation model [8] . The relationship between Industry 4.0 and SG can be symbiotic: the smart grid can support efficient energy systems that enable the industry.
The industry can in turn provide a more compatible environment for smart energy consumption. The shift from the traditional Power Grid (PG) to the next generation of SGI 4.0 systems is a difficult task and requires robust design and a compatible communications network infrastructure to overcome any drawbacks of the existing PG BIB002 . Furthermore, SGs are usually a combination of several systems and technologies, making them an interesting area of research and development. For researchers interested in this topic, it is necessary to have a general understanding of all these different aspects. Thus, in this paper we discuss the main concepts regarding SG, its architecture, enabling ICTs and current communications technologies used in SG from the perspective of Industry 4.0. There are some surveys and studies about SG BIB008 BIB007 BIB002 BIB009 BIB006 that cover a wide range of topics such as communications networks, architecture, security issues, or impact. However, most of them focus on one or two aspects, and they do not cover the context of Industry 4.0. Therefore, the purpose of this study is to provide researchers with a broad basic understanding of SG and offer resources to further explore the topic of SG, by discussing SGI 4.0, its architecture, the advances in ICTs that enable it and the challenges involved in its implementation, as well as future research directions in the field of SGI 4.0. The rest of this paper is structured as follows: Section II presents a detailed overview of the SGI 4.0 paradigm architecture, while Section III discusses the components of smart grids, focusing on IoT, CPS and cloud computing. Section IV presents an overview of communications technologies in SGI 4.0, while challenges with SGI 4.0 are discussed in Section V. Finally, Sections VI and VII provide future research directions and conclusions.
Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> SMART GRID ARCHITECTURE <s> Smart Grid (SG) provides enhancement to existing grids with two-way communication between the utility, sensors, and consumers, by deploying smart sensors to monitor and manage power consumption. However due to the vulnerability of SG, secure component authenticity necessitates robust authentication approaches relative to limited resource availability (i.e. in terms of memory and computational power). SG communication entails optimum efficiency of authentication approaches to avoid any extraneous burden. This systematic review analyses 27 papers on SG authentication techniques and their effectiveness in mitigating certain attacks. This provides a basis for the design and use of optimized SG authentication approaches. <s> BIB001 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> SMART GRID ARCHITECTURE <s> Abstract This paper deals with smart grid concept and its reliability in presence of renewable energies. Around the globe an adjustment of electric energy is required to limit CO2 gas emission, preserve the greenhouse, limit pollution, fight climate change and increase energy security. Subsequently renewable energy expansion is the real test for designers and experts of smart grid system. This initiative has made significant progress toward the modernization and growth of the electric utility infrastructure and aims to integrate it into today’s advanced communication era, both in function and in architecture. The study is focused on the difference between a conventional grid and a smart grid concept and the integration of renewable energy in a smart grid system where grid control is a must for energy management. Assuring a good grid reliability, taking the right control measures in order to preserve continuous electricity supply for the customers are challenges highlighted in the present paper. <s> BIB002
Designing a robust communications network infrastructure is important for SGI 4.0 to achieve reliable and efficient operation. There are multiple SG architecture models, but they all follow a similar multilayered structure. BIB002 defines the three main layers as: a. Power systems layer: responsible for generating and delivering electrical energy to the users, similar to a traditional PG. b. Communications layer: provides interconnection between all the system components by collecting data from sensors and end-user interfaces to transmit them to data centers and vice versa. c. Applications layer: in this layer, information is processed to issue monitoring and control messages, as well as to use the data for applications such as demand management, automatic meter reading and detection of fraud or misuse. The three layers are illustrated in Figure 2 . SG systems depend on the use of a number of networks that vary in size and location BIB001 , such as: a. Home Area Network (HAN), or customer area network, which connects smart appliances and devices to a smart meter inside the house. HANs have a short range and are capable of communicating reliably at low data rates, which lowers implementation costs and energy consumption. b. Building Area Network (BAN): similar to a HAN but covers larger buildings and can consist of multiple smaller networks. c. Industry Area Network (IAN): like a BAN but more complex and specialized for factories and industrial buildings. d. Neighborhood Area Network (NAN): responsible for connecting HANs, BANs and IANs to the WAN and for aggregating metering data from thousands of smart meters. Due to their larger scope, NANs require higher data rates. e. Wide Area Network (WAN): utilized by NANs to forward the electricity reports to the main control center. WANs require very high data rates and long coverage distances; optical networks are commonly used as the communications medium.
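To make the tiered structure concrete, the following minimal Python sketch (not taken from the cited works; all coverage and rate figures are illustrative assumptions) models the HAN-NAN-WAN hierarchy and the path a smart-meter report would take toward the control center.

```python
from dataclasses import dataclass

# Hypothetical, indicative figures only -- real deployments vary widely.
@dataclass
class GridNetwork:
    name: str             # e.g. "HAN", "NAN", "WAN"
    coverage_m: int       # approximate coverage radius in meters
    min_rate_kbps: float  # minimum data rate the tier must sustain

# A toy three-tier hierarchy mirroring the layered structure above.
HAN = GridNetwork("HAN", coverage_m=50, min_rate_kbps=10)
NAN = GridNetwork("NAN", coverage_m=5_000, min_rate_kbps=1_000)
WAN = GridNetwork("WAN", coverage_m=100_000, min_rate_kbps=100_000)

def aggregation_path(meter_id: str) -> list[str]:
    """A smart-meter report climbs the hierarchy toward the control center."""
    return [f"meter:{meter_id}", HAN.name, NAN.name, WAN.name, "control-center"]

print(aggregation_path("m-0042"))
# ['meter:m-0042', 'HAN', 'NAN', 'WAN', 'control-center']
```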
Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Internet of Things <s> Utilizing Internet of Things (IoT) technology in smart grid is an important approach to speed up the informatization of power grid system, and it is beneficial for effective management of the power grid infrastructure. Disaster prevention and reduction of power transmission line is one of the most important application fields of IoT. Advanced sensing and communication technologies of IoT can effectively avoid or reduce the damage of natural disasters to the transmission lines, improve the reliability of power transmission and reduce economic loss. Focused on the characteristic of the construction and development of smart grid, this paper introduced the application of IoT in online monitoring system of power transmission line. <s> BIB001 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Internet of Things <s> Abstract In today's ecosystem of energy management, the contribution of Internet of Things (IoT) to smart grids has acquired immense potential due to its multi-faceted advantages in various fields. IoT paves a way to associate and virtually control everything in almost every domain of society. Conversely, the smart grid framework attracted the attention of the universal research community and the idea of merging IoT with smart grid together demonstrates enormous growth potential. This review paper highlights the most significant research works that focus on applying IoT to smart grids. This work also addresses many innovative approaches used in IoT and smart grids along with their respective applications in various fields. The objective of this work is to benefit scientists and new entrants in the field of IoT and smart grids opens up awareness for new interdisciplinary research. <s> BIB002
A network that connects any object to the internet via exchange protocols to communicate monitoring, management, tracking and identification information between different smart devices is known as the Internet of Things (IoT). It has become the focus of research in various applications over the last couple of years, and has allowed for connecting a multitude of network-embedded devices used in daily life to the internet. IoT is also considered revolutionary in that it adds functionality to existing network systems and allows them to provide solutions to time-critical applications in many fields such as healthcare, manufacturing, logistics, military, retail, etc. . IoT technology plays a key role in SGs, as it enables the transfer of data between the various components of the SG efficiently. Blackout prevention and power loss reduction are the primary applications of IoT systems in a smart grid. IoT and advanced sensing technologies are required for lowering the impact of natural disasters, improving the reliability of energy transmission lines and reducing further loss of power BIB001 . For the SG to achieve successful connectivity between users and applications, some components are required, such as sensors, smart energy meters, smart inverters for applications using solar energy, grid monitoring controls, substation feeders and network interfaces. All of these components collectively work to transfer data accurately in real time BIB002 .
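As a rough illustration of the kind of periodic telemetry such components exchange, here is a self-contained Python sketch of a simulated smart meter emitting JSON readings; the meter id, topic name and publish stub are hypothetical stand-ins for a real IoT transport such as MQTT or CoAP.

```python
import json, random, time

def read_meter() -> dict:
    """Simulated smart-meter sample; a real device would read hardware registers."""
    return {
        "meter_id": "m-0042",           # hypothetical identifier
        "timestamp": int(time.time()),
        "kwh": round(random.uniform(0.1, 2.5), 3),
        "voltage": round(random.gauss(230, 2), 1),
    }

def publish(topic: str, payload: str) -> None:
    """Stand-in for an IoT transport publish (e.g. MQTT/CoAP) -- printed here."""
    print(f"[{topic}] {payload}")

# Periodic reporting loop (shortened to 3 samples for the sketch).
for _ in range(3):
    publish("grid/han/meter/m-0042", json.dumps(read_meter()))
    time.sleep(1)
```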
Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Cyber Physical Systems <s> The smart grid a new generation of standard power distribution grid. The communication infrastructure is critical for the successful operation of the modern smart grids. The use of communication technologies ensures the reduction of energy consumption, optimal operation of the smart grid and coordination between all smart grids' components from generation to the end users. This paper presents an overview of existing communication technologies such as ZigBee, WLAN, cellular communication, WiMAX, Power Line Communication (PLC), their implementation in smart grids, advantages and disadvantages. Moreover, the paper shows comparison of communication infrastructure between the legacy grid and the smart grid and smart grid communication standards. The paper also presents research challenges and future trends in communication systems for smart grid application. <s> BIB001 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Cyber Physical Systems <s> This paper provides a study of the smart grid projects realised in Europe and presents their technological solutions with a focus on smart metering Low Voltage (LV) applications. Special attention is given to the telecommunications technologies used. For this purpose, we present the telecommunication technologies chosen by several European utilities for the accomplishment of their smart meter national roll-outs. Further on, a study is performed based on the European Smart Grid Projects, highlighting their technological options. The range of the projects analysed covers the ones including smart metering implementation as well as those in which smart metering applications play a significant role in the overall project success. The survey reveals that various topics are directly or indirectly linked to smart metering applications, like smart home/building, energy management, grid monitoring and integration of Renewable Energy Sources (RES). Therefore, the technological options that lie behind such projects are pointed out. For reasons of completeness, we also present the main characteristics of the telecommunication technologies that are found to be used in practice for the LV grid. <s> BIB002
A Cyber-Physical System (CPS) is a system that effectively integrates cyber and physical elements. An ideal CPS consists of a computing system, networking tools and physical components such as sensors. The physical aspects of a smart grid are monitored by controllers ; by connecting the sensors via a communications network, it is possible to keep track of the overall status of the smart grid and its working conditions, and the sensors send all relevant data to the controllers to take action BIB001 . Smart grids integrate the electricity network infrastructure (physical systems) and the cyber systems (sensors, actuators, etc.), and exhibit CPS characteristics such as the integration of the virtual and real worlds in a dynamic environment, where different scenarios from the power grid (physical system) are fed to the CPS for simulation-mode adjustment in order to influence how the physical system performs in the future BIB002 . CPS technologies allow the smart grid to perform real-time analysis and measurement to improve decision-making capability, power consumption, safety and cost reduction.
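The cyber-physical feedback loop described above can be caricatured in a few lines of Python: a sensed grid frequency is compared against a setpoint and generation is nudged accordingly. The gains and the physical-response model are invented for illustration, not taken from any cited system.

```python
# Toy proportional controller keeping grid frequency near 50 Hz.
# All constants are illustrative, not tuned to any real plant.
SETPOINT_HZ = 50.0
KP = 0.8              # proportional gain (hypothetical)

freq = 49.6           # measured frequency from a sensor
generation_mw = 100.0

for step in range(5):
    error = SETPOINT_HZ - freq                # cyber side: compute control error
    generation_mw += KP * error * 10          # actuate: nudge generation
    freq += (generation_mw - 100.0) * 0.004   # crude physical response model
    print(f"step {step}: f={freq:.3f} Hz, gen={generation_mw:.1f} MW")
```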
Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Cloud Computing <s> Smart Grid Network (SGN) is one of the innovative trends towards efficient and intelligent use of the conventional and unconventional resources of energy with respect to electric power generation, transmission and distribution. Future smart grids are expected to have reliable, efficient, secured, and cost-effective power management with the implementation of distributed architecture. To focus on these requirements, in this paper we provide a comprehensive research on different cloud computing applications for the smart grid architecture, in three different areas-energy management, cloud monitoring and security. In these areas, the utility of cloud computing applications is discussed, while giving directions on future opportunities for the development of the smart grid. <s> BIB001 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Cloud Computing <s> With the rapid pace in the evolution and development of technology, the demand of electrical energy is also increasing. Beside the production of energy from traditional and renewable energy sources, the energy management is also required to control the consumption of energy in commercial, industrial and residential houses. Improvement in technologies while reduction in cost has enabled consumers to interconnect the smart devices for reducing cost and energy consumption, this is called internet of things (IoTs). Such increase in the number of smart systems and energy management systems cause a huge amount of data which cannot be processed on traditional system. It requires high computing power and high storage which may be provided by cloud computing. Cloud computing provide resources to customers on demand with low investment and operational cost. The cloud resources are flexible, efficient, scalable and secure. In this paper we simulate the use of cloud computing in smart grid. The datacenters in cloud collect the building’s data, process it and send the results to the building. In this study, we calculate the total response time to each building, the number of requests coming from each building per our, the processing time of each datacenter and the cost of each datacenter (CRRP). The results are useful for energy service providers to select the optimal processing and data storage resources. <s> BIB002
Cloud computing provides applications and services with data storage and processing capability over the internet. Cloud computing pools resources to eliminate the need for dedicated physical systems and can allow for higher levels of automation. According to BIB001 , this offers three main benefits: first, infrastructure cost reduction, since the resources are already in place; second, outsourcing of maintenance, which lowers both cost and risk; and third, scalability and ease of implementation, since upgrades can be applied without disruption. Cloud computing is built around offering 'things' as services, which gives us the three main modules of cloud services BIB002 : a. Infrastructure as a Service (IaaS): in this model, virtual infrastructure is provided for a range of functionality, most notably data storage, to contain the huge amount of user data and the virtual machines that serve as data centers for the grid. b. Platform as a Service (PaaS): this model offers developers resources to develop applications and run them on virtual platforms. Since the cloud has effectively unlimited resources, this can simplify software development and expand the range of applications that can be developed for the grid. c. Software as a Service (SaaS): this model gives users an interface to built-in cloud applications. This could open the door to developing smart grid applications that involve user input and preferences in energy distribution by giving users an interface to interact with the grid, without any configuration or installation. Cloud computing has been instrumental in allowing the smart grid to achieve real-time data storage and processing; it also reduces the costs required for the grid to expand without compromising availability, and it will continue to play a role in its development moving forward.
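As a toy example of the kind of decision cloud resources enable (in the spirit of the datacenter response-time simulation in BIB002 , but with invented numbers), the sketch below picks the datacenter minimizing total response time for a building's data.

```python
# Hypothetical datacenter stats (processing time + network delay, in ms).
datacenters = {
    "dc-east":  {"proc_ms": 12.0, "net_ms": 40.0},
    "dc-west":  {"proc_ms": 9.0,  "net_ms": 75.0},
    "dc-local": {"proc_ms": 20.0, "net_ms": 5.0},
}

def best_datacenter(stats: dict) -> str:
    """Pick the datacenter minimizing total response time for a building's data."""
    return min(stats, key=lambda dc: stats[dc]["proc_ms"] + stats[dc]["net_ms"])

print(best_datacenter(datacenters))  # -> 'dc-local' (25 ms total)
```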
Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Data Transmission Technology <s> The smart grid a new generation of standard power distribution grid. The communication infrastructure is critical for the successful operation of the modern smart grids. The use of communication technologies ensures the reduction of energy consumption, optimal operation of the smart grid and coordination between all smart grids' components from generation to the end users. This paper presents an overview of existing communication technologies such as ZigBee, WLAN, cellular communication, WiMAX, Power Line Communication (PLC), their implementation in smart grids, advantages and disadvantages. Moreover, the paper shows comparison of communication infrastructure between the legacy grid and the smart grid and smart grid communication standards. The paper also presents research challenges and future trends in communication systems for smart grid application. <s> BIB001 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Data Transmission Technology <s> This paper provides a study of the smart grid projects realised in Europe and presents their technological solutions with a focus on smart metering Low Voltage (LV) applications. Special attention is given to the telecommunications technologies used. For this purpose, we present the telecommunication technologies chosen by several European utilities for the accomplishment of their smart meter national roll-outs. Further on, a study is performed based on the European Smart Grid Projects, highlighting their technological options. The range of the projects analysed covers the ones including smart metering implementation as well as those in which smart metering applications play a significant role in the overall project success. The survey reveals that various topics are directly or indirectly linked to smart metering applications, like smart home/building, energy management, grid monitoring and integration of Renewable Energy Sources (RES). Therefore, the technological options that lie behind such projects are pointed out. For reasons of completeness, we also present the main characteristics of the telecommunication technologies that are found to be used in practice for the LV grid. <s> BIB002 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Data Transmission Technology <s> This chapter focuses on the fundamentals and technical characteristics which will bring up significant effects on the interference and solutions. Since it is much easier for ZigBee to be interfered with by WiFi, this chapter will discuss ZigBee more. <s> BIB003 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Data Transmission Technology <s> Abstract In recent years, there have been significant developments in the research on 5th Generation (5G) networks. Several enabling technologies are being explored for the 5G mobile system era. The aim is to evolve a cellular network that is intrinsically flexible and remarkably pushes forward the limits of legacy mobile systems across all dimensions of performance metrics. All the stakeholders, such as regulatory bodies, standardization authorities, industrial fora, mobile operators and vendors, must work in unison to bring 5G to fruition. 
In this paper, we aggregate the 5G-related information coming from the various stakeholders, in order to i) have a comprehensive overview of 5G and ii) to provide a survey of the envisioned 5G technologies; their development thus far from the perspective of those stakeholders will open up new frontiers of services and applications for next-generation wireless networks. <s> BIB004 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Data Transmission Technology <s> The Internet of Things (IoT) is defined as a paradigm in which objects equipped with sensors, actuators, and processors communicate with each other to serve a meaningful purpose. In this paper, we survey state-of-the-art methods, protocols, and applications in this new emerging area. This survey paper proposes a novel taxonomy for IoT technologies, highlights some of the most important technologies, and profiles some applications that have the potential to make a striking difference in human life, especially for the differently abled and the elderly. As compared to similar survey papers in the area, this paper is far more comprehensive in its coverage and exhaustively covers most major technologies spanning from sensors to applications. <s> BIB005 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Data Transmission Technology <s> The evolution of the Internet of Things (IoT) has been highly based on the advances on wireless communications and sensing capabilities of smart devices, along with a, still increasing, number of applications that are being developed which manage to cover various small and more important aspects of every people’s life. This chapter aims at presenting the wireless technologies and protocols that are used for the IoT communications, along with the main architectures and middleware that have been proposed to serve and enhance the IoT capabilities and increase its efficiency. Finally, since the generated data that are spread in an IoT ecosystem might include sensitive information (e.g., personal medical data by sensors), we will also discuss the security and privacy hazards that are introduced from the advances in the development and application of an IoT environment. <s> BIB006 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Data Transmission Technology <s> The Internet of Things (IoT) concept has recently been presented as the next revolution and a part of the internet of the future. IoT consists of billions of uniquely identified smart objects ‘things’ with communication ability over the internet. These smart objects generate useful information for the provision of better services and improve the quality of life in different domains. However, there is a general lack of an integrated vision or a plan for selection and implementation of these smart objects. This paper presents the required tools and technologies (such as sensors, RFID, processors, actuators and communication models) that are essential in the IoT paradigm. The major objective of this review is to provide more detailed information about IoT smart objects, the building block of IoT, and conduct a comprehensive overview of IoT architecture with different scenarios. Moreover, this paper gives insight into IoT networking and the new type of low power wide area network (LPWAN) leading technologies in the licensed and unlicensed spectrum. 
Finally, this paper provides perspectives for future research and developments. <s> BIB007
Data transmission technologies are essential in a smart grid to deliver user data and instruction messages between the various parts of the grid. These can be wired, such as fiber, or wireless, such as LPWAN, Zigbee, etc. Wireless technologies are usually preferred due to their simplicity and ease of implementation, but they are vulnerable to interference and require constant power charging or battery replacement BIB001 . a. Fiber: fiber optics is a high-speed, high-capacity data transmission medium. It is quite costly to implement, so it is only used in situations where a high data rate over long distances is essential, which is why it is most commonly used in backbone networks BIB002 . b. Cellular network: commonly used for smart phones, cellular technology uses microwaves and triangulation to transmit data. The 3rd and 4th generations, known as 3G and 4G respectively, are still widely used for mobile communication; however, 5G, the fifth generation of cellular technology, is more promising for current and future applications in the smart grid. It was developed to deliver high data rates combined with low message return time (latency), as low as one millisecond, and to support time-critical systems BIB004 , which makes it ideal for the SG. 5G is also a still-evolving technology with high potential for further advancement in the next few years; it is expected to become more adaptable for smart grid metering applications and other WAN and NAN applications BIB001 . c. Zigbee: a wireless technology that is commonly used in wireless sensor networks (WSN) and has been used in smart meter applications . Zigbee devices use a relay system to transfer messages from one device to another; Zigbee supports multiple network topologies and is a low-energy, low-cost solution, with the disadvantage of a low bit rate, which is acceptable when it comes to smart meters communicating within HANs, since these are low-range applications BIB003 . d. Low Power Wide Area Network (LPWAN): the smart grid requires the transfer of data on a large scale, particularly between NANs and the WAN. LPWAN protocols have the ability to transfer data in an energy-efficient way, at low data rates and over long distances; the two most commonly used LPWAN protocols are LoRaWAN and Sigfox BIB007 . LoRaWAN is a highly scalable wide area network protocol that allows two-way communication between smart devices; its range covers 2-5 km in urban areas and up to 15 km in suburban areas, with a data rate between 0.3 and 50 Kbps BIB006 . This protocol was specifically developed to allow multiple IoT devices and applications to communicate over long distances, so it offers multi-tenancy and multiple network domains BIB005 . Sigfox is a protocol that operates in a similar way to cellular networks. It utilizes long waves to achieve a wide range of up to 1,000 km. It has a small payload of 12 bytes per message and a limit of 140 messages per day, which can be a disadvantage, though this is acceptable for home devices like smart meters that only need to send periodic messages. In addition, the low bandwidth offers the advantage of a lower noise effect, so the system can operate at low power, about 0.1% of that of modern cell phone devices BIB005 .
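The trade-offs above can be summarized as a simple selection rule. The following Python sketch filters candidate technologies by required range and data rate; the figures are rounded, indicative values drawn loosely from the discussion above and should not be read as exact specifications.

```python
# Indicative figures only; real link budgets vary with deployment conditions.
TECHNOLOGIES = [
    # (name, max_range_km, max_rate_kbps)
    ("Zigbee",  0.1,    250.0),
    ("LoRaWAN", 15.0,   50.0),
    ("Sigfox",  1000.0, 0.1),
    ("5G",      10.0,   1_000_000.0),
    ("Fiber",   100.0,  10_000_000.0),
]

def candidate_links(distance_km: float, rate_kbps: float) -> list[str]:
    """Return technologies that can cover the distance at the required rate."""
    return [name for name, rng, rate in TECHNOLOGIES
            if distance_km <= rng and rate_kbps <= rate]

print(candidate_links(3.0, 10.0))    # ['LoRaWAN', '5G', 'Fiber']
print(candidate_links(0.05, 100.0))  # ['Zigbee', '5G', 'Fiber']
```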
Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Integration Technology <s> Smart Services using Industrial Internet of Things (IIoT) applications are on the rise, but still more often than not, traditional industrial protocols are used to interconnect the entities of the resulting systems. These protocols are mostly not intended for functioning in such a highly interconnected environment and, therefore, often lack even the most fundamental security measures. To address this issue, this paper reasons on the security of a communications protocol, intended for Machine to machine (M2M) communications, namely the Open Platform Communications Unified Architecture (OPC UA) and exemplifies, on a smart energy system, its capability to serve as a secure communications architecture by either itself or in conjunction with traditional protocols. <s> BIB001 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Integration Technology <s> Abstract The so-called fourth industrial revolution features the application of modern Information & Communication Technology (ICT) concepts and technologies in industrial contexts to create more flexible and innovative products and services leading to new business models and added value. The emerging Industrial Internet of Things (IIoT) is one of the main results of this revolution. One of the most known and adopted communication protocol in the context of the fourth industrial revolution is OPC UA (IEC 62541). Although this standard already combines features coming from both industrial and ICT contexts, current literature presents several approaches aimed to introduce ICT enhancements into OPC UA in order to further improve its usability in industrial environments. Some of these approaches are based on the proposal to make OPC UA RESTful, due to many advantages of RESTful services in industrial settings. OPC UA is based on a client/server architecture and adopts an information model at the server side, whose access from the client side requires knowledge of a data model, whose structure is sometimes complex, creating some difficulties for a resource-constrained device acting as client. The paper proposes the definition of a web platform able to offer access to OPC UA servers through a REST architecture. The approach presented in the paper differs from other existing solutions, mainly because it allows to realise a lightweight OPC UA RESTful interface reducing the complexity of the basic knowledge to be held by a generic user of the platform. For this reason, the solution presented allows enhancement of OPC UA interoperability towards resource-constrained devices. The web platform has been implemented and the relevant code is available on GitHub. <s> BIB002
There are several protocols developed for legacy power systems that offer adaptability for deploying a smart grid and allow it to integrate with existing power grids, such as DNP3 (Distributed Network Protocol) and IEC 61850, which are legacy-system communication protocols that support integration. DNP3 is used for communication between a command center and a substation, while IEC 61850 is used for communication within a substation itself; hence, these two protocols are often integrated together using mapping methods . A more unified architecture was proposed by the OPC Foundation, known as Open Platform Communications Unified Architecture (OPC UA). This architecture offers a framework that provides an interface enabling different devices to communicate, sending and receiving command and control messages in a client/server model that is fully automated BIB002 . The application of this architecture in a smart grid context was researched by BIB001 , which found it to be suitable for increasing interoperability, i.e., OPC UA's main function; furthermore, it is capable of increasing security at a higher level, since this architecture supports multiple security and encryption standards.
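For a flavor of how a client reads a value over OPC UA, here is a minimal sketch using the community python-opcua package; the endpoint URL and node identifier are hypothetical placeholders, and a real deployment would define its address space via an IEC 61850 mapping and enable OPC UA's security policies.

```python
from opcua import Client  # pip install opcua (FreeOpcUa's python-opcua)

client = Client("opc.tcp://substation.example:4840")  # hypothetical endpoint
client.connect()
try:
    # Node id is illustrative; the real address space comes from the server model.
    voltage = client.get_node("ns=2;s=Feeder1.Voltage").get_value()
    print("feeder voltage:", voltage)
finally:
    client.disconnect()
```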
Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Network Management Technologies <s> Abstract The promise behind the effective deployment of 5G networks is an architecture able to provide flexibility, reconfigurability and programmability in order to support, with fine granularity, a wide and heterogeneous set of 5G use cases. This dictates a radical change in the design of mobile systems which, being usually based on the use of static deployment of vendor equipment characterized by monolithic functionality deployed at specific network locations, fail in providing the above mentioned features. By decoupling network functionalities from the underlying hardware, softwarization and virtualization are two disruptive paradigms considered to be at the basis of the design process of 5G networks. This paper analyses and summarizes the role of these two paradigms in enhancing the network architecture and functionalities of mobile systems. With this aim, we analyze several 5G application scenarios in order to derive and classify the requirements to be taken into account in the design process of 5G network. We provide an overview on the recent advances by standardization bodies in considering the role of softwarization and virtualization in the next-to-come mobile systems. We also survey the proposals in literature by underlining the recent proposals exploiting softwarization and virtualization for the network design and functionality implementation of 5G networks. Finally, we conclude the paper by suggesting a set of research challenges to be investigated. <s> BIB001 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Network Management Technologies <s> The current power grid is no longer a feasible solution due to ever-increasing user demand of electricity, old infrastructure, and reliability issues and thus require transformation to a better grid also known as, smart grid (SG). The key features that distinguish SG from the conventional electrical power grid are its capability to perform two-way communication, demand side management, and real time pricing. Despite all these advantages that SG will bring, there are certain issues which are specific to SG communication (SGC) system. For instance, network management of current SG systems is complex, time consuming, and done manually. Moreover, SGC system is built on different vendor specific devices and protocols. Therefore, the current SG systems are not protocol independent, thus leading to interoperability issue. Software defined network (SDN) has been proposed to monitor and manage the communication networks globally. By separating the control plane from the data plane, SDN helps the network operators to manage the network flexibly. Since SG heavily relies on communication networks, therefore, SDN has also paved its way into the SG. By applying SDN in SG systems, efficiency and resiliency can potentially be improved. SDN, with its programmability, protocol independence, and granularity features, can help the SG to integrate different SG standards and protocols, to cope with diverse communication systems, and to help SG to perform traffic flow orchestration and to meet specific SG quality of service requirements. This paper serves as a comprehensive survey on SDN-based SGC. In this paper, we first discuss taxonomy of advantages of SDN-based SGC. We then discuss SDN-based SGC architectures, along with case studies. 
This paper provides an in-depth discussion on routing schemes for SDN-based SGC. We also provide detailed survey of security and privacy schemes applied to SDN-based SGC. We furthermore present challenges, open issues, and future research directions related to SDN-based SGC. <s> BIB002
The smart grid requirements of high speed and reliability have dictated the need for management protocols and technologies that manage configuration in a way that sustains efficiency throughout the grid. An example of this is the Software Defined Network (SDN), a recent networking paradigm for centralized network control through an SDN controller device, which allows the local networking devices to take forwarding and routing decisions faster and in a more flexible manner, without the need for manual configuration. Employing SDNs in smart grid systems can make them more efficient in data management and better able to react to failures or attacks, thus making the grid more capable of maintaining its critical functionality at all times BIB002 . Another two concepts that can improve smart grid management are softwarization and virtualization. Softwarization is a paradigm where functionality is implemented at the software level rather than in hardware; this makes it easily reconfigurable and more flexible toward new additions and interaction with different technologies. Virtualization further increases flexibility by creating virtual network components (hardware, systems and resources), eliminating the need to change the hardware components BIB001 . Both concepts are achievable using cloud services.
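The following Python sketch conveys the SDN idea at its simplest: a central controller owns the flow table and reprograms paths when a link fails. Switch names, flows and backup paths are all invented for illustration.

```python
# Minimal flavor of SDN: the controller holds global state and re-programs
# switches on failure, instead of each device converging independently.
flow_table = {"meter-traffic": ["sw1", "sw2", "control-center"]}
backup_paths = {"sw2": ["sw1", "sw3", "control-center"]}

def on_link_failure(failed_switch: str) -> None:
    """Controller callback: reroute any flow traversing the failed switch."""
    for flow, path in flow_table.items():
        if failed_switch in path:
            flow_table[flow] = backup_paths[failed_switch]
            print(f"rerouted {flow} -> {flow_table[flow]}")

on_link_failure("sw2")
# rerouted meter-traffic -> ['sw1', 'sw3', 'control-center']
```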
Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Communication System Challenges <s> Facing higher power interference, an intuitive strategy for ZigBee networks is to seek the opportunity in space, time, and frequency to avoid the interference. In this chapter, we will focus on the methods of interference avoidance in ZigBee networks. <s> BIB001 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Communication System Challenges <s> The electricity delivery infrastructure—consisting of power plants, transmission lines, and distribution systems—is known as the power grid. The power grid in its present form is one of the most remarkable engineering developments. The grid infrastructure has played a critical role in making electric power reach the common people in a reliable and economic way. The National Academy of Science, USA, has ranked the power grid as the most beneficial engineering innovation of the twentieth century. Power grid is a complicated and highly meshed network. The complexity of the grid has been ever increasing with the increase in electricity demand. The high reliability and power quality requirement for the digital society are challenging. The smart grid is a power grid that uses real-time measurements, two-way communication, and computational intelligence. The smart grid is expected to be safe, secure, reliable, resilient, efficient, and sustainable. Measuring devices like phasor measurement units (PMUs) can radically change the monitoring way of the grids. However, there are several challenges like deployment of sufficient number of PMUs and managing the huge amount of data. Two-way communication is an essential requirement of the smart grid. A communication system that is secure, dedicated, and capable of handling the data traffic is required. The integration of renewable sources will alter the dynamics of the grid. This situation calls for better monitoring and control at the distribution level. <s> BIB002
Research by BIB002 summarizes communication system challenges into three main issues: interference, the need for common standards, and data transmission rates. a. Interference can be caused by home devices' signals that interfere with smart meters, or by harmonics emission in the grid itself. Interference can be addressed using interference detection and channel switching techniques BIB001 (see the sketch after this list). b. Standards for the smart grid are necessary to provide a framework for all the different components of the grid to work together. There are current efforts by various organizations, such as the IEEE or the American National Standards Institute, to develop such standards. c. Data transmission rates can be managed by choosing the correct transmission protocol for each application.
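As referenced in item (a), a minimal channel-switching sketch in Python might look as follows; the channel numbers, noise threshold and random noise scan are illustrative assumptions rather than a real energy-detection implementation.

```python
import random

CHANNELS = [11, 15, 20, 25, 26]  # Zigbee-style channel numbers (illustrative)
NOISE_THRESHOLD = -85.0          # dBm; hypothetical interference cutoff

def measure_noise(channel: int) -> float:
    """Stand-in for an energy-detection scan on one channel."""
    return random.uniform(-100.0, -60.0)

def pick_channel() -> int:
    """Switch to the quietest channel below the interference threshold."""
    readings = {ch: measure_noise(ch) for ch in CHANNELS}
    quiet = {ch: n for ch, n in readings.items() if n < NOISE_THRESHOLD}
    return min(quiet, key=quiet.get) if quiet else min(readings, key=readings.get)

print("selected channel:", pick_channel())
```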
Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Big Data Challenges <s> A smart grid is an intelligent electricity grid that optimizes the generation, distribution and consumption of electricity through the introduction of Information and Communication Technologies on the electricity grid. In essence, smart grids bring profound changes in the information systems that drive them: new information flows coming from the electricity grid, new players such as decentralized producers of renewable energies, new uses such as electric vehicles and connected houses and new communicating equipments such as smart meters, sensors and remote control points. All this will cause a deluge of data that the energy companies will have to face. Big Data technologies offers suitable solutions for utilities, but the decision about which Big Data technology to use is critical. In this paper, we provide an overview of data management for smart grids, summarise the added value of Big Data technologies for this kind of data, and discuss the technical requirements, the tools and the main steps to implement Big Data solutions in the smart grid context. <s> BIB001 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Big Data Challenges <s> Smart homes generate a vast amount of data measurements from smart meters and devices. These data have all the velocity and veracity characteristics to be called as Big Data. Meter data analytics holds tremendous potential for utilities to understand customers’ energy consumption patterns, and allows them to manage, plan, and optimize the operation of the power grid efficiently. In this paper, we propose a unified architecture that enables innovative operations for near real-time processing of large fine-grained energy consumption data. Specifically, we propose an Internet of Things (IoT) big data analytics system that makes use of fog computing to address the challenges of complexities and resource demands for near real-time data processing, storage, and classification analysis. The design architecture and requirements of the proposed framework are illustrated in this paper while the analytics components are validated using datasets acquired from real homes. <s> BIB002
Big data refers to the huge amounts of data produced by modern information systems and the processing power required to analyze and store that data. It is a well-known concept in ICTs that introduces a number of challenges that need to be addressed. The following is a brief discussion of some of the challenges that smart grid systems face in regard to big data. a. Real-time applications: smart grids are meant to adapt to consumers' consumption levels, which are constantly changing; this requires real-time data collection from a large number of smart meters at varying rates. The grid must also be able to process that data and execute changes based on it in near real time. BIB002 suggests that some of the methods developed for IoT applications to address this issue can also be applied to the smart grid, for example, using predictive algorithms on house appliances to determine the levels of data to be expected from a smart meter at any time of the day, which would help allocate processing resources accordingly. b. Heterogeneous data: the grid receives data from a number of different sources and in different formats, for example usage data, monitoring data, capacity levels, error messages, authentication messages, metadata, etc. This data originates from different sources such as meters, sensors, actuators, stations, smart home devices, historical data, mobile applications and others. This is known as heterogeneity of data, meaning the grid has to handle structured, semi-structured and unstructured data at the same time. There are several techniques to address this problem, such as data integration, data fusion and the development of standardized software solutions that unify data formats across different devices BIB001 (a toy normalization step is sketched after this list). c. Data compression and visualization: the data collected in the grid require storage and could also provide valuable analytic information, requiring extra processing. Compression methods should be efficient and work in real time. Visualization can present the data in understandable graphs and charts; choosing the right visualization method and presenting the data in the right way is a difficult process that needs to be considered carefully. Both compression and visualization require more research and the development of standardized methods BIB001 .
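As referenced in item (b), the following toy Python normalization step shows one way heterogeneous records could be fused into a common schema; the field names and unit conversion are invented for illustration.

```python
# Toy data-fusion step: normalize records arriving in different shapes.
def normalize(record: dict) -> dict:
    if "kwh" in record:                 # hypothetical smart-meter JSON
        return {"source": "meter", "energy_kwh": record["kwh"]}
    if "watt_hours" in record:          # hypothetical legacy sensor format
        return {"source": "sensor", "energy_kwh": record["watt_hours"] / 1000}
    raise ValueError(f"unknown schema: {record}")

raw = [{"kwh": 1.2}, {"watt_hours": 750}]
print([normalize(r) for r in raw])
# [{'source': 'meter', 'energy_kwh': 1.2}, {'source': 'sensor', 'energy_kwh': 0.75}]
```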
Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Security Challenges <s> Data injection attacks have recently emerged as a significant threat on the smart power grid. By launching data injection attacks, an adversary can manipulate the real-time locational marginal prices to obtain economic benefits. Despite the surge of existing literature on data injection, most such works assume the presence of a single attacker and assume no cost for attack or defense. In contrast, in this paper, a model for data injection attacks with multiple adversaries and a single smart grid defender is introduced. To study the defender-attackers interaction, two game models are considered. In the first, a Stackelberg game model is used in which the defender acts as a leader that can anticipate the actions of the adversaries, that act as followers, before deciding on which measurements to protect. The existence and properties of the Stackelberg equilibrium of this game are studied. To find the equilibrium, a distributed learning algorithm that operates under limited system information is proposed and shown to converge to the game solution. In the second proposed game model, it is considered that the defender cannot anticipate the actions of the adversaries. To this end, we proposed a hybrid satisfaction equilibrium - Nash equilibrium game and defined its equilibrium concept. A search algorithm is also provided to find the equilibrium of the hybrid game. Numerical results using the IEEE 30-bus system are used to illustrate and analyze the strategic interactions between the attackers and defender. Our results show that by defending a very small set of measurements, the grid operator can achieve an equilibrium through which the optimal attacks have no effect on the system. Moreover, our results show how, at equilibrium, multiple attackers can play a destructive role towards each other, by choosing to carry out attacks that cancel each other out, leaving the system unaffected. <s> BIB001 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Security Challenges <s> The future power system will be an innovative administration of existing power grids, which is called smart grid. Above all, the application of advanced communication and computing tools is going to significantly improve the productivity and consistency of smart grid systems with renewable energy resources. Together with the topographies of the smart grid, cyber security appears as a serious concern since a huge number of automatic devices are linked through communication networks. Cyber attacks on those devices had a direct influence on the reliability of extensive infrastructure of the power system. In this survey, several published works related to smart grid system vulnerabilities, potential intentional attacks, and suggested countermeasures for these threats have been investigated. <s> BIB002 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Security Challenges <s> When we talk about smart grid we refer to the next generation of power systems that should and will replace existing power system grids through intelligent communication infrastructures, sensing technologies, advanced computing, smart meters, smart appliances, and renewable energy resources. Features of the smart grid must meet requirements as high efficiency, reliability, sustainability, flexibility, and market enabling. 
But, the growing dependency on information and communication technologies (ICT) with its applications and uses has led to new threats to discuss and to try to resist against them. On the one hand, the most important challenges for smart grid cyber security infrastructure are finding and designing optimum methods to secure communication networks between millions of inter-connected devices and entities throughout critical power facilities, especially by preventing attacks and defending against them with intelligent methods and systems in order to maintain our infrastructures resilient and without affecting their behavior and performances. On the other hand, another main challenge is to incorporate data security measures to the communication infrastructures and security protocols of the smart grid system keeping in mind the complexity of smart grid network and the specific cyber security threats and vulnerabilities. The basic concept of smart grid is to add control, monitoring, analysis, and the feature to communicate to the standard electrical system in order to reduce power consumption while achieving maximized throughput of the system. This technology, currently being developed around the world, will allow to use electricity as economically as possible for business and home user. The smart grid integrates various technical initiatives such as wide-area monitoring protection and control systems (WAMPAC) based on phasor measurement units (PMU), advanced metering infrastructure (AMI), demand response (DR), plug-in hybrid electric vehicles (PHEV), and large-scale renewable integration in the form of wind and solar generation. Therefore, this chapter is focused on two main ideas considering modern smart grid infrastructures. The first idea is focused on high-level security requirements and objectives for the smart grid, and the second idea is about innovative concepts and methods to secure these critical infrastructures. The main challenge in assuring the security of such infrastructures is to obtain a high level of resiliency (immunity from various types of attacks) and to maintain the performances of the protected system. This chapter is organized in seven parts as follows. The first part of this chapter is an introduction in smart grid related to how it was developed in the last decades and what are the issues of smart grid in terms of cyber security. The second part shows the architecture of a smart grid network with all its features and utilities. The third part refers to the cyber security area of smart grid network which involves challenges, requirements, features, and objectives to secure the smart grid. The fourth part of this chapter is about attacks performed against smart grid network that happens because the threats and vulnerabilities existing in the smart grid system. The fifth part refers to the methods and countermeasures used to avoid or to minimize effects of complex attacks. The sixth part of the chapter is dedicated to presenting an innovative methodology for security assessment based on vulnerability scanning and honeypots usage. The last part concludes the chapter and draws some goals for future research directions. 
The main purposes of this chapter are: to present smart grid network architecture with all its issues, complexities, and features, to explore known and future threats and vulnerabilities of smart grid technology, to show how a highly secured smart grid should look like and how this next generation of power system should act and recover against the increasing complexity of cyber-attacks. <s> BIB003 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Security Challenges <s> Electric grids in the future will be highly integrated with information and communications technology. The increase in use the of information technology is expected to enhance reliability, efficiency, and sustainability of the future electric grid through the implementation of sophisticated monitoring and control strategies. However, it also comes at a price that the grid becomes more vulnerable to cyber-intrusions which may damage the physical system. This chapter provides an overview of cyberattacks on power systems from a system theoretical perspective by focusing on the tight coupling between the physical system and the communication network. It is demonstrated via several attack scenarios how the adversary may cause significant impacts on the power system by intercepting the communication channel and without possibly being detected. The attack strategies and the corresponding countermeasures are formulated and analyzed using tools from optimization, dynamical systems, and control theory. <s> BIB004 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Security Challenges <s> Abstract The increasing energy dependence of cloud data centers to Internet-of-Things necessitates availability of high reliable power systems. Smart grid enables two-way flow of energy from power to plug to be automated, monitored and controlled. However, IP-based communications in smart grids increase the likelihood of network attacks, such as IP spoofing and Distributed Denial of Service (DDoS) attacks. These attacks cause damages such as wrong smart meter readings, false demands for electricity, and impaired protection devices. Thus, there is a need for cyber resilient smart grid communication network. Software Defined Networking (SDN) which has the ability to redefine operations of a network at runtime presents the most resilience benefits when used as an underlying infrastructure for the smart grid. In this paper, we present a framework to assess security risks within an SDN-enabled smart grid communication network. Specifically, we quantify the security risks for DoS attacks on Intelligent Electronic Devices (IEDs) and the IEC 61850 network. Our security score model incorporates the critical role of each IED and measures impact on the overall smart grid network. We illustrate how SDN relieves our smart grid network of congestion and improves timing performance of IEC 61850 type messages, making them time compliant. <s> BIB005 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Security Challenges <s> Abstract Smart cyber-physical systems (CPS) form an integral part of Smart Cities, and are deployed in various domains, such as energy and transportation. In general, a smart CPS senses data from the physical world and uses cyber capabilities to transmit and analyze this data for making intelligent decisions in their respective domains. 
However, the growing number of cyberattacks against CPSs can badly affect their reputation and wide-scale acceptability. The Ukrainian smart grid attack and Jeep hack are true testimonials for the financial impacts and extensive service disruptions caused by such attacks. They also justify the immediate requirement to secure the smart CPS environment. This chapter explores two popular smart CPSs—smart grids and smart cars. Smart grids play a critical role in the energy management of Smart Cities, and intelligent transportation is achieved using smart cars. After examining the current general architecture of these CPSs, we examine the security in these domains by listing different attacks reported against them and various countermeasures proposed to secure these systems. <s> BIB006 </s> Smart grid in the context of industry 4.0: an overview of communications technologies and challenges <s> Security Challenges <s> Abstract The traditional electricity grid is characterized by one-way communication between the customer and the utility provider, which results in poor load management and wastage of energy. Smart grids support two-way communication between the utility supplier and its customers which allows real-time monitoring and near-instantaneous balance of supply and demand of energy. However, smart grids are vulnerable to security attacks (e.g., distributed denial of service (DDoS), fraud, and privacy) which can have severe consequences. In this chapter, the different security vulnerabilities in smart grids are identified. The impact of consumer data privacy and confidentiality breach is discussed and existing techniques as proposed in literature to protect the privacy of customer information in a smart grid are presented. <s> BIB007
Since smart grid systems are cyber-physical, they are vulnerable to cyber-attacks that can affect physical systems, making any security threat dangerous and highly impactful. Furthermore, there are many economic and political motives for targeting the smart grid in order to compromise the vital energy market. There is a wide range of threats to the security of the grid. BIB003 notes that different sources classify threats and attacks based on different variables, for example the exploited device (sensors, network devices, smart meters, etc.), the system architecture layer they target (communication layer, application layer, etc.), or the security objective they disrupt (integrity, availability, confidentiality). Some attacks identified by researchers can describe multiple variations; for example, a Denial of Service (DoS) attack can be an umbrella term for jamming or flooding at any level of the grid. Figure 3 summarizes the challenges in SGI 4.0 systems and applications. The following is a brief discussion of the main threats to the grid. a. DoS attacks: denial of service refers to a large number of attacks that cause a system to become overwhelmed and render it incapable of providing the required services. This can be particularly threatening to a smart grid system because of its real-time operations: the control messages in the system are time-critical, and a few seconds of delay can compromise all system operations. Advanced DoS attacks are hard to detect because they are disguised as legitimate traffic or take over trusted sources, so it becomes hard to block all attacks. Furthermore, such attacks can come from different sources, making them impossible for one or several service providers to control. To reduce the impact of such attacks, the development of cyber-resilient systems capable of withstanding multiple relentless attacks is important BIB005 . Another method that can be used to reduce DDoS attacks is IP fast hopping, which disguises the true IP address of a service so that it becomes harder to target by attackers BIB002 . b. Data injection: these attacks occur when a malicious entity alters codes and database entries to disrupt the system. The smart grid is vulnerable to this kind of attack because of the various devices connected to it that provide access points for attackers. Moreover, this kind of attack can have a wide range of effects on power systems, from disrupting the real-time nature of the grid to committing fraud by manipulating smart meter readings and power pricing BIB001 . Data injection attacks are hard to detect by nature, and it is better to prevent them using smart security systems with dedicated authentication to limit the probability of these attacks BIB006 . c. Privacy: smart grid systems collect a range of user data, such as location, payment information, power usage and preferences. This information can be used maliciously to track and harm users. Even information that might seem irrelevant can be used against the customer; for example, a user's power usage trends can help predict what time they leave the house, thus allowing thieves to target their home, or an electrical devices company can use the user's preferences to advertise directly to them and gain an advantage. That is why protecting users' privacy and information is very important.
Multiple methods can be used to preserve privacy in smart grids, such as anonymization of data, masking of sources, encryption, and aggregation of data using various methods that disassociate users from their data BIB007 (a minimal aggregation sketch follows this list). d. Insider attacks: Insider threats, or attacks committed by anyone with legitimate access to the system, are dangerous because traditional security measures such as firewalls and passwords cannot stop them from causing damage. Some hiring practices, like background checks, can help limit insider threats. There are also technological solutions, such as anomaly detection systems that can spot irregular behavior, or the use of authenticated access control via gateway devices and software solutions BIB004 .
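As a concrete illustration of the privacy-preserving aggregation mentioned above, the following is a minimal sketch, not a scheme taken from the cited works: each meter adds pairwise random masks that cancel in the neighborhood sum, so the utility learns the total consumption without seeing any individual reading. All names and values are hypothetical, and a real deployment would derive the masks from pairwise shared keys rather than a common seed.

```python
import random

def pairwise_masks(n_meters, seed=0):
    # Meter i adds masks[i][j] and meter j adds masks[j][i] = -masks[i][j],
    # so every mask cancels in the aggregate sum.
    rng = random.Random(seed)
    masks = [[0] * n_meters for _ in range(n_meters)]
    for i in range(n_meters):
        for j in range(i + 1, n_meters):
            r = rng.randint(0, 10**6)
            masks[i][j], masks[j][i] = r, -r
    return masks

def masked_report(reading, meter_id, masks):
    # An individual masked report reveals essentially nothing about the reading.
    return reading + sum(masks[meter_id])

readings = [3.2, 1.7, 4.9, 2.4]  # hypothetical per-household consumption (kWh)
masks = pairwise_masks(len(readings))
reports = [masked_report(r, i, masks) for i, r in enumerate(readings)]

# The utility sees only the masked reports; their sum equals the true total.
assert abs(sum(reports) - sum(readings)) < 1e-9
print("aggregate consumption:", sum(reports))
```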
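Similarly, for the anomaly detection systems mentioned under insider attacks, a toy illustration of the underlying idea (the data and threshold are invented; production systems use far richer models) is a per-user z-score test that flags behavior far outside that user's own history:

```python
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    # Flag a behavior metric (e.g., records accessed per session) whose
    # z-score against the user's own history exceeds the threshold.
    mu, sigma = mean(history), stdev(history)
    z = (current - mu) / sigma if sigma > 0 else 0.0
    return z > threshold, z

history = [12, 15, 9, 14, 11, 13, 10, 12]  # hypothetical session access counts
flagged, z = is_anomalous(history, current=140)
print(f"anomalous={flagged}, z-score={z:.1f}")
```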
Network Policy Languages: A survey and a new approach <s> Inter-Domain Policy Routing <s> We present an architecture for inter-domain policy routing (IDPR). The objective of IDPR is to construct and maintain routes, between source and destination administrative domains, that provide user traffic with the requested services within the constraints stipulated for the domains transited. The IDPR architecture is designed to accommodate an internetwork containing tens of thousands of administrative domains with heterogeneous service requirements and restrictions. <s> BIB001 </s> Network Policy Languages: A survey and a new approach <s> Inter-Domain Policy Routing <s> Route generation and message forwarding in large diverse internetworks is subject to multiple service-related constraints, referred to as policies, imposed both by the service providers (in terms of offered services and restrictions on these services) and by the users (in terms of service requirements). We present an approach to policy routing, called Inter-Domain Policy Routing (IDPR), designed to operate in an internetwork composed of thousands of separately administered networks. The primary objective of IDPR is to provide traffic with routes that satisfy the users' service requirements while respecting the service providers' service restrictions. In this paper, we present an overview of IDPR, concentrating on those aspects of IDPR that make it well suited to policy routing in large internetworks with diversity among users and service providers. <s> BIB002
Steenstrup presents a set of protocols BIB001 and an architecture for Inter-Domain Policy Routing (IDPR). Unlike BGP and IDRP, IDPR uses link state routing to provide policy routing among administrative domains (ADs). The primary objective of IDPR is to provide traffic with routes that satisfy the users' service requirements while respecting the service providers' service restrictions BIB002 . Source policies represent the users' requirements and can consist of parameters such as throughput, acceptable delay, cost of session, and domains to avoid. Service providers specify transit policies, which specify offered services and the conditions of their use. The generation and selection of policy routes is based on distributed routing information and the source policies specified by the domain administrator. IDPR forwards messages across paths established using the policy routes generated. Route generation is inherently complex and the most computationally intensive part of IDPR. The general policy route generation problem involves a combination of service constraints, for example, finding a route with delay of no more than S seconds and cost no greater than C. Most of these multi-constraint routing problems are NP-complete (a brute-force sketch of such a search appears below). To reduce the size of the link state database, IDPR supports the ability to group ADs into superdomains. The existence of superdomains imposes a domain hierarchy within the network. With a hierarchical approach, only domain-level information is needed to construct routes. This greatly reduces the information that a route server needs to maintain. The size of the database will now depend on the number of domains and the policies associated with each. A variant of Clark's policy term was chosen to represent policies. This variant allows for policies to be associated with a set of network elements that represents a path. A policy based on paths is a great asset to policy-based routing protocols.
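To illustrate why route generation dominates the cost of IDPR, the sketch below enumerates domain-level routes under both a delay and a cost constraint by exhaustive search. The graph, delays, and costs are invented; brute force is viable only for small domain graphs, which is consistent with IDPR's use of superdomains to keep the searched topology small.

```python
def policy_routes(graph, src, dst, max_delay, max_cost):
    # Enumerate loop-free domain-level paths from src to dst that satisfy
    # both constraints. The general multi-constraint problem is NP-complete,
    # so exhaustive search is only reasonable for small domain graphs.
    routes = []

    def dfs(node, path, delay, cost):
        if delay > max_delay or cost > max_cost:
            return                      # prune: a constraint is already violated
        if node == dst:
            routes.append((tuple(path), delay, cost))
            return
        for nxt, (d, c) in graph.get(node, {}).items():
            if nxt not in path:         # keep the route loop-free
                path.append(nxt)
                dfs(nxt, path, delay + d, cost + c)
                path.pop()

    dfs(src, [src], 0, 0)
    return routes

# Hypothetical domain graph: neighbor -> (delay in ms, cost per session).
graph = {
    "AD1": {"AD2": (10, 5), "AD3": (30, 1)},
    "AD2": {"AD4": (10, 5)},
    "AD3": {"AD4": (10, 1)},
}
print(policy_routes(graph, "AD1", "AD4", max_delay=45, max_cost=8))
# -> [(('AD1', 'AD3', 'AD4'), 40, 2)]; the faster route via AD2 exceeds the cost bound.
```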
Network Policy Languages: A survey and a new approach <s> Analyzing the Consistency of Security Policies <s> We discuss the development of a methodology for reasoning about properties of security policies. We view a security policy as a special case of regulation which specifies what actions some agents are permitted, obliged or forbidden to perform and we formalize a policy by a set of deontic formulae. We first address the problem of checking policy consistency and describe a method for solving it. The second point we are interested in is how to query a policy to know the actual norms which apply to a given situation. In order to provide the user with consistent answers, the normative conflicts which may appear in the policy must be solved. For doing so, we suggest using the notion of roles and define priorities between roles. <s> BIB001
In BIB001 , the development of a methodology for reasoning about properties of security policies is discussed. Cholvy and Cuppens view a security policy as a specific case of regulation, where a regulation defines which actions an agent is permitted, obliged, or forbidden to perform. With this methodology, a system is made up of agents that can perform some actions on some objects. In analyzing the consistency of security policies, focus is put on the ability to perform consistency checks (e.g., check for conflicting situations) on the system, and to have the ability to query a regulation to know which norms apply in a given situation. Formal logic is used to create an unambiguous representation of security policies. According to BIB001 , the advantage of a representation based on formal logic is the ability to precisely define the axioms needed to reason about a regulation. With policies defined by axioms, tools can now be developed to check the system regulation for consistency. Rather than associating norms (i.e., permissions, obligations, and prohibitions) with individuals, roles are created with these attributes and individuals are associated with these roles. The individual inherits the norms associated with a role when the individual is playing that role. A conflict can only exist when an individual is playing different roles at the same time, because of an assumption in their research that norms within a role are conflict-free. To resolve conflicts when an individual is playing multiple roles, an ordering is applied when roles are merged. The order represents a priority between them, and the order is assumed to be total. Tools written in Prolog were developed that check the consistency of the security policies, as well as an algorithm for solving conflicts when an individual is playing different roles at the same time.
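The role-merging idea can be sketched as follows; the roles, actions, and priority order here are invented, and the original work formalizes norms in deontic logic with Prolog tooling rather than this simplification.

```python
# Each role maps an action to a norm; norms within one role are assumed
# conflict-free, matching the paper's assumption.
roles = {
    "employee":   {"read_payroll": "forbidden"},
    "accountant": {"read_payroll": "permitted"},
}

# Total priority order over roles: earlier entries win on conflict.
priority = ["accountant", "employee"]

def applicable_norm(played_roles, action):
    # Merge the norms of all roles an individual currently plays;
    # a conflict is resolved by the total role ordering.
    candidates = [(priority.index(r), roles[r][action])
                  for r in played_roles if action in roles[r]]
    if not candidates:
        return None                     # no norm applies to this action
    candidates.sort()                   # lowest index = highest priority
    return candidates[0][1]

# An accountant who is also an ordinary employee: the two roles conflict on
# read_payroll, and the higher-priority role wins.
print(applicable_norm({"employee", "accountant"}, "read_payroll"))  # permitted
```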
Network Policy Languages: A survey and a new approach <s> Conflicts in Policy-Based Distributed Systems Management <s> Discusses the concepts developed within the Domino project on domain management for open systems, and how these concepts are implemented. Domains are a means of grouping objects, distinct from the management policies which are specified in terms of domains. Domains and policies are discussed from the viewpoint of both the manager and the underlying mechanisms which implement them. The emphasis of the user view is on conceptual clarity and the emphasis of the mechanism view is on efficient implementation in distributed systems. Both views need to be implemented, although in some cases there may be a direct correspondence between the two views. The authors argue that keeping managers independent from the domain of objects they manage gives flexibility and simplifies both the user and mechanism views of system management. <s> BIB001 </s> Network Policy Languages: A survey and a new approach <s> Conflicts in Policy-Based Distributed Systems Management <s> The paper advocates a distributed processing approach to managing distributed services whereby managed objects have a management interface to support their management functionality and other interfaces to support their normal functionality. We discuss the shortcomings of the SNMP and OSI approaches to management for large scale distributed services and explain the need for three basic management services-monitoring to obtain information; domains to group objects and partition responsibility; and policy to permit the behaviour of automated managers to be modified without reimplementation. The key message of the paper is that management should not be designed and implemented independently from the normal functionality provided by a service but that standard distributed processing concepts, tools and techniques should be used for management. This approach permits the management system to be used to manage itself. > <s> BIB002 </s> Network Policy Languages: A survey and a new approach <s> Conflicts in Policy-Based Distributed Systems Management <s> Modern distributed systems contain a large number of objects and must be capable of evolving, without shutting down the complete system, to cater for changing requirements. There is a need for distributed, automated management agents whose behavior also has to dynamically change to reflect the evolution of the system being managed. Policies are a means of specifying and influencing management behavior within a distributed system, without coding the behavior into the manager agents. Our approach is aimed at specifying implementable policies, although policies may be initially specified at the organizational level and then refined to implementable actions. We are concerned with two types of policies. Authorization policies specify what activities a manager is permitted or forbidden to do to a set of target objects and are similar to security access-control policies. Obligation policies specify what activities a manager must or must not do to a set of target objects and essentially define the duties of a manager. Conflicts can arise in the set of policies. Conflicts may also arise during the refinement process between the high level goals and the implementable policies. The system may have to cater for conflicts such as exceptions to normal authorization policies. The paper reviews policy conflicts, focusing on the problems of conflict detection and resolution. 
We discuss the various precedence relationships that can be established between policies in order to allow inconsistent policies to coexist within the system and present a conflict analysis tool which forms part of a role based management framework. Software development and medical environments are used as example scenarios. <s> BIB003
In BIB003 , policies are used as a means to specify the management behavior of a system without coding the behavior into the manager agents. Lupu and Sloman focus on techniques and tool support for offline policy conflict detection and resolution. Two types of policies, authorization and obligation, are addressed in this research. Authorization policies specify which activities a manager is permitted or forbidden to perform on a set of target objects. Obligation policies specify which activities a manager must or must not do to a set of target objects and essentially define the duties of a manager. Conflicts can arise in a set of policies, but it is not always desirable to eliminate the conflicts by rewriting the policies or changing the membership of the domains to which policies apply. Since automated managers cannot enforce conflicting policies, Lupu and Sloman suggest that a precedence relationship must be established between policies in order to resolve the conflicts. Four types of policy priority are addressed: • Negative policies always have priority: negative policies take precedence over positive ones. • The assignment of explicit priorities: policy 1 has priority over policy 2, which has priority over policy 3, and so on. • Distance between a policy and the managed objects: priority is given to the policy applying to the closer class in an inheritance hierarchy. For example, a computer science (CS) department is a subclass of a university. If a student is in the CS department, policies of the CS department will override those of the university when a conflict exists. • Specificity related to domain nesting: a particular case of distance between policies, this principle is that a more specific policy (i.e., a policy applying to a subdomain) refers to fewer objects and so overrides more general policies applying to an ancestor domain (see the sketch after this paragraph). Lupu and Sloman developed a prototype conflict detection tool that currently detects overlaps between policies and optionally applies domain-nesting precedence. The function of the detection tool is analogous to compile-time type checking for a programming language in that it reduces runtime errors and detects specification errors. A notation that is precise and can be analyzed for conflicts using automated tools is used to represent policies, but it is not based on a well-known logic. In this system an administrator creates and modifies policies using a policy editor. Checks are made for conflicts, and if necessary policies are modified to remove the conflicts. Sloman has applied the concept of grouping policies by authorization and obligation, which are then interpreted rather than coded into management agents, in several other works BIB002 BIB001 .
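Domain-nesting precedence can be sketched as follows; the domain tree, the policy representation, and the A+/A- modality labels are illustrative stand-ins rather than the authors' actual notation.

```python
# Hypothetical domain tree (child -> parent) with policies attached to domains.
parent = {"cs_department": "university", "university": None}
policies = {
    "university":    {"action": "login", "modality": "A+"},  # authorized
    "cs_department": {"action": "login", "modality": "A-"},  # forbidden
}

def domain_chain(domain):
    # Walk from a domain up to the root, most specific domain first.
    chain = []
    while domain is not None:
        chain.append(domain)
        domain = parent[domain]
    return chain

def effective_policy(domain, action):
    # Domain-nesting precedence: the policy attached to the most deeply
    # nested (most specific) enclosing domain overrides ancestor policies.
    for d in domain_chain(domain):
        p = policies.get(d)
        if p and p["action"] == action:
            return d, p["modality"]
    return None

# A CS student hits the department policy before the university-wide one.
print(effective_policy("cs_department", "login"))  # ('cs_department', 'A-')
```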
An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Introduction <s> Abstract Bacillus thuringiensis (Bt) was isolated from a flour moth collected in the German province of Thuringia and described by Berliner in 1915. The same organism had already been described by Ishiwata in 1902 as Bacillus sotto from Japan where it causes a wilt disease of silkworm caterpillars, but the description was not known to Berliner. Bt is now the accepted name for a range of aerobic spore-forming bacteria which form an insect toxic crystal during sporulation. However, many bacteriologists consider Bt to be a variant of Bacillus cereus , a ubiquitous soil-inhabiting bacterium. Since the pioneering work of Steinhaus in California in the early 1950s, there has been considerable commercial interest and products are now sold in most countries of the world for control of caterpillars (var. kurstaki, entomocidus, galleriae and aizawai ), mosquito and blackfly larvae (var. israelensis ) and beetle larvae (var. tenebrionis and san diego ). <s> BIB001 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Introduction <s> Naturally occurring entomopathogens are important regulatory factors in insect populations. Many species are employed as biological control agents of insect pests in row and glasshouse crops, orchards, ornamentals, range, turf and lawn, stored products, and forestry and for abatement of pest and vector insects of veterinary and medical importance. The comparison of entomopathogens with conventional chemical pesticides is usually solely from the perspective of their efficacy and cost. In addition to efficacy, the advantages of use of microbial control agents are numerous. These include safety for humans and other nontarget organisms, reduction of pesticide residues in food, preservation of other natural enemies, and increased biodiversity in managed ecosystems. As with predators and parasitoids, there are three basic approaches for use of entomopathogens as microbial control agents: classical biological control, augmentation, and conservation. The use of a virus (Oryctes nonoccluded virus), a fungus (Entomophaga maimaiga), and a nematode (Deladenus siricidicola ) as inoculatively applied biological control agents for the long-term suppression of palm rhinoceros beetle (Oryctes rhinoceros), gypsy moth (Lymantria dispar), and woodwasp (Sirex noctilio), respectively, has been successful. Most examples of microbial control involve inundative application of entomopathogens. The most widely used microbial control agent is the bacterium Bacillus thuringiensis. The discovery of new varieties with activity against Lepidoptera, Coleoptera, and Diptera and their genetic improvement has enhanced the utility of this species. Recent developments in its molecular biology, mode of action, and resistance management are reviewed. Examples of the use, benefits, and limitations of entomopathogenic viruses, bacteria, fungi, nematodes, and protozoa as inundatively applied microbial control agents are presented. Microbial control agents can be effective and serve as alternatives to broad-spectrum chemical insecticides.
However, their increased utilization will require (1) increased pathogen virulence and speed of kill; (2) improved pathogen performance under challenging environmental conditions (cool weather, dry conditions, etc.); (3) greater efficiency in their production; (4) improvements in formulation that enable ease of application, increased environmental persistence, and longer shelf life; (5) better understanding of how they will fit into integrated systems and their interaction with the environment and other integrated pest management (IPM) components; (6) greater appreciation of their environmental advantages; and (7) acceptance by growers and the general public. We envision a broader appreciation for the attributes of entomopathogens in the near to distant future and expect to see synergistic combinations of microbial control agents with other technologies. However, if future development is only market driven, there will be considerable delays in the implementation of several microbial control agents that have excellent potential for use in IPM programs. <s> BIB002 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Introduction <s> Bacillus thuringiensis is a bacterium of great agronomic and scientific interest. Together the subspecies of this bacterium colonize and kill a large variety of host insects and even nematodes, but each strain does so with a high degree of specificity. This is mainly determined by the arsenal of crystal proteins that the bacterium produces during sporulation. Here we describe the properties of these toxin proteins and the current knowledge of the basis for their specificity. Assessment of phylogenetic relationships of the three domains of the active toxin and experimental results indicate how sequence divergence in combination with domain swapping by homologous recombination might have caused this extensive range of specificities. <s> BIB003 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Introduction <s> In 125 years since Metchnikoff proposed the use of Metarhizium anisopliae to control the wheat cockchafer and brought about the first field trials, microbial control has progressed from the application of naturalists' observations to biotechnology and precision delivery. This review highlights major milestones in its evolution and presents a perspective on its current direction. Fungal pathogens, the most eye-catching agents, dominated the early period, but major mycological control efforts for chinch bugs and citrus pests in the US had questionable success, and interest waned. The discoveries of Bacillus popilliae and Bacillus thuringiensis began the era of practical and commercially viable microbial control. A program to control the Japanese beetle in the US led to the discovery of both B. popilliae and Steinernema glaseri, the first nematode used as a microbial control agent. Viral insect control became practical in the latter half of the 20th century, and the first registration was obtained with the Heliothis nuclear polyhedrosis virus in 1975. Now strategies are shifting for microbial control. While Bt transgenic crops are now planted on millions of hectares, the successes of more narrowly defined microbial control are mainly in small niches. Commercial enthusiasm for traditional microbial control agents has been unsteady in recent years. The prospects of microbial insecticide use on vast areas of major crops are now viewed more realistically.
Regulatory constraints, activist resistance, benign and efficacious chemicals, and limited research funding all drive changes in focus. Emphasis is shifting to monitoring, conservation, integration with chemical pesticides, and selection of favorable venues such as organic agriculture and countries that have low costs, mild regulatory climates, modest chemical inputs, and small scale farming. <s> BIB004
Biological pesticide is one of the most promising alternatives to conventional chemical pesticides, offering little or no harm to the environment and biota. Bacillus thuringiensis (commonly known as Bt) is an insecticidal Gram-positive spore-forming bacterium producing crystalline proteins called delta-endotoxins (δ-endotoxins) during the stationary phase or senescence of its growth. Bt was originally discovered from diseased silkworm (Bombyx mori) by Shigetane Ishiwatari in 1902, but it was formally characterized by Ernst Berliner from diseased flour moth caterpillars (Ephestia kuhniella) in 1915 BIB001 . The first recorded application to control insects was in Hungary at the end of the 1920s, and at the beginning of the 1930s it was applied in Yugoslavia to control the European corn borer BIB004 . Bt, the leading biorational pesticide, was initially characterized as an insect pathogen, and its insecticidal activity was ascribed largely or completely to the parasporal crystals. It is active against more than 150 species of insect pests. Bt is normally marketed (as a mixture of dried spores and toxin crystals) under various trade names worldwide for controlling many plant pests, mainly caterpillars belonging to Lepidoptera (represented by butterflies and moths), mosquito larvae, and a few others, including unconventional targets like mites. The share of Bt products in the agrochemical (fungicide, herbicide and insecticide) market is only about 1%. The first commercial Bt product was produced in 1938 by Libec in France, but the product was used only for a very short time due to World War II; Bt products were next commercialized in the USA in the 1950s . The toxicity of a Bt culture lies in its ability to produce the crystalline protein; this observation led to the development of bioinsecticides based on Bt for the control of certain insect species among the orders Lepidoptera, Diptera, and Coleoptera BIB002 BIB003 . Nowadays, Bt isolates are also reported to be active against certain nematodes, mites, and protozoa . It is already a useful alternative or supplement to synthetic chemical pesticides for applications in commercial agriculture, forest management, and mosquito control, and also a key source of genes for transgenic expression to transfer pest resistance to plants. Due to this economic interest, numerous approaches have been developed to enhance the production of Bt bioinsecticides. The insecticidal activity of Bt is known to depend not only on the activity of the bacterial culture itself, but also on abiotic factors, such as the medium composition and cultivation strategy.
An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bt Toxins <s> Mutants resistant to oxytetracycline, erythromycin and neomycin but not to streptomycin often show, in the absence of antibiotic, alterations in sporulation: slight or pronounced temperature-sensitive character between 30 and 37 degrees C, slight thermoresistance of refractive spores formed at 30 degrees C, oligosporogenic character at 30 degrees C. <s> BIB001 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bt Toxins <s> A Bacillus subtilis plasmid capable of producing phenotypic erythromycin resistance was compared with an Eryr staphylococcal plasmid. The two plasmids did not interfere with the sporulation process in B. subtilis, in contrast to chromosomal erythromycin mutations. <s> BIB002 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bt Toxins <s> Abstract The complete nucleotide sequence of a small (2.055 kb) plasmid pHD2 from Bacillus thuringiensis var. kurstaki strain HD1-DIPEL was obtained. The sequence encoded two open reading frames (ORFs) which corresponded to polypeptides of Mr 26,447 and 9122. Comparison of the sequence with those obtained for other plasmids from Gram-positive organisms suggested that pHD2 may belong to the extensive and highly interrelated family of plasmids exhibiting replication via a ssDNA intermediate; a putative nick site was proposed on the basis of sequence homology and one ORF exhibited distant homology with the site-specific topoisomerases encoded by the pT181 family of staphylococcal plasmids, while the other ORF exhibited considerable similarity to a small polypeptide (RepA) encoded by plasmid pLS1. Constructs consisting of pHD2, pBR322, and the chloramphenicol resistance gene from pC194 were capable of stable maintenance in B. thuringiensis var. israelensis , but were subject to apparently specific deletions in the heterologous host. The same constructs could not be established in Bacillus subtilis . <s> BIB003 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bt Toxins <s> Members of the genus Bacillus are widely used as sources of industrial enzymes, fine biochemicals, antibiotics, and insecticides (for a review, see reference 27). One of these species, Bacillus thuringiensis, accounts for more than 90% of the biopesticides used today (for recent reviews on B. thuringiensis and its toxins, see references 6, 33, and 38). The entomopathogenic properties of this bacterium are due at least in part to the production of δ-endotoxins that make up the crystalline inclusions characteristic of B. thuringiensis strains. In 1989, Hofte and Whiteley proposed a classification for δ-endotoxins (30). They distinguished four major classes of δ-endotoxins (CryI, -II, -III and -IV) and cytolysins (Cyt), found in the crystals of the mosquitocidal strains, on the basis of their insecticidal and molecular properties. The δ-endotoxins belonging to each of these classes were grouped in subclasses (A, B, C… and a, b, c…) according to sequence. Generally, these proteins are toxic for lepidoptera (CryI), both lepidoptera and diptera (CryII), coleoptera (CryIII), and diptera (CryIV). These various insecticidal proteins are synthesized during the stationary phase and accumulate in the mother cell as a crystal inclusion which can account for up to 25% of the dry weight of the sporulated cells (Fig. 1). The amount of crystal protein produced by a B.
thuringiensis culture in laboratory conditions (about 0.5 mg of protein per ml) and the size of the crystals (24) indicate that each cell has to synthesize 10^6 to 2 × 10^6 δ-endotoxin molecules during the stationary phase to form a crystal. This is a massive production of protein and presumably occupies a large proportion of the cell machinery. Nevertheless, sporulation and the associated physiological changes proceed in parallel with δ-endotoxin production. The aim of this minireview is to analyze the various mechanisms by which B. thuringiensis accumulates large quantities of toxins as biologically active protein crystals. <s> BIB004 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bt Toxins <s> A two-step procedure was used to place a cryIC crystal protein gene from Bacillus thuringiensis subsp. aizawai into the chromosomes of two B. thuringiensis subsp. kurstaki strains containing multiple crystal protein genes. The B. thuringiensis aizawai cryIC gene, which encodes an insecticidal protein highly specific to Spodoptera exigua (beet armyworm), has not been found in any B. thuringiensis subsp. kurstaki strains. The cryIC gene was cloned into an integration vector which contained a B. thuringiensis chromosomal fragment encoding a phosphatidylinositol-specific phospholipase C, allowing the B. thuringiensis subsp. aizawai cryIC to be targeted to the homologous region of the B. thuringiensis subsp. kurstaki chromosome. First, to minimize the possibility of homologous recombination between cryIC and the resident crystal protein genes, B. thuringiensis subsp. kurstaki HD73, which contained only one crystal gene, was chosen as a recipient and transformed by electroporation. Second, a generalized transducing bacteriophage, CP-51, was used to transfer the integrated cryIC gene from HD73 to two other B. thuringiensis subsp. kurstaki strains. The integrated cryIC gene was expressed at a significant level in all three host strains, and the expression of cryIC did not appear to reduce the expression of the endogenous crystal protein genes. Because of the newly acquired ability to produce the CryIC protein, the recombinant strains showed a higher level of activity against S. exigua than did the parent strains. This two-step procedure should therefore be generally useful for the introduction of an additional crystal protein gene into B. thuringiensis strains which have multiple crystal protein genes and which show a low level of transformation efficiency. <s> BIB005 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bt Toxins <s> Abstract Gram-positive spore-forming entomopathogenic bacteria can utilize a large variety of protein toxins to help them invade, infect, and finally kill their hosts, through their action on the insect midgut. These toxins belong to a number of homology groups containing a diversity of protein structures and modes of action. In many cases, the toxins consist of unique folds or novel combinations of domains having known protein folds. Some of the toxins display a similar structure and mode of action to certain toxins of mammalian pathogens, suggesting a common evolutionary origin. Most of these toxins are produced in large amounts during sporulation and have the remarkable feature that they are localized in parasporal crystals. Localization of multiple toxin-encoding genes on plasmids together with mobilizable elements enables bacteria to shuffle their armory of toxins.
Recombination between toxin genes and sequence divergence has resulted in a wide range of host specificities. <s> BIB006 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bt Toxins <s> Bacillus thuringiensis Crystal (Cry) and Cytolitic (Cyt) protein families are a diverse group of proteins with activity against insects of different orders--Lepidoptera, Coleoptera, Diptera and also against other invertebrates such as nematodes. Their primary action is to lyse midgut epithelial cells by inserting into the target membrane and forming pores. Among this group of proteins, members of the 3-Domain Cry family are used worldwide for insect control, and their mode of action has been characterized in some detail. Phylogenetic analyses established that the diversity of the 3-Domain Cry family evolved by the independent evolution of the three domains and by swapping of domain III among toxins. Like other pore-forming toxins (PFT) that affect mammals, Cry toxins interact with specific receptors located on the host cell surface and are activated by host proteases following receptor binding resulting in the formation of a pre-pore oligomeric structure that is insertion competent. In contrast, Cyt toxins directly interact with membrane lipids and insert into the membrane. Recent evidence suggests that Cyt synergize or overcome resistance to mosquitocidal-Cry proteins by functioning as a Cry-membrane bound receptor. In this review we summarize recent findings on the mode of action of Cry and Cyt toxins, and compare them to the mode of action of other bacterial PFT. Also, we discuss their use in the control of agricultural insect pests and insect vectors of human diseases. <s> BIB007 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bt Toxins <s> We have developed a strategy for isolating cry genes from Bacillus thuringiensis. The key steps are the construction of a DNA library in an acrystalliferous B. thuringiensis host strain and screening for the formation of crystal through optical microscopy observation and sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) analyses. By this method, three cry genes—cry55Aa1, cry6Aa2, and cry5Ba2—were cloned from rice-shaped crystals, producing B. thuringiensis YBT-1518, which consists of 54- and 45-kDa crystal proteins. cry55Aa1 encoded a 45-kDa protein, cry6Aa2 encoded a 54-kDa protein, and cry5Ba2 remained cryptic in strain YBT-1518, as shown by SDS-PAGE or microscopic observation. Proteins encoded by these three genes are all toxic to the root knot nematode Meloidogyne hapla. The two genes cry55Aa1 and cry6Aa2 were found to be located on a plasmid with a rather small size of 17.7 kb, designated pBMB0228. <s> BIB008 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bt Toxins <s> Bacillus thuringiensis (Bt) bacteria are insect pathogens that rely on insecticidal pore forming proteins known as Cry and Cyt toxins to kill their insect larval hosts. At least four different non-structurally related families of proteins form the Cry toxin group of toxins. The expression of certain Cry toxins in transgenic crops has contributed to an efficient control of insect pests resulting in a significant reduction in chemical insecticide use. 
The mode of action of the three domain Cry toxin family involves sequential interaction of these toxins with several insect midgut proteins facilitating the formation of a pre-pore oligomer structure and subsequent membrane insertion that leads to the killing of midgut insect cells by osmotic shock. In this manuscript we review recent progress in understanding the mode of action of this family of proteins in lepidopteran, dipteran and coleopteran insects. Interestingly, similar Cry-binding proteins have been identified in the three insect orders, as cadherin, aminopeptidase-N and alkaline phosphatase suggesting a conserved mode of action. Also, recent data on insect responses to Cry toxin attack is discussed. Finally, we review the different Bt based products, including transgenic crops, that are currently used in agriculture. <s> BIB009 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bt Toxins <s> During the past decade the pesticidal bacterium Bacillus thuringiensis has been the subject of intensive research. These efforts have yielded considerable data about the complex relationships between the structure, mechanism of action, and genetics of the organism’s pesticidal crystal proteins, and a coherent picture of these relationships is beginning to emerge. Other studies have focused on the ecological role of the B. thuringiensis crystal proteins, their performance in agricultural and other natural settings, and the evolution of resistance mechanisms in target pests. Armed with this knowledge base and with the tools of modern biotechnology, researchers are now reporting promising results in engineering more-useful toxins and formulations, in creating transgenic plants that express pesticidal activity, and in constructing integrated management strategies to insure that these products are utilized with maximum efficiency and benefit. <s> BIB010
Bt produces one or more types of parasporal crystalline proteins (called δ-endotoxins) concomitantly with sporulation. Crystal (Cry) or cytolytic (Cyt) proteins, singly or in combination, constitute the δ-endotoxins . Cry proteins are parasporal crystalline inclusions produced by Bt that exhibit an experimentally verifiable toxic effect on a target organism or have significant sequence similarity to a known Cry protein. Cyt proteins are also parasporal inclusions, exhibiting hemolytic (cytolytic) activity with obvious sequence similarity to a known Cyt protein. These toxins are highly specific to their target insect, but innocuous to humans, vertebrates and plants, and are completely biodegradable BIB007 . These crystalline proteins are mainly encoded by extra-chromosomal genes located on plasmids. The parasporal crystalline proteins produced during the stationary (senescence) phase of the growth cycle account for 20% - 30% of the dry weight of the cells of this phase BIB004 . Expression of most Cry genes (e.g., cry1Aa, cry2A, cry4A, etc.) is well regulated in the sporulation phase of growth. Studies have shown that several Cry proteins, when expressed in either E. coli or B. subtilis, are produced as 130 to 140 kDa protoxin molecules that retain their biological activity. More than 200 types of endotoxin gene have been cloned from various strains of Bt and sequenced so far. The plasmid profiles of most Bt strains are rather complex, with sizes varying from 2 to 200 kb and the number of plasmids ranging from 1 to 10 in most strains BIB003 . The self-assembly of these 130 kDa proteins is spontaneous, mediated primarily by the C-terminus of the protein. Their cysteine-rich carboxyl terminus is highly conserved among lepidopteran-specific Cry proteins; it generates a number of disulfide bridges that allow good crystal packing and also protects the toxin from the attack of various proteases. Commercial insecticides derived from Bt have a long history of successful use in the biocontrol of insect pests BIB009 . Many studies have examined the composition and methods of preparation of nutrient media for entomopathogenic bacteria . Chromosomal insertion of a Cry gene may enhance the production of δ-endotoxins in Bt strains BIB005 . Erythromycin resistance may affect the sporulation processes in Bt and B. subtilis BIB001 BIB002 . Most Bacillus strains produce a mixture of structurally different insecticidal crystal proteins (Cry proteins), which are encoded by different Cry genes that target different insect orders (Table 1) . Each of these proteins may contribute to the insecticidal spectrum of a strain that makes it selectively toxic to a wide variety of insects belonging to the Lepidoptera, Coleoptera, Diptera, Hymenoptera and Mallophaga, as well as to other invertebrates BIB010 BIB006 . Bt strains are able to produce exoenzymes, such as proteases and α-amylases . Apart from δ-endotoxin, some isolates of Bt produce another class of insecticidal small molecules called β-exotoxin, the common name for which is thuringiensin BIB008 . Beta-exotoxin and the other Bacillus toxins (δ-endotoxins) may contribute to the general insecticidal toxicity of the bacterium to lepidopteran, dipteran, and coleopteran insects. Beta-exotoxin is known to be toxic to humans and almost all other forms of life and, in fact, its presence is prohibited in Bt products .
Engineering of plants to contain and express only the genes for δ-endotoxins avoids the problem of assessing the risks posed by these other toxins that may be produced in microbial preparations [24].
An Overview on the Crystal Toxins from Bacillus thuringiensis <s> General Structure of Cry Toxin <s> Pore-forming proteins or peptides (PFP) have now been isolated from a wide array of species ranging from humans to bacteria. A great number of these toxins lyse cells through a 'barrel-stave' mechanism, in which monomers of the toxin bind to and insert into the target membrane and then aggregate like barrel staves surrounding a central, water-filled pore. An evaluation of the secondary structures suggest that common secondary structures may be employed by most of these toxic PFP. <s> BIB001 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> General Structure of Cry Toxin <s> Abstract The Cry1A insecticidal crystal protein (protoxin) from six subspecies of Bacillus thuringiensis as well as the Cry1Aa, Cry1Ab, and Cry1Ac proteins cloned in Escherichia coli was found to contain 20-kilobase pair DNA. Only the N-terminal toxic moiety of the protoxin was found to interact with the DNA. Analysis of the crystal gave approximately 3 base pairs of DNA per molecule of protoxin, indicating that only a small region of the N-terminal toxic moiety interacts with the DNA. It is proposed that the DNA-protoxin complex is virus-like in structure with a central DNA core surrounded by protein interacting with the DNA with the peripheral ends of the C-terminal region extending outward. It is shown that this structure accounts for the unusual proteolysis observed in the generation of toxin in which it appears that peptides are removed by obligatory sequential cleavages starting from the C terminus of the protoxin. Activation of the protoxin by spruce budworm (Choristoneura fumiferana) gut juice is shown to proceed through intermediates consisting of protein-DNA complexes. Larval trypsin initially converts the 20-kilobase pair DNA-protoxin complex to a 20-kilobase pair DNA-toxin complex, which is subsequently converted to a 100-base pair DNA-toxin complex by a gut nuclease and ultimately to the DNA-free toxin. <s> BIB002 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> General Structure of Cry Toxin <s> Bacillus thuringiensis Crystal (Cry) and Cytolitic (Cyt) protein families are a diverse group of proteins with activity against insects of different orders--Lepidoptera, Coleoptera, Diptera and also against other invertebrates such as nematodes. Their primary action is to lyse midgut epithelial cells by inserting into the target membrane and forming pores. Among this group of proteins, members of the 3-Domain Cry family are used worldwide for insect control, and their mode of action has been characterized in some detail. Phylogenetic analyses established that the diversity of the 3-Domain Cry family evolved by the independent evolution of the three domains and by swapping of domain III among toxins. Like other pore-forming toxins (PFT) that affect mammals, Cry toxins interact with specific receptors located on the host cell surface and are activated by host proteases following receptor binding resulting in the formation of a pre-pore oligomeric structure that is insertion competent. In contrast, Cyt toxins directly interact with membrane lipids and insert into the membrane. Recent evidence suggests that Cyt synergize or overcome resistance to mosquitocidal-Cry proteins by functioning as a Cry-membrane bound receptor. In this review we summarize recent findings on the mode of action of Cry and Cyt toxins, and compare them to the mode of action of other bacterial PFT. 
Also, we discuss their use in the control of agricultural insect pests and insect vectors of human diseases. <s> BIB003 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> General Structure of Cry Toxin <s> Cry5Ba is a δ-endotoxin produced by Bacillus thuringiensis PS86A1 NRRL B-18900. It is active against nematodes and has great potential for nematode control. Here, we predict the first theoretical model of the three-dimensional (3D) structure of a Cry5Ba toxin by homology modeling on the structure of the Cry1Aa toxin, which is specific to Lepidopteran insects. Cry5Ba resembles the previously reported Cry1Aa toxin structure in that they share a common 3D structure with three domains, but there are some distinctions, with the main differences being located in the loops of domain I. Cry5Ba exhibits a changeable extending conformation structure, and this special structure may also be involved in pore-forming and specificity determination. A fuller understanding of the 3D structure will be helpful in the design of mutagenesis experiments aimed at improving toxicity, and lead to a deep understanding of the mechanism of action of nematicidal toxins. <s> BIB004 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> General Structure of Cry Toxin <s> This study was aimed at the large scale production and application of Bacillus thuringiensis ( Bt ) biopesticide in Bangladesh agriculture from locally available cheap raw materials. B. thuringiensis subsp. kurstaki HD-73 (reference strain) and two other indigenous isolates of B. thuringiensis namely Bt-Soil-47 and Bt-Insect-1i demonstrated satisfactory growth of sporulation and endotoxin production in a medium prepared from de-fatted mustard-seed meal (oil cake) as carbon and nitrogen sources at 30°C. A correlation of growth, sporulation and endotoxin production pattern was obtained through the systematic study over the period of 72 h. Time course study of the growth of all three Bt isolates demonstrated similar pattern; however, spore-crystal complex formation of the indigenous Bt isolates was different with respect to the reference strain. The two indigenous isolates formed the maximum sporecrystal complex at 36 h, whereas the reference strain did the same at 66 h. Hence, the productivity of endotoxin formation of the indigenous isolates, 34.30×10 -3 and 37.50times;10 -3 g/L/h respectively, were higher than that of the reference strain (21.37×10 -3 g/L/h). Spore-crystal complex of the bacilli was recovered as dry powder which can be applied suitably in field to test their insecticidal activity. Molecular size of endotoxin of the isolates analyzed by SDS-PAGE resembled the typical sizes of the δ-endotoxin of Bacillus thuringiensis. Keywords : Bacillus thuringiensis; spore-crystal complex; δ-endotoxin. DOI: http://dx.doi.org/10.3329/bjm.v27i2.9172 BJM 2010; 27(2): 51-55 <s> BIB005
The major component of crystals toxic to lepidopteran larvae is a 130 kDa protein (protoxin), which upon cleavage in the insect yields the functional (insecticidal) proteins of lower molecular weight; very often the crystal formed is an assemblage of many proteins . A Bt isolate (Soil-47) showed distinct bands of 32.1 and 34.6 kDa. The band corresponding to the 32.1 kDa protein could arise from the type Cry1 and/or Cry4 gene, while the other (34.6 kDa) protein is possibly encoded by the type Cyt gene BIB005 . An unexpected finding was that a 20 kb heterologous DNA fragment was found intimately associated with the crystals from Btk HD73. The DNA is not susceptible to nuclease attack unless the protoxin is removed or proteolyzed to toxin. The active toxin is not associated with DNA; however, evidence was obtained which indicated that the DNA was involved in the generation of toxin from the crystal protein BIB002 . Structure determination of Bt toxins remains one of the most important tools in understanding and improving the utility of these proteins. The crystal structure of CryIIIA was published first, and several others are now available. Xia et al. BIB004 predicted the first theoretical model of the three-dimensional (3D) structure of a Cry (Cry5Ba) toxin by homology modeling on the structure of the Cry1Aa toxin, which is specific to lepidopteran insects. The three-domain structure of CryIIIA consists of the following: an α-helical barrel (domain I), which shows some resemblance to membrane-active or pore-forming domains of other toxins BIB001 ; a triangular prism of "Greek key" beta sheets (domain II); and a β-sheet jelly-roll fold (domain III) . Members of this 3-domain Cry family are used worldwide for insect control, and their mode of action has been characterized in some detail BIB003 .
An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Mode of Action <s> A classification for crystal protein genes of Bacillus thuringiensis is presented. Criteria used are the insecticidal spectra and the amino acid sequences of the encoded proteins. Fourteen genes are distinguished, encoding proteins active against either Lepidoptera (cryI), Lepidoptera and Diptera (cryII), Coleoptera (cryIII), or Diptera (cryIV). One gene, cytA, encodes a general cytolytic protein and shows no structural similarities with the other genes. Toxicity studies with single purified proteins demonstrated that every described crystal protein is characterized by a highly specific, and sometimes very restricted, insect host spectrum. Comparison of the deduced amino acid sequences reveals sequence elements which are conserved for Cry proteins. The expression of crystal protein genes is affected by a number of factors. Recently, two distinct sigma subunits regulating transcription during different stages of sporulation have been identified, as well as a protein regulating the expression of a crystal protein at a posttranslational level. Studies on the biochemical mechanisms of toxicity suggest that B. thuringiensis crystal proteins induce the formation of pores in membranes of susceptible cells. In vitro binding studies with radiolabeled toxins demonstrated a strong correlation between the specificity of B. thuringiensis toxins and the interaction with specific binding sites on the insect midgut epithelium. The expression of B. thuringiensis crystal proteins in plant-associated microorganisms and in transgenic plants has been reported. These approaches are potentially powerful strategies for the protection of agriculturally important crops against insect damage. <s> BIB001 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Mode of Action <s> Abstract ::: We compared two insecticidal and eight noninsecticidal soil isolates of Bacillus thuringiensis with regard to the solubility of their proteinaceous crystals at alkaline pH values. The protein disulfide contents of the insecticidal and noninsecticidal crystals were equivalent. However, six of the noninsecticidal crystals were soluble only at pH values of ≥12. This lack of solubility contributed to their lack of toxicity. One crystal type which was soluble only at pH ≥12 (strain SHP 1-12) did exhibit significant toxicity to tobacco hornworm larvae when the crystals were presolubilized. In contrast, freshly prepared crystals from the highly insecticidal strain HD-1 were solubilized at pH 9.5 to 10.5, but when these crystals were denatured, by either 8 M urea or autoclave temperatures, they became nontoxic and were soluble only at pH values of ≥12. These changes in toxicity and solubility occurred even though the denatured HD-1 crystals were morphologically indistinguishable from native crystals. Our data are consistent with the view that insecticidal crystals contain distorted, destabilized disulfide bonds which allow them to be solubilized at pH values (9.5 to 10.5) characteristic of lepidopteran and dipteran larval midguts. <s> BIB002 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Mode of Action <s> The Cry3A delta-endotoxin protein inclusion synthesized by Bacillus thuringiensis subsp. tenebrionis is soluble in alkaline and acid buffer solutions but the toxin precipitates when returned to neutral pH conditions. 
The midgut pH of susceptible beetle larvae is neutral to slightly acidic, a pH environment in which the Cry3A toxin is insoluble. To investigate this paradox we studied the Cry3A toxin after various proteolytic treatments. In many cases the toxin was cleaved into polypeptides that remained associated under non-denaturing conditions. Interestingly a chymotrypsinized Cry3A product was soluble under neutral pH conditions, retained full activity against susceptible beetle larvae, and exhibited specific binding to Leptinotarsa decemlineata midgut membranes. <s> BIB003 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Mode of Action <s> During sporulation, Bacillus thuringiensis produces crystalline inclusions comprised of a mixture of delta-endotoxins. Following ingestion by insect larvae, these inclusion proteins are solubilized, and the protoxins are converted to toxins. These bind specifically to receptors on the surfaces of midgut apical cells and are then incorporated into the membrane to form ion channels. The steps required for toxin insertion into the membrane and possible oligomerization to form a channel have been examined. When bound to vesicles from the midguts of Manduca sexta larvae, the Cry1Ac toxin was largely resistant to digestion with protease K. Only about 60 amino acids were removed from the Cry1Ac amino terminus, which included primarily helix alpha1. Following incubation of the Cry1Ab or Cry1Ac toxins with vesicles, the preparations were solubilized by relatively mild conditions, and the toxin antigens were analyzed by immunoblotting. In both cases, most of the toxin formed a large, antigenic aggregate of ca. 200 kDa. These toxin aggregates did not include the toxin receptor aminopeptidase N, but interactions with other vesicle components were not excluded. No oligomerization occurred when inactive toxins with mutations in amphipathic helices (alpha5) and known to insert into the membrane were tested. Active toxins with other mutations in this helix did form oligomers. There was one exception; a very active helix alpha5 mutant toxin bound very well to membranes, but no oligomers were detected. Toxins with mutations in the loop connecting helices alpha2 and alpha3, which affected the irreversible binding to vesicles, also did not oligomerize. There was a greater extent of oligomerization of the Cry1Ac toxin with vesicles from the Heliothis virescens midgut than with those from the M. sexta midgut, which correlated with observed differences in toxicity. Tight binding of virtually the entire toxin molecule to the membrane and the subsequent oligomerization are both important steps in toxicity. <s> BIB004 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Mode of Action <s> Bacillus thuringiensis Crystal (Cry) and Cytolitic (Cyt) protein families are a diverse group of proteins with activity against insects of different orders--Lepidoptera, Coleoptera, Diptera and also against other invertebrates such as nematodes. Their primary action is to lyse midgut epithelial cells by inserting into the target membrane and forming pores. Among this group of proteins, members of the 3-Domain Cry family are used worldwide for insect control, and their mode of action has been characterized in some detail. Phylogenetic analyses established that the diversity of the 3-Domain Cry family evolved by the independent evolution of the three domains and by swapping of domain III among toxins. 
Like other pore-forming toxins (PFT) that affect mammals, Cry toxins interact with specific receptors located on the host cell surface and are activated by host proteases following receptor binding resulting in the formation of a pre-pore oligomeric structure that is insertion competent. In contrast, Cyt toxins directly interact with membrane lipids and insert into the membrane. Recent evidence suggests that Cyt synergize or overcome resistance to mosquitocidal-Cry proteins by functioning as a Cry-membrane bound receptor. In this review we summarize recent findings on the mode of action of Cry and Cyt toxins, and compare them to the mode of action of other bacterial PFT. Also, we discuss their use in the control of agricultural insect pests and insect vectors of human diseases. <s> BIB005 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Mode of Action <s> Since its discovery as a microbial insecticide, Bacillus thuringiensis has been widely used to control insect pests important in agriculture, forestry, and medicine. The wide variety of formulations based on spore-crystal complexes intended for ingestion by target insects, are the result of many years of research. The development of a great variety of matrices for support of the spore-crystal complex enables many improvements, such as an increase in toxic activity, higher palatability to insects, or longer shelf lives. These matrices use many chemical, vegetable or animal compounds to foster contact between crystals and insect midguts, without harming humans or the environment. Biotechnology companies are tasked with the production of these kinds of bioinsecticides. These companies must not only provide formulations tailored to specific crops and the insect pests, but they must also search for and produce bioinsecticides based on new strains of high potency, whether wild or genetically improved. It is expected that new products will appear on the market soon, providing an increased activity spectrum and applicability to many other pest-impacted crops. These products may help develop a more organic agriculture. This review article discusses recent patents related to bioinsecticides. <s> BIB006 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Mode of Action <s> Bacillus thuringiensis (Bt) Cry toxins constitute the active ingredient in the most widely used biological insecticides and insect-resistant transgenic crops. A clear understanding of their mode of action is necessary for improving these products and ensuring their continued use. Accordingly, a long history of intensive research has established that their toxic effect is due primarily to their ability to form pores in the plasma membrane of the midgut epithelial cells of susceptible insects. In recent years, a rather elaborate model involving the sequential binding of the toxins to different membrane receptors has been developed to describe the events leading to membrane insertion and pore formation. However, it was also proposed recently that, in contradiction with this mechanism, Bt toxins function by activating certain intracellular signaling pathways which lead to the necrotic death of their target cells without the need for pore formation. Because work in this field has largely focused, for several years, on the elaboration and promotion of these two models, the present revue examines in detail the experimental evidence on which they are based. 
It is concluded that the presently available information still supports the notion that Bt Cry toxins act by forming pores, but most events leading to their formation, following binding of the activated toxins to their receptors, remain relatively poorly understood. <s> BIB007
The mode of action of δ-endotoxins involves several events that must be completed within several hours of ingestion in order to lead to insect death. Following ingestion of the inactive protoxin, the crystals are solubilized by the alkaline conditions in the insect midgut and are subsequently proteolytically converted into a toxic core fragment BIB001 . This activated toxin binds to receptors located on the apical microvillus membranes of epithelial midgut cells. For Cry1A toxins, at least four different binding sites have been described in different lepidopteran insects: a cadherin-like protein (CADR), a glycosylphosphatidylinositol (GPI)-anchored aminopeptidase-N (APN), a GPI-anchored alkaline phosphatase (ALP) and a 270 kDa glycoconjugate BIB006 . Cry toxins interact with specific receptors located on the host cell surface and are activated by host proteases following receptor binding, which results in the formation of a pre-pore oligomeric structure that is insertion competent. In contrast, Cyt toxins directly interact with membrane lipids and insert into the membrane. Recent evidence suggests that Cyt toxins synergize with or overcome resistance (for instance, to mosquitocidal Cry proteins) by functioning as a membrane-bound receptor for Cry BIB005 . Once activated, the endotoxin binds to the gut epithelium and causes cell lysis by the formation of cation-selective channels, which leads to death. The activated region of the δ-endotoxin is composed of three distinct structural domains: an N-terminal helical bundle domain involved in membrane insertion and pore formation; a beta-sheet central domain involved in receptor binding; and a C-terminal beta-sandwich domain that interacts with the N-terminal domain to form a channel. After binding, the toxin adopts a conformation that allows its insertion into the cell membrane. Subsequently, oligomerization occurs, and the oligomer inserts into the brush border membrane to form a pore or ion channel that increases cationic permeability. This allows the free flux of ions and liquids, disrupting membrane transport and causing cell lysis, which leads to insect death BIB001 BIB004 . The complete nature of this process is still elusive. Differences in the extent of solubilization sometimes explain differences in the degree of toxicity among Cry proteins BIB002 . A reduction in solubility is speculated to be one potential mechanism for insect resistance [68] . Proteolytic processing appears to aid the solubilization of the Cry3A toxin in the midgut of insects BIB003 . Most recently, two models have been proposed for the action of crystal proteins: the sequential binding model and the signaling pathway model BIB007 .
An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Target Pests <s> Commercial biotechnology solutions for controlling lepidopteran and coleopteran insect pests on crops depend on the expression of Bacillus thuringiensis insecticidal proteins, most of which permeabilize the membranes of gut epithelial cells of susceptible insects. However, insect control strategies involving a different mode of action would be valuable for managing the emergence of insect resistance. Toward this end, we demonstrate that ingestion of double-stranded (ds)RNAs supplied in an artificial diet triggers RNA interference in several coleopteran species, most notably the western corn rootworm (WCR) Diabrotica virgifera virgifera LeConte. This may result in larval stunting and mortality. Transgenic corn plants engineered to express WCR dsRNAs show a significant reduction in WCR feeding damage in a growth chamber assay, suggesting that the RNAi pathway can be exploited to control insect pests via in planta expression of a dsRNA. <s> BIB001 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Target Pests <s> Since its discovery as a microbial insecticide, Bacillus thuringiensis has been widely used to control insect pests important in agriculture, forestry, and medicine. The wide variety of formulations based on spore-crystal complexes intended for ingestion by target insects, are the result of many years of research. The development of a great variety of matrices for support of the spore-crystal complex enables many improvements, such as an increase in toxic activity, higher palatability to insects, or longer shelf lives. These matrices use many chemical, vegetable or animal compounds to foster contact between crystals and insect midguts, without harming humans or the environment. Biotechnology companies are tasked with the production of these kinds of bioinsecticides. These companies must not only provide formulations tailored to specific crops and the insect pests, but they must also search for and produce bioinsecticides based on new strains of high potency, whether wild or genetically improved. It is expected that new products will appear on the market soon, providing an increased activity spectrum and applicability to many other pest-impacted crops. These products may help develop a more organic agriculture. This review article discusses recent patents related to bioinsecticides. <s> BIB002
It is well documented that many insects are susceptible to the toxic activity of Bt; of these, lepidopterans have been exceptionally well studied, and many toxins have shown activity against them BIB002 . The order Lepidoptera encompasses the majority of susceptible species, belonging to agriculturally important families such as Cossidae, Gelechiidae, Lymantriidae, Noctuidae, Pieridae, Pyralidae, Thaumetopoetidae, Tortricidae, and Yponomeutidae . Novel crystal proteins exhibiting insecticidal activity against lepidopterans have been reported from Bt strains BIB001 . Dipterans are also important target pests, and many of them are highly susceptible to Bt. The discovery of novel strains of Bt containing parasporal crystal proteins with pesticidal properties against whiteflies, aphids, leafhoppers, and possibly other sucking insects of agronomic importance has extended the potential applications of this bacterium. However, the toxic activities found in these novel strains are not limited to insects, as some of them produce crystals with activity against nematodes, protozoans, flukes, collembolans, mites and worms, among others BIB002 .
An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Production Media and Media Formulations <s> A response-surface methodology was used to study the effect of carbon:nitrogen ratio (C:N) and initial concentration of total solids (CTS) on insecticidal crystal protein production and final spore count. Bacillus thuringiensis var. kurstaki HD-73 was grown in a stirred-tank reactor using soybean meal, glucose, yeast extract, corn steep solids and mineral salts. Soybean meal and glucose were added according to a central composite experimental design to test C:N ratios ranging from 3:1 to 11:1 and CTS levels from 60 g/l to 150 g/l. Cry production was quantified using sodium dodecyl sulfate/polyacrylamide gel electrophoresis. The response-surface model, adjusted to the data, indicated that media with a C:N of 7:1 yielded the highest relative Cry production at each CTS. The spore count was higher at low C:N ratio (4:1) and high CTS (near 150 g/l). Specific Cry production varied from 0.6 to 2.2 g Cry/10¹⁰ spores. A 2.5-fold increase in CTS resulted in a six-fold increase of protoxin production at a 7:1 C:N ratio. It is concluded that the best production conditions for Cry and for spores are different and optimization of B. thuringiensis processes should not be done on a spore-count basis but on the amount of Cry synthesized. <s> BIB001 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Production Media and Media Formulations <s> Solid-state fermentation (SSF) has developed in eastern countries over many centuries, and has enjoyed broad application in these regions to date. By contrast, in western countries the technique had to compete with classical submerged fermentation and, because of the increasing pressure of rationalisation and standardisation, it has been widely superseded by classical submerged fermentation since the 1940s. This is mainly because of problems in engineering that appear when scaling up this technique. However, there are several advantages of SSF, for example high productivities, extended stability of products and low production costs, which say much about such an intensive biotechnological application. With increasing progress and application of rational methods in engineering, SSF will achieve higher levels in standardisation and reproducibility in the future. This can make SSF the preferred technique for special fields of application such as the production of enzymes and food. <s> BIB002 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Production Media and Media Formulations <s> Starch industry wastewater (SWW), slaughterhouse wastewater (SHWW) and secondary sludges from three different wastewater treatment plants (Jonquiere, JQS; Communaute Urbaine de Quebec, CUQS; and Black Lake, BLS) were used as raw materials for the production of Bacillus thuringiensis (Bt) based biopesticides in a pilot scale fermentor (100 L working volume). The slaughterhouse wastewater exhibited the lowest Bt growth and entomotoxicity (Tx) potential (measured against spruce budworm) due to low availability of carbon, nitrogen and other nutrients. Performance variation (growth, sporulation, proteolytic activity and Tx potential) within the three types of sludges was directly related to the availability of nitrogen and carbohydrates, which could change with sludge origin and methods employed for its generation.
The Tx potential of Bt obtained in different secondary sludges (JQS: 12 × 10⁹ SBU/L; CUQS: 13 × 10⁹ SBU/L and BLS: 16 × 10⁹ SBU/L) and SWW (18 × 10⁹ SBU/L) was higher than that in the soybean-based synthetic medium (10 × 10⁹ SBU/L). The maximum protease activity was obtained in CUQ secondary sludge (4.1 IU/mL) due to its high complex protein concentration. Nevertheless, the high carbohydrate concentration in SWW repressed enzyme production. The secondary sludges and SWW were found to be suitable raw materials for high potency Bt biopesticide production. <s> BIB003 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Production Media and Media Formulations <s> Response surface optimization of Bacillus thuringiensis subsp. israelensis HD 500 fermentation for the production of its toxin proteins Cry4Ba and Cry11Aa was performed. Since the interaction of the medium components as well as cultivation conditions were expected to influence the production of the toxin proteins, an experimental chart was prepared by accepting the previously reported optimal values for the most important parameters as zero points: [Mn], 10⁻⁶ M; [K₂HPO₄], 50 mM; C:N ratio, 20:1; and incubation temperature, 30 °C. When the combinations of these variables at different levels were studied in 30 batch cultures and analysed for the optimum toxin protein concentrations, the following conditions yielded the highest concentrations of both Cry4Ba and Cry11Aa toxin proteins: temperature, 28.3 °C; [Mn], 3.3 × 10⁻⁷ M; C:N ratio, 22.2; and [K₂HPO₄], 66.1 mM. <s> BIB004 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Production Media and Media Formulations <s> To develop a cost-effective process for the production of Bacillus thuringiensis-based insecticide, it is important to cultivate the bacterial strain in rich medium to obtain the highest yields of spore-crystal complexes. It was found that cultivation of the bacterium in medium with high concentrations of glucose (50–90 g l⁻¹) resulted in much lower bacterial spores, crystal protein and lower toxicity, when tested against Spodoptera littoralis and Anagasta kuehniella larvae. The best results were obtained with a glucose concentration of 20.0 g l⁻¹, as 7.1 × 10¹¹ spores ml⁻¹ and 3.4 g l⁻¹ of crystal protein were achieved, with LC₅₀ of 40.1 and 50.2 mg kg⁻¹ meal against S. littoralis and A. kuehniella, respectively. However, >21% of the consumed glucose was diverted into by-product synthesis at the expense of the spore-crystal protein mixture. Only 78.3% of consumed glucose was converted into spores and crystal protein. Among by-products formed, acetic acid and poly-β-hydroxybutyric acid (PHB) were produced during the p... <s> BIB005 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Production Media and Media Formulations <s> Bacillus thuringiensis (Bt) is a Gram-positive bacterium naturally found in soil, water and grain dust, and can be cultivated in liquid, solid and semi-solid media. The objective of this work was to test different media to grow B. thuringiensis. The seed culture (strain 344, B. thuringiensis tolworthi, belonging to the Embrapa Maize and Sorghum Microorganism Bank) was produced using shake flasks and grown in LB medium plus salts for 18 hours; cultures were incubated on a rotary shaker at 200 revolutions per minute (rpm) at 30°C for 96 hours.
Medium 1 was composed of Luria Bertani (LB) plus salts (FeSO₄, ZnSO₄, MnSO₄, MgSO₄) and 0.2% glucose; medium 2 was composed of 1.5% glucose and 0.5% soybean flour plus salts; and medium 3 was composed of liquid swine manure at 4% and 0.2% glucose. All three media were sterilized and inoculated with B. thuringiensis tolworthi (seed culture) at a stirrer speed of 200 rpm, for 96 hours at 30°C. The pH was measured at regular intervals, viable spores were counted as c.f.u./mL, cell mass was expressed in g/L (lyophilized), and spores were counted per mL of medium. All three media showed pH variation during the fermentation process. Media 1 and 2 showed a tendency to shift toward a basic pH and medium 3 toward an acidic pH. Media 1 and 2 showed the highest number of viable spores, 2.0 × 10⁸ c.f.u./mL, within the 96 hours of incubation; medium 2 showed a biomass dry weight of 1.18 g/L. During the fermentation period, medium 1 showed the highest spore concentration, 1.4 × 10⁹ spores/mL after 96 h of fermentation. Efficacy tests against S. frugiperda first-instar larvae showed that Bt produced in all three media killed more than 60% of larvae at the highest concentrations. <s> BIB006
Indeed, large quantities of spores with high insecticidal activity are required for practical applications. This means that, when handling Bt as a bioinsecticide, a high spore count alone is not sufficient to ensure toxicity; it is also necessary to reach high δ-endotoxin titers. Production and formulation are among the most underreported aspects of Bt, although some work exists on Bt growth in several synthetic or complex media . Several media formulations have been proposed by different authors. Our group explored the efficacy of various raw agricultural products as supplements to LB for enhancing toxin production, and found potato flour to be an efficient supplement to commercial Luria-Bertani (LB) medium . To develop a cost-effective process for the production of Bt-based insecticide, it is imperative to cultivate the bacterial strain in a nutrient-rich medium to obtain the highest yields of spore-crystal complexes. Conventionally, Bt crystals are produced employing submerged or liquid fermentation (SmF) techniques, but recently many workers have used nutrient-rich wastewater or sludge from various treatment plants as the medium for the production of Bt toxin BIB003 . Solid-state fermentation: Solid-state fermentation (SSF) has been developed in eastern countries over many centuries, and has enjoyed broad application in these regions to date [75] . The term SSF denotes the cultivation of microorganisms on solid, moist substrates in the absence of a free aqueous phase (water). There are several advantages to SSF; for example, high productivities, extended stability of products and low production costs, which say much about such an intensive biotechnological application. With increasing progress and application of rational methods in engineering, SSF will reach higher levels of standardization and reproducibility in the future. This can make SSF the preferred technique in special fields of application such as the production of enzymes and secondary metabolites, especially foods and pharmaceuticals BIB002 . Different production media and media compositions can change either the relative toxicity against several target insects or the insecticidal potency of products obtained from the same Bt strains BIB004 . According to Farrera et al. BIB001 , media with different compositions showed changes in crystal production, i.e., the amount of Cry protein produced per spore varied. The ingredients in the media affect the rate and synthesis of the different δ-endotoxins and also the size of the crystals produced. Using barley as the carbon source, Amin BIB005 developed a cost-effective protocol for the mass production of Bt. Several media based on complex substrates such as corn steep liquor , peptones, blackstrap molasses and Great Northern White Bean concentrate , or LB supplemented with agricultural products have been found efficient for Bt bioinsecticide production. Various investigators modified such commercial media by supplementing them with mineral nutrients or various salts, i.e., enriched media. Zouari et al. showed that Bt subspecies kurstaki produced 1 g/L of δ-endotoxin in 4.5 g/L total dry biomass in a complex liquid medium in which the sugar was replaced by gruel hydrolysate.
A mixture of extracts from potato and Bengal gram, or bird feather and de-oiled rice bran, or wheat bran, chickpea husk and corncob was used to cultivate Bt israelensis, and the mosquitocidal activity of the crude toxin was found to be higher than that produced in the conventional medium . Valicente et al. BIB006 used LB medium supplemented with various salts and agricultural by-products, like soybean flour (0.5%) and liquid swine manure (4%), to increase Bt biopesticide production by SmF, which resulted in 1.18 g/L dry cell mass. Zhuang et al. also claimed to have purified δ-endotoxin (up to 7.14 mg/g medium) by one-step centrifugation from a wastewater sludge-based medium; however, they did not provide any physical evidence for the purified crystals. From these reports, it seems that the maximum yield of Bt toxin attained is 3.6 g/L BIB006 in SmF or 7.14 g/kg medium in SSF , although the actual cost-effectiveness was not reported.
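Several of the media studies above tune formulations around a target carbon-to-nitrogen ratio (e.g., the 7:1 C:N optimum for Cry production reported by Farrera et al. BIB001 ). As a rough illustration of how such a ratio can be estimated when composing a medium, the Python sketch below computes an approximate C:N ratio for a hypothetical glucose/soybean-meal recipe; the elemental fractions assumed for soybean meal are ballpark assumptions, not figures from the studies reviewed here.

```python
# A minimal, illustrative C:N calculation for a glucose/soybean-meal medium.
# All composition values below are assumptions made for the sake of the example.
GLUCOSE_C_FRACTION = 0.40   # glucose (C6H12O6) is 40% carbon by mass
SOYBEAN_C_FRACTION = 0.45   # assumed ~45% carbon in soybean meal
SOYBEAN_N_FRACTION = 0.07   # assumed ~7% nitrogen (~44% protein / 6.25)

def cn_ratio(glucose_g_per_l: float, soybean_g_per_l: float) -> float:
    """Approximate carbon-to-nitrogen mass ratio of the medium."""
    carbon = glucose_g_per_l * GLUCOSE_C_FRACTION + soybean_g_per_l * SOYBEAN_C_FRACTION
    nitrogen = soybean_g_per_l * SOYBEAN_N_FRACTION
    return carbon / nitrogen

# Scan glucose levels for a fixed 20 g/L soybean-meal load,
# looking for a formulation near the reported 7:1 optimum.
for glucose in (0, 5, 10, 20):
    print(f"{glucose:>2} g/L glucose -> C:N = {cn_ratio(glucose, 20.0):.1f}:1")
```

With these assumed fractions, low glucose loads land near the 7:1 region while higher glucose pushes the ratio above 10:1, mirroring the trade-off between spore count and Cry yield discussed above.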
An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bioassay <s> Parasporal inclusion proteins from a total of 1744 Bacillus thuringiensis strains, consisting of 1700 Japanese isolates and 44 reference type strains of existing H serovars, were screened for cytocidal activity against human leukaemia T cells and haemolytic activity against sheep erythrocytes. Of 1684 B. thuringiensis strains having no haemolytic activity, 42 exhibited in vitro cytotoxicity against leukaemia T cells. These non-haemolytic but leukaemia cell-toxic strains belonged to several H-serovars including dakota, neoleonensis, shandongiensis, coreanensis and other unidentified serogroups. Purified parasporal inclusions of the three selected strains, designated 84-HS-1-11, 89-T-26-17 and 90-F-45-14, exhibited no haemolytic activity and no insecticidal activity against dipteran and lepidopteran insects, but were highly cytocidal against leukaemia T cells and other human cancer cells, showing different toxicity spectra and varied activity levels. Furthermore, the proteins from 84-HS-1-11 and 89-T-26-17 were able to discriminate between leukaemia and normal T cells, specifically killing the former cells. These findings may lead to the use of B. thuringiensis inclusion proteins for medical purposes. <s> BIB001 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bioassay <s> The protein toxins produced by Bacillus thuringiensis (Bt) are the most widely used natural insecticides in agriculture. Despite successful and extensive use of these toxins in transgenic crops, little is known about toxicity and resistance pathways in target insects since these organisms are not ideal for molecular genetic studies. To address this limitation and to investigate the potential use of these toxins to control parasitic nematodes, we are studying Bt toxin action and resistance in Caenorhabditis elegans. We demonstrate for the first time that a single Bt toxin can target a nematode. When fed Bt toxin, C. elegans hermaphrodites undergo extensive damage to the gut, a decrease in fertility, and death, consistent with toxin effects in insects. We have screened for and isolated 10 recessive mutants that resist the toxin's effects on the intestine, on fertility, and on viability. These mutants define five genes, indicating that more components are required for Bt toxicity than previously known. We find that a second, unrelated nematicidal Bt toxin may utilize a different toxicity pathway. Our data indicate that C. elegans can be used to undertake detailed molecular genetic analysis of Bt toxin pathways and that Bt toxins hold promise as nematicides. <s> BIB002 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bioassay <s> Laboratory tests were conducted to establish the relative toxicity of Bacillus thuringiensis (Bt) toxins and pollen from Bt corn to monarch larvae. Toxins tested included Cry1Ab, Cry1Ac, Cry9C, and Cry1F. Three methods were used: (i) purified toxins incorporated into artificial diet, (ii) pollen collected from Bt corn hybrids applied directly to milkweed leaf discs, and (iii) Bt pollen contaminated with corn tassel material applied directly to milkweed leaf discs. Bioassays of purified Bt toxins indicate that Cry9C and Cry1F proteins are relatively nontoxic to monarch first instars, whereas first instars are sensitive to Cry1Ab and Cry1Ac proteins. Older instars were 12 to 23 times less susceptible to Cry1Ab toxin compared with first instars. 
Pollen bioassays suggest that pollen contaminants, an artifact of pollen processing, can dramatically influence larval survival and weight gains and produce spurious results. The only transgenic corn pollen that consistently affected monarch larvae was from Cry1Ab event 176 hybrids, currently <2% of corn planted and for which re-registration has not been applied. Results from the other types of Bt corn suggest that pollen from the Cry1Ab (events Bt11 and Mon810) and Cry1F, and experimental Cry9C hybrids, will have no acute effects on monarch butterfly larvae in field settings. <s> BIB003 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bioassay <s> Bacillus thuringiensis (Bt) is a gram-positive, spore-forming bacterium, which is principally distinguished from other bacilli by the production of large, insecticidal, protein crystals (Insecticidal Crystal Proteins, or ICPs). These proteins are usually thought to act only on the actively feeding larvae of susceptible species by a mechanism which involves consumption and proteolytic processing of the protein followed by binding to, and lysis of, midgut epithelial cells. However, few authors have reported Bt toxicity to adult insects. In the following paper, we expand on previous reports of toxicity to adult insects and present data which demonstrate that: (1) proteolytically activated ICPs significantly reduce the lifespans of adult Heliothis virescens and Spodoptera exigua at concentrations of 500 μg/ml, but not 167 or 25 μg/ml, (2) individual activated ICPs are differentially toxic to adult H. virescens and S. exigua, and (3) adult S. exigua are sensitive to Cry1C protoxin at a concentration of 1 mg/ml. <s> BIB004 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bioassay <s> Eight Bacillus thuringiensis strains were used to test their activity against Plasmodium berghei. When crystal proteins extracted from strains 007, 017, 020, 021, 030, 032, and 037 were injected into plasmodium-infected mice through the tail vein at a rate of 0.45-1.5 mg per mouse, the lengths of survival for the mice were extended up to 5 days (from 8.5 days to 13.5-15 days). Blood-cell staining demonstrated that normal erythrocytes were lightly stained and regularly shaped while the erythrocytes from plasmodia-infected mice swelled, lost shape and even lysed. This means that the crystal proteins could protect erythrocytes from the plasmodium's attack. Protein analysis revealed that most of the proteins are homologues of classic crystal proteins, with the exception of the 120-kDa protein of strain 020, a surface-layer protein. This study suggested a novel way to control plasmodial infections and even malaria. <s> BIB005 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bioassay <s> Genetically modified crops, which produce pesticidal proteins from Bacillus thuringiensis, release the toxins into soils through root exudates and upon decomposition of crop residues. Although the phenomena of gene transfer and emergence of resistance have been well documented, the fate of these toxins in soil has not yet been clearly elucidated. The aim of this study was to elucidate the adsorption and the desorbability of the Cry1Aa Bt insecticidal protein in contact with two sodium-saturated clays: montmorillonite and kaolinite. Because the toxin is released into soil in small quantities, it was assumed that it will be in a monomeric state in solution until it oligomerizes on cell membranes.
The originality of this study was to focus on the monomeric form of the protein. Specific sample conditions were required to avoid polymerisation. A pH above 6.5 and an ionic strength of at least 150 mM (NaCl) were necessary to keep the protein in solution and in a monomeric state. The adsorption isotherms obtained were of the L-type (low affinity) for both clays and fitted the Langmuir equation. The adsorption maximum of the toxin, calculated by Langmuir nonlinear regression, decreased with increasing pH from 6.5, which was close to the isoelectric point, to 9. At pH 6.5, the calculated adsorption was 1.7 g g⁻¹ on montmorillonite and 0.04 g g⁻¹ on kaolinite. Desorbability measurements showed that a small fraction of toxin could be desorbed by water (up to 14%) and more by alkaline pH buffers (36 ± 7%), indicating that it was not tightly bound. Numerous surfactants were evaluated and the toxin was found to be easily desorbed from both clays when using zwitterionic and nonionic surfactants such as CHAPS, Triton-X-100, and Tween 20. This finding has important implications for the optimization of detection methods for Bt toxin in soil. <s> BIB006 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bioassay <s> Bacillus thuringiensis (Bt) toxins present a potential for control of pest mites. Information concerning the effect of Bt and its possible application to the biocontrol of synathropic mites is rare. The toxic effect of Bacillus thuringiensis var. tenebrionis producing Cry3A toxin was tested on the mites Acarus siro L., Tyrophagus putrescentiae (Schrank), Dermatophagoides farinae Hughes, and Lepidoglyphus destructor (Schrank) via feeding tests. Fifty mites were reared on Bt-additive diets at concentrations that ranged from 0 to 100 mg g⁻¹ under optimal conditions for their development. After 21 days, the mites were counted and the final populations were analyzed using a polynomial regression model. The Bt diet suppressed population growth of the four mite species. The fitted doses of Bt for 50% suppression of population growth were diets ranging from 25 to 38 mg g⁻¹. There were no remarkable differences among species. Possible applications of Bt for the control of synanthropic mites are discussed. <s> BIB007 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bioassay <s> A strain of Bacillus causing disease in caterpillars of Arctornis submarginata, a defoliator of the tea crop, was isolated from the Darjeeling terai region. The strain showed positive reactions in lysine decarboxylase, ornithine decarboxylase, Voges-Proskauer, citrate utilization, nitrate reduction and in utilization of trehalose and glucose; differences from Btk were observed in the ONPG test and in utilization of citrate, arabinose, xylose, cellobiose, melibiose and saccharose. The doubling time was 84 min, which is exactly double that of Btk. No difference was evident between the protein profile of the strain and that of Btk. The LC₅₀ value was found to be 398.1 µg/ml, with a fiducial lower limit (LL) of 353.06 µg/ml and an upper limit (UL) of 443.14 µg/ml. The LC₅₀ value of the new strain was lower than that of Btk, which was found to be 537.0 µg/ml (LL 483.63 µg/ml, UL 590.37 µg/ml). The LT₅₀ values of the new strain were also lower than those of Btk: 7.28 days for 1000 µg/ml and 8.88 days for 750 µg/ml, as compared to LT₅₀ values of 7.57 days for 1000 µg/ml and 9.5 days for 750 µg/ml for Btk.
These findings opened up the possibility of developing the new strain as a microbial pesticide after standardizing its formulation and determining its safety aspects. <s> BIB008 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bioassay <s> Samples of Bacillus thuringiensis (Bt) were collected from soil and insects. Eight isolates were selected from rural soil, 15 from urban soil and 11 from insects. These were evaluated for entomopathogenicity against larvae of Anticarsia gemmatalis and Culex quinquefasciatus. The pathogenicity tests showed that a higher percentage of isolates were active against A. gemmatalis (60%) compared to C. quinquefasciatus (31%). Probit analysis (LC₅₀) indicated that four of the isolates presented values similar to the reference strain against A. gemmatalis, while against C. quinquefasciatus one isolate showed an LC₅₀ similar to the reference strain (IPS-82). SDS-PAGE characterisation of two isolates showed a 27 kDa protein fraction related to the Bt subspecies israelensis cytolytic toxin (cyt) gene. One 130 kDa protein, possibly related to the Bt crystal inclusions (cry1) gene, was identified in the other two isolates, which were more toxic to lepidopterans; another isolate presented a protein of 100 kDa. Some new local Bt isolates had LC₅₀ probit values similar to those of the reference strains. <s> BIB009 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Bioassay <s> During the past decade the pesticidal bacterium Bacillus thuringiensis has been the subject of intensive research. These efforts have yielded considerable data about the complex relationships between the structure, mechanism of action, and genetics of the organism's pesticidal crystal proteins, and a coherent picture of these relationships is beginning to emerge. Other studies have focused on the ecological role of the B. thuringiensis crystal proteins, their performance in agricultural and other natural settings, and the evolution of resistance mechanisms in target pests. Armed with this knowledge base and with the tools of modern biotechnology, researchers are now reporting promising results in engineering more-useful toxins and formulations, in creating transgenic plants that express pesticidal activity, and in constructing integrated management strategies to insure that these products are utilized with maximum efficiency and benefit. <s> BIB010
Well-designed studies under confined conditions are required to understand the effects of Bt toxins on different organisms. Bt toxins are also considered to be toxic to lepidopterous, coleopterous and dipterous insects, in addition to mites, nematodes, protozoa and flukes BIB010 . These proteins are usually thought to act only on the actively feeding larvae of susceptible species by a mechanism involving consumption and proteolytic processing of the protein followed by binding to, and lysis of, midgut epithelial cells. It was found that proteolytically activated insecticidal crystal proteins significantly reduced the lifespan of adult Heliothis virescens and Spodoptera exigua at concentrations of 500 μg/ml, but not 167 or 25 μg/ml, under the assay conditions used BIB004 . Bt crystal proteins showed in vitro cytotoxicity against human cancer cells and leukemic T cells BIB001 . Interestingly, Xu et al. BIB005 demonstrated that Bt crystal proteins can protect plasmodium-infected mice from malaria. Moreover, toxicity against non-conventional targets such as the nematode Caenorhabditis elegans has been demonstrated for the first time BIB002 . Toxins of Btk strain HD1 have been widely used to control forest pests such as the gypsy moth, the spruce budworm, the pine processionary moth, the European pine shoot moth and the nun moth . Direct feeding of crude pellets containing Bt toxin and pollen diet formulations [92] are the usual modes of application practiced in entomotoxicity assays. A different feeding strategy was successfully used for the bioassay of A. guerreronis, in which dried solid-fermented powder was brushed directly onto infested coconut buttons . Many authors have used surfactants such as BIT (1,2-benzisothiazolin-3-one), one of the inert ingredients in Foray 48B (a Btk formulation); siloxanes (organosilicones), Triton X-100, Tween 20 and Latron CS-7 are other surfactants used in Btk formulations BIB006 . The mortality of Thaumetopoea solitaria upon application of Btk toxin has been demonstrated by Er et al. . Purified Btk toxin inhibited the growth of monarch larvae, but did not cause mortality BIB003 . The LC₅₀ value of Btk was found to be 398.1 μg/ml against caterpillars of Arctornis submarginata BIB008 . The toxicity of several formulations of Btk to the beet armyworm (Spodoptera exigua) was determined using neonate larvae in a diet incorporation bioassay. Probit analysis (LC₅₀) has been used by many authors for ascertaining the efficacy of various Bt formulations. For instance, Yashodha and Kuppusamy successfully used a dipping method for testing the efficacy of a Btk formulation in Tween 20 on brinjal. Gobatto et al. BIB009 used various concentrations of Bt spore suspensions for estimating probit values against a mosquito and a moth. Payne et al. employed an artificial feeding assay for the two-spotted spider mite (T. urticae), a mite related to E. orientalis, with a different feeding regime. They fed the mites with 5 mg of spray-dried powder of Bt broth (a mixture of spores, crystals and cellular debris) in 1 ml of sucrose (10%) containing preservatives and surfactant. The possible use of a Bt preparation (Dipel 2X) as a substitute for chemical insecticides (Lannate and Hostathion) was evaluated against two major pests of the potato crop, Agrotis sp. and Spodoptera exigua. Toxicity studies of Bt against the four larval instars of the diamondback moth, Plutella xylostella (L.), suggested that Bt could be an important agent for the control of the larval instars of this pest .
The Bt diet suppressed the population growth of four mite species, Acarus siro L., Tyrophagus putrescentiae (Schrank), Dermatophagoides farinae Hughes, and Lepidoglyphus destructor (Schrank), in feeding tests BIB007 .
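Since many of the bioassays above report potency as LC₅₀ values from probit analysis, a minimal worked example may help make the procedure concrete. The sketch below fits a simple least-squares probit line to made-up dose-mortality data; it is only an illustration, as published analyses (including the fiducial limits quoted above) use maximum-likelihood probit regression on replicated insect counts.

```python
# A minimal probit-analysis sketch for LC50 estimation.
# The dose-mortality data below are hypothetical, for illustration only.
import numpy as np
from scipy.stats import norm

dose = np.array([50.0, 100.0, 200.0, 400.0, 800.0])    # toxin dose, ug/ml
mortality = np.array([0.08, 0.22, 0.45, 0.71, 0.93])   # fraction of larvae killed

# Probit transform: mortality -> standard-normal quantile, regressed on log10(dose)
x = np.log10(dose)
y = norm.ppf(mortality)
slope, intercept = np.polyfit(x, y, 1)

# LC50 is the dose at which the fitted probit line crosses 0 (i.e., 50% mortality)
lc50 = 10 ** (-intercept / slope)
print(f"Estimated LC50 = {lc50:.1f} ug/ml")
```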
An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Resistance to Bt Toxins <s> Insecticides derived from the common soil bacterium Bacillus thuringiensis (Bt) are becoming increasingly important for pest management. Insecticidal crystal proteins (also called δ-endotoxins) from Bt are extremely toxic to certain pests, yet cause little or no harm to humans, most beneficial insects, and other nontarget organisms (19, 40). After proteolytic activation in the insect midgut, Bt toxins bind to the brush border membrane of the midgut epithelium and create pores that cause cells to swell and lyse (55). Technical innovations, including expression of Bt toxin genes in transgenic crop plants and transgenic bacteria, should increase the usefulness of Bt (6, 12, 35, 46, 91, 94, 109, 116). At the same time, mounting concerns about environmental hazards and widespread resistance in pest populations are reducing the value of conventional synthetic insecticides. Because Bt had been used commercially for more than two decades without reports of substantial resistance development in open field populations, some scientists had presumed that evolution of resistance was unlikely (21, 89). However, resistance to Bt was documented recently in field populations of diamondback moth in Hawaii, the continental US, and Asia (36, 42, 92a, 128, 129, 131, 138, 146, 148, 151). These reports confirmed suspicions raised by the results of laboratory selection for resistance to Bt in several major pests (102, 104, 135). Scientists in industry, government, and academia now recognize evolution of resistance to Bt in pests as the greatest threat to the continued success of Bt (18, 44, 58, 59, 101a, 108). To delay or reverse resistance to Bt in pests, we must first understand <s> BIB001 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Resistance to Bt Toxins <s> We compared fitness components of Bacillus thuringiensis CryIIIA δ-endotoxin-resistant and susceptible strains of Colorado potato beetle, Leptinotarsa decemlineata (Say). The resistant strain had been selected for 35 generations, and the resistance ratio was >700-fold compared with the susceptible strain. The F36 of both strains was used in this study. We found that the viability of eggs produced by resistant and susceptible females was high (97.8 ± 3.7 and 98.3 ± 2.9%, respectively), but eggs of the resistant strain tended to have longer viability than those of the susceptible strain (5.7 ± 0.3 and 5.2 ± 0.2 d, respectively). Slower larval development was associated with resistance to CryIIIA δ-endotoxin in Colorado potato beetle. In addition, resistant females produced 60% fewer eggs than susceptible females. The resistant females also exhibited a shorter oviposition period and fewer eggs per egg mass (16.3 ± 2.4 versus 27.8 ± 11.3). These results are discussed together with various resistance management strategies for Colorado potato beetle controlled by conventional B. thuringiensis sprays and transgenic plants. <s> BIB002 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Resistance to Bt Toxins <s> Environmentally benign insecticides derived from the soil bacterium Bacillus thuringiensis (Bt) are the most widely used biopesticides, but their success will be short-lived if pests quickly adapt to them. The risk of evolution of resistance by pests has increased, because transgenic crops producing insecticidal proteins from Bt are being grown commercially.
Efforts to delay resistance with two or more Bt toxins assume that independent mutations are required to counter each toxin. Moreover, it generally is assumed that resistance alleles are rare in susceptible populations. We tested these assumptions by conducting single-pair crosses with diamondback moth (Plutella xylostella), the first insect known to have evolved resistance to Bt in open field populations. An autosomal recessive gene conferred extremely high resistance to four Bt toxins (Cry1Aa, Cry1Ab, Cry1Ac, and Cry1F). The finding that 21% of the individuals from a susceptible strain were heterozygous for the multiple-toxin resistance gene implies that the resistance allele frequency was 10 times higher than the most widely cited estimate of the upper limit for the initial frequency of resistance alleles in susceptible populations. These findings suggest that pests may evolve resistance to some groups of toxins much faster than previously expected. <s> BIB003 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Resistance to Bt Toxins <s> A population (SERD3) of the diamondback moth (Plutella xylostella L.) with field-evolved resistance to Bacillus thuringiensis subsp. kurstaki HD-1 (Dipel) and B. thuringiensis subsp. aizawai (Florbac) was collected. Laboratory-based selection of two subpopulations of SERD3 with B. thuringiensis subsp. kurstaki (Btk-Sel) or B. thuringiensis subsp. aizawai (Bta-Sel) increased resistance to the selecting agent with little apparent cross-resistance. This result suggested the presence of independent resistance mechanisms. Reversal of resistance to B. thuringiensis subsp. kurstaki and B. thuringiensis subsp. aizawai was observed in the unselected SERD3 subpopulation. Binding to midgut brush border membrane vesicles was examined for insecticidal crystal proteins specific to B. thuringiensis subsp. kurstaki (Cry1Ac), B. thuringiensis subsp. aizawai (Cry1Ca), or both (Cry1Aa and Cry1Ab). In the unselected SERD3 subpopulation (ca. 50- and 30-fold resistance to B. thuringiensis subsp. kurstaki and B. thuringiensis subsp. aizawai), specific binding of Cry1Aa, Cry1Ac, and Cry1Ca was similar to that for a susceptible population (ROTH), but binding of Cry1Ab was minimal. The Btk-Sel (ca. 600- and 60-fold resistance to B. thuringiensis subsp. kurstaki and B. thuringiensis subsp. aizawai) and Bta-Sel (ca. 80- and 300-fold resistance to B. thuringiensis subsp. kurstaki and B. thuringiensis subsp. aizawai) subpopulations also showed reduced binding to Cry1Ab. Binding of Cry1Ca was not affected in the Bta-Sel subpopulation. The results suggest that reduced binding of Cry1Ab can partly explain resistance to B. thuringiensis subsp. kurstaki and B. thuringiensis subsp. aizawai. However, the binding of Cry1Aa, Cry1Ac, and Cry1Ca and the lack of cross-resistance between the Btk-Sel and Bta-Sel subpopulations also suggest that additional resistance mechanisms are present. <s> BIB004 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Resistance to Bt Toxins <s> During the past decade the pesticidal bacterium Bacillus thuringiensis has been the subject of intensive research. These efforts have yielded considerable data about the complex relationships between the structure, mechanism of action, and genetics of the organism's pesticidal crystal proteins, and a coherent picture of these relationships is beginning to emerge. Other studies have focused on the ecological role of the B.
thuringiensis crystal proteins, their performance in agricultural and other natural settings, and the evolution of resistance mechanisms in target pests. Armed with this knowledge base and with the tools of modern biotechnology, researchers are now reporting promising results in engineering more-useful toxins and formulations, in creating transgenic plants that express pesticidal activity, and in constructing integrated management strategies to insure that these products are utilized with maximum efficiency and benefit. <s> BIB005 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Resistance to Bt Toxins <s> Evolution of resistance in pests can reduce the effectiveness of insecticidal proteins from Bacillus thuringiensis (Bt) produced by transgenic crops. We analyzed results of 77 studies from five continents reporting field monitoring data for resistance to Bt crops, empirical evaluation of factors affecting resistance or both. Although most pest populations remained susceptible, reduced efficacy of Bt crops caused by field-evolved resistance has been reported now for some populations of 5 of 13 major pest species examined, compared with resistant populations of only one pest species in 2005. Field outcomes support theoretical predictions that factors delaying resistance include recessive inheritance of resistance, low initial frequency of resistance alleles, abundant refuges of non-Bt host plants and two-toxin Bt crops deployed separately from one-toxin Bt crops. The results imply that proactive evaluation of the inheritance and initial frequency of resistance are useful for predicting the risk of resistance and improving strategies to sustain the effectiveness of Bt crops. <s> BIB006
Laboratory-selected strains: In the past, it was believed that insects would not develop resistance to Bt toxins, since Bt and insects have coevolved. Starting in the mid-1980s, however, a number of insect populations of several different species with different levels of resistance to Bt crystal proteins were obtained by laboratory selection experiments, using either laboratory-adapted insects or insects collected from wild populations BIB001 . Examples of laboratory-selected insects resistant to individual Cry toxins include the Indian mealmoth (Plodia interpunctella), the almond moth (Cadra cautella), the Colorado potato beetle (Leptinotarsa decemlineata), the cotton leafworm (Spodoptera littoralis), the beet armyworm (S. exigua), etc. BIB005 . Given the multiple steps in processing the crystal to an active toxin, it is not surprising that insect populations might develop various means of resisting intoxication. It is important, however, to keep in mind that selection in the laboratory may be very different from selection that occurs in the field. Insect populations maintained in the laboratory presumably have a considerably lower level of genetic diversity than field populations. Several laboratory experiments to select for Bt resistance in diamondback moths failed, although the diamondback moth is the only known insect reported so far to have developed resistance to Bt in the field BIB005 . It is possible that the genetic diversity of the starting populations was too narrow and thus did not include resistance alleles. In the laboratory, insect populations are genetically isolated; dilution of resistance by mating with susceptible insects, as observed in field populations, is excluded BIB005 . In addition, the natural environment may contain factors affecting the viability or fecundity of resistant insects, i.e., factors excluded from the controlled environment of the laboratory. Resistance mechanisms can be associated with certain fitness costs that can be deleterious under natural conditions BIB002 . Natural enemies, such as predators and parasites, can influence the development of resistance to Bt by preferring either the intoxicated, susceptible insects or the healthy, resistant ones. In the former case, one would expect an increase in resistance development, while in the latter, natural enemies can help to retard resistance development to Bt. Nevertheless, selection experiments in the laboratory are valuable because they reveal possible resistance mechanisms and make genetic studies of resistance possible. Field-selected strains: The first case of field-selected resistance to Bt was reported from Hawaii, where populations of diamondback moth showed different levels of susceptibility to a formulated Bt product (Dipel). Populations from heavily treated areas proved more resistant than those treated at lower levels, with the highest level of resistance at 30-fold BIB001 . The resistance trait is conferred largely by a single autosomal recessive locus BIB003 . This "Hawaii" resistance allele simultaneously confers cross-resistance to Cry1Aa, Cry1Ab, Cry1Ac, Cry1Fa, and Cry1Ja but not to Cry1Ba, Cry1Bb, Cry1Ca, Cry1Da, Cry1Ia, or Cry2Aa (369). At least one Cry1A-resistant diamondback moth strain has been shown to be very susceptible to Cry9C . Resistance to Btk products and the resulting failure in diamondback moth control have led to the extensive use of Bt subsp. aizawai-based insecticides in certain locations BIB005 .
Insects in two colonies from Hawaii showed up to 20-fold resistance to Cry1Ca, compared to several other colonies, including one obtained earlier from the same location, as well as moderately high resistance to Cry1Ab and Btk-based formulations BIB005 . A Malaysian strain simultaneously highly resistant to the kurstaki and aizawai subspecies was apparently mutated in several loci . A Cry1Ab resistance allele associated with reduced binding to brush border membrane vesicle receptors was partially responsible for resistance to both subspecies. Genetic determinants responsible for subspecies kurstaki-specific and subspecies aizawai-specific resistance segregated separately from each other and from the Cry1Ab resistance allele in genetic experiments BIB004 . After less than two decades of intensive use of Btk in crucifer agriculture, resistant insects have evolved in numerous geographically isolated regions of the world, and subspecies aizawai resistance is beginning to appear even more rapidly. Defying the expectations of scientists monitoring transgenic crops such as corn and cotton that produce insecticidal proteins derived from Bt, target insect pests have developed little or no resistance to Bt crops thus far, according to US Department of Agriculture-funded scientists. These findings suggest that transgenic Bt crops could enjoy more extended, more profitable commercial life cycles and that the measures established to mitigate resistance before the crops were introduced are paying off . The evolution of resistance in pests can reduce the effectiveness of insecticidal proteins from Bt produced by transgenic crops. Field outcomes support theoretical predictions that factors delaying resistance include recessive inheritance of resistance, low initial frequency of resistance alleles, abundant refuges of non-Bt host plants, and two-toxin Bt crops deployed separately from one-toxin Bt crops. These results imply that proactive evaluation of the inheritance and initial frequency of resistance is useful for predicting the risk of resistance and improving strategies to sustain the effectiveness of Bt crops BIB006 .
An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Resistance Management <s> We lack an empirical basis on which to judge the expected durability of crops that express one or more insecticidal proteins and must therefore rely upon theoretical population genetic models in assessing how best to delay pest adaptation to these toxins. A number of studies using such models indicate that expression of toxins at very high levels could slow pest adaptation to a crawl if the ecology and genetics of the pest and cropping system fit specific assumptions. These assumptions relate to: (1) inheritance of resistance factors; (2) ecological costs of resistance factors; (3) behavioral response of larvae and adults to the toxins; (4) plant‐to‐plant movement of larvae; (5) adult dispersal and mating behavior; and (6) distribution of host plants that do and do not produce the toxin(s). This paper includes a discussion of whether the biology of insect pests of a number of cropping systems that are targets for toxin‐expressing plants fit assumptions that are conducive to slowing pest adaptation. Emphas... <s> BIB001 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Resistance Management <s> The evolution of resistance in pests such as the European corn borer will imperil transgenic maize varieties that express insecticidal crystal proteins of Bacillus thuringiensis. Patchworks of treated and untreated fields can delay the evolution of pesticide resistance, but the untreated refuge fields are likely to sustain heavy damage. A strategy that exploits corn borer preferences and movements can eliminate this problem. Computer simulation indicates that this approach can delay the evolution of resistance and reduce insect damage in the untreated fields of a patchwork planting regime. <s> BIB002 </s> An Overview on the Crystal Toxins from Bacillus thuringiensis <s> Resistance Management <s> During the past decade the pesticidal bacterium Bacillus thuringiensis has been the subject of intensive research. These efforts have yielded considerable data about the complex relationships between the structure, mechanism of action, and genetics of the organism’s pesticidal crystal proteins, and a coherent picture of these relationships is beginning to emerge. Other studies have focused on the ecological role of the B. thuringiensis crystal proteins, their performance in agricultural and other natural settings, and the evolution of resistance mechanisms in target pests. Armed with this knowledge base and with the tools of modern biotechnology, researchers are now reporting promising results in engineering more-useful toxins and formulations, in creating transgenic plants that express pesticidal activity, and in constructing integrated management strategies to insure that these products are utilized with maximum efficiency and benefit. <s> BIB003
Resistance management strategies try to prevent or diminish the selection of the rare individuals carrying resistance genes and hence to keep the frequency of resistance genes sufficiently low for insect control BIB002 BIB001 . Proposed strategies include the use of multiple toxins (stacking or pyramiding), crop rotation, high or ultrahigh dosages, and spatial or temporal refugia (toxin-free areas). Retrospective analysis of resistance development does support the use of refugia . Experience with transgenic crops expressing cry genes grown under different agronomic conditions is essential to define the requirements of resistance management. In transgenic plants, selection pressure could be reduced by restricting the expression of the crystal protein genes to certain tissues of the crop (those most susceptible to pest damage) so that only certain parts of the plant are fully protected, the remainder providing a form of spatial refuge. It has been proposed that cotton lines in which Cry gene expression is limited to the young bolls may not suffer dramatic yield loss from Heliothis larvae feeding on other plant structures, since cotton plants can compensate for a high degree of pest damage . Another management option is the rotation of plants or sprays of a particular Bt toxin with those having another toxin type that binds to a different receptor. A very attractive resistance management tactic is the combination of a high-dose strategy with the use of refugia BIB003 .
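To see why recessive inheritance and refuges are expected to delay resistance, it can help to sketch a toy single-locus selection model. The deterministic model below (random mating; resistance allele R fully recessive; susceptible genotypes surviving only in the refuge) is a textbook-style caricature with assumed parameter values, not a reconstruction of any model cited in this section.

```python
# Toy deterministic model: spread of a recessive resistance allele under a refuge.
def next_freq(q: float, refuge: float) -> float:
    """One generation of selection.
    q: frequency of the resistance allele R; refuge: fraction of non-Bt hosts.
    RR survives everywhere; RS and SS survive only in the refuge."""
    p = 1.0 - q
    w_rr = 1.0
    w_rs = w_ss = refuge
    w_bar = q * q * w_rr + 2 * p * q * w_rs + p * p * w_ss
    return q * (q * w_rr + p * w_rs) / w_bar

for refuge in (0.05, 0.2, 0.5):
    q, gens = 1e-3, 0                 # assumed initial allele frequency of 0.1%
    while q < 0.5 and gens < 5000:
        q, gens = next_freq(q, refuge), gens + 1
    print(f"refuge = {refuge:.0%}: R reaches 50% after ~{gens} generations")
```

Larger refuges dilute selection against heterozygotes and susceptibles, so the allele lingers far longer at low frequency; this is the intuition behind the high-dose/refuge strategy mentioned above.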
Word Embeddings: A Survey <s> Introduction <s> In a document retrieval, or other pattern matching environment where stored entities (documents) are compared with each other or with incoming patterns (search requests), it appears that the best indexing (property) space is one where each entity lies as far away from the others as possible; in these circumstances the value of an indexing system may be expressible as a function of the density of the object space; in particular, retrieval performance may correlate inversely with space density. An approach based on space density computations is used to choose an optimum indexing vocabulary for a collection of documents. Typical evaluation results are shown, demonstrating the usefulness of the model. <s> BIB001 </s> Word Embeddings: A Survey <s> Introduction <s> If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize.com/projects/wordreprs/ <s> BIB002 </s> Word Embeddings: A Survey <s> Introduction <s> Continuous space language models have recently demonstrated outstanding results across a variety of tasks. In this paper, we examine the vector-space word representations that are implicitly learned by the input-layer weights. We find that these representations are surprisingly good at capturing syntactic and semantic regularities in language, and that each relationship is characterized by a relation-specific vector offset. This allows vector-oriented reasoning based on the offsets between words. For example, the male/female relationship is automatically learned, and with the induced vector representations, “King - Man + Woman” results in a vector very close to “Queen.” We demonstrate that the word vectors capture syntactic regularities by means of syntactic analogy questions (provided with this paper), and are able to correctly answer almost 40% of the questions. We demonstrate that the word vectors capture semantic regularities by using the vector offset method to answer SemEval-2012 Task 2 questions. Remarkably, this method outperforms the best previous systems. <s> BIB003 </s> Word Embeddings: A Survey <s> Introduction <s> Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus.
The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition. <s> BIB004 </s> Word Embeddings: A Survey <s> Introduction <s> Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block. Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches. In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings. The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts. <s> BIB005 </s> Word Embeddings: A Survey <s> Introduction <s> We present a comprehensive study of evaluation methods for unsupervised embedding techniques that obtain meaningful representations of words from text. Different evaluations result in different orderings of embedding methods, calling into question the common assumption that there is one single optimal vector representation. We present new evaluation techniques that directly compare embeddings with respect to specific queries. These methods reduce bias, provide greater insight, and allow us to solicit data-driven relevance judgments rapidly and accurately through crowdsourcing. <s> BIB006
The task of representing words and documents is part and parcel of most, if not all, Natural Language Processing (NLP) tasks. In general, it has been found useful to represent them as vectors, which have an appealing, intuitive interpretation, can be the subject of useful operations (e.g. addition, subtraction, distance measures) and lend themselves well to use in many Machine Learning (ML) algorithms and strategies. The Vector Space Model (VSM), generally attributed to BIB001 and stemming from the Information Retrieval (IR) community, is arguably the most successful and influential model for encoding words and documents as vectors.

Another very important part of natural language-based solutions is, of course, the study of language models. A language model is a statistical model of language usage; it focuses mainly on predicting the next word given a number of previous words. This is very useful, for instance, in speech recognition software, where one needs to correctly decide which word was said by the speaker, even when signal quality is poor or there is a lot of background noise.

These two seemingly independent fields have arguably been brought together by recent research on Neural Network Language Models (NNLMs), with Bengio et al. (2003) having developed the first large-scale language models based on neural nets. Their idea was to reframe the problem as an unsupervised learning problem. A key feature of this solution is the way raw word vectors are first projected onto a so-called embedding layer before being fed into other layers of the network. Among other reasons, this was imagined to help ease the effect of the curse of dimensionality on language models, and to help generalization (Bengio et al. (2003)).

With time, such word embeddings have emerged as a topic of research in and of themselves, with the realization that they can be used as standalone features in many NLP tasks (BIB002) and the fact that they encode surprisingly accurate syntactic and semantic word relationships (BIB003). More recently, other ways of creating embeddings have surfaced, which rely not on neural networks and embedding layers but on leveraging word-context matrices to arrive at vector representations for words. Among the most influential models we can cite the GloVe model (BIB004).

These two types of model have something in common, namely their reliance on the assumption that words with similar contexts (other words) have the same meaning. This has been called the distributional hypothesis, and was suggested some time ago by , among others. This brings us to the definition of word embeddings we will use in this article, as suggested by the literature (for instance, BIB002; Blacoe and Lapata (2012); BIB006), according to which word embeddings are dense, distributed, fixed-length word vectors, built using word co-occurrence statistics as per the distributional hypothesis. Embedding models derived from neural network language models have been called prediction-based models (BIB005), since they usually leverage language models, which predict the next word given its context. Other, matrix-based models have been called count-based models, due to their taking into account global word-context co-occurrence counts to derive word embeddings. These are described next.

This survey is structured as follows: in Section 2 we describe the origins of statistical language modelling.
In Section 3 we give an overview of word embeddings, generated both by so-called prediction-based models and by count-based methods. In Section 4 we conclude, and in Section 5 we provide some pointers to promising further research topics.
Word Embeddings: A Survey <s> Motivation <s> If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize.com/projects/wordreprs/ <s> BIB001 </s> Word Embeddings: A Survey <s> Motivation <s> Continuous space language models have recently demonstrated outstanding results across a variety of tasks. In this paper, we examine the vector-space word representations that are implicitly learned by the input-layer weights. We find that these representations are surprisingly good at capturing syntactic and semantic regularities in language, and that each relationship is characterized by a relation-specific vector offset. This allows vector-oriented reasoning based on the offsets between words. For example, the male/female relationship is automatically learned, and with the induced vector representations, “King Man + Woman” results in a vector very close to “Queen.” We demonstrate that the word vectors capture syntactic regularities by means of syntactic analogy questions (provided with this paper), and are able to correctly answer almost 40% of the questions. We demonstrate that the word vectors capture semantic regularities by using the vector offset method to answer SemEval-2012 Task 2 questions. Remarkably, this method outperforms the best previous systems. <s> BIB002
To our knowledge, there is no comprehensive survey on word embeddings, let alone one that includes modern developments in this area. Furthermore, we think such a work is useful in light of the usefulness of word embeddings in a variety of downstream NLP tasks (BIB001) and of the strikingly accurate semantic information encoded in such vectors (BIB002).
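As a tiny illustration of the vector-offset regularities just mentioned, consider the following sketch; the embedding table is a hand-crafted toy of ours, and real vectors would of course come from a trained model:

```python
import numpy as np

# Toy embedding table; in practice these vectors come from a trained model.
emb = {
    "king":  np.array([0.8, 0.3, 0.1]),
    "queen": np.array([0.8, 0.3, 0.9]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.9, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def analogy(a, b, c):
    """Return the word closest to vec(b) - vec(a) + vec(c), excluding the inputs."""
    target = emb[b] - emb[a] + emb[c]
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(emb[w], target))

print(analogy("man", "king", "woman"))  # -> 'queen'
```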
Word Embeddings: A Survey <s> Background: The Vector Space Model and Statistical Language Modelling <s> In a previous article, we presented a systematic computational study of the extraction of semantic representations from the word–word co-occurrence statistics of large text corpora. The conclusion was that semantic vectors of pointwise mutual information values from very small co-occurrence windows, together with a cosine distance measure, consistently resulted in the best representations across a range of psychologically relevant semantic tasks. This article extends that study by investigating the use of three further factors—namely, the application of stop-lists, word stemming, and dimensionality reduction using singular value decomposition (SVD)—that have been used to provide improved performance elsewhere. It also introduces an additional semantic task and explores the advantages of using a much larger corpus. This leads to the discovery and analysis of improved SVD-based methods for generating semantic representations (that provide new state-of-the-art performance on a standard TOEFL task) and the identification and discussion of problems and misleading results that can arise without a full systematic study. <s> BIB001 </s> Word Embeddings: A Survey <s> Background: The Vector Space Model and Statistical Language Modelling <s> The supremacy of n-gram models in statistical language modelling has recently been challenged by parametric models that use distributed representations to counteract the difficulties caused by data sparsity. We propose three new probabilistic language models that define the distribution of the next word in a sequence given several preceding words by using distributed representations of those words. We show how real-valued distributed representations for words can be learned at the same time as learning a large set of stochastic binary hidden features that are used to predict the distributed representation of the next word from previous distributed representations. Adding connections from the previous states of the binary hidden features improves performance as does adding direct connections between the real-valued distributed representations. One of our models significantly outperforms the very best n-gram models. <s> BIB002 </s> Word Embeddings: A Survey <s> Background: The Vector Space Model and Statistical Language Modelling <s> We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance. 
<s> BIB003 </s> Word Embeddings: A Survey <s> Background: The Vector Space Model and Statistical Language Modelling <s> We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by Mikolov et al., and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant. We find that another embedding method, NCE, is implicitly factorizing a similar matrix, where each cell is the (shifted) log conditional probability of a word given its context. We show that using a sparse Shifted Positive PMI word-context matrix to represent words improves results on two word similarity tasks and one of two analogy tasks. When dense low-dimensional vectors are preferred, exact factorization with SVD can achieve solutions that are at least as good as SGNS's solutions for word similarity tasks. On analogy questions SGNS remains superior to SVD. We conjecture that this stems from the weighted nature of SGNS's factorization. <s> BIB004
In order to understand the reasons behind the emergence and development of word embeddings, we think two topics are of utmost importance, namely the vector space model and statistical language modelling. The vector space model is important inasmuch as it underpins a large part of work on NLP; it allows for the use of mature mathematical theory (such as linear algebra and statistics) to support our work. Additionally, vector representations are required by a wide range of machine learning algorithms and methods which are used to help address NLP tasks. Modern research on word embeddings (particularly prediction-based models) has, to some extent, been borne out of attempts to make language modelling more efficient and more accurate. In fact, word embeddings (Bengio et al. (2003); Bengio and Senécal (2003); BIB002, to cite a few) have been treated as by-products of language models, and only after some time (arguably after BIB003) has the building of word embeddings been decoupled from the task of language modelling. Note that a link between prediction-based and count-based models has been suggested by BIB004, and that there are systematic studies on the performance of different weighting strategies and distance measures on word-context matrices, authored by BIB001. We give brief introductions to these two topics next.
Word Embeddings: A Survey <s> Statistical Language Modelling <s> Speech recognition is formulated as a problem of maximum likelihood decoding. This formulation requires statistical models of the speech production process. In this paper, we describe a number of statistical models for use in speech recognition. We give special attention to determining the parameters for such models from sparse data. We also describe two decoding methods, one appropriate for constrained artificial languages and one appropriate for more realistic decoding tasks. To illustrate the usefulness of the methods described, we review a number of decoding results that have been obtained with them. <s> BIB001 </s> Word Embeddings: A Survey <s> Statistical Language Modelling <s> The description of a novel type of m-gram language model is given. The model offers, via a nonlinear recursive procedure, a computation and space efficient solution to the problem of estimating probabilities from sparse data. This solution compares favorably to other proposed methods. While the method has been developed for and successfully implemented in the IBM Real Time Speech Recognizers, its generality makes it applicable in other areas where the problem of estimating probabilities from sparse data arises. <s> BIB002 </s> Word Embeddings: A Survey <s> Statistical Language Modelling <s> We address the problem of predicting a word from previous words in a sample of text. In particular, we discuss n-gram models based on classes of words. We also discuss several statistical algorithms for assigning words to classes based on the frequency of their co-occurrence with other words. We find that we are able to extract classes that have the flavor of either syntactically based groupings or semantically based groupings, depending on the nature of the underlying statistics. <s> BIB003 </s> Word Embeddings: A Survey <s> Statistical Language Modelling <s> Maximum entropy models are considered by many to be one of the most promising avenues of language modeling research. Unfortunately, long training times make maximum entropy research difficult. We present a speedup technique: we change the form of the model to use classes. Our speedup works by creating two maximum entropy models, the first of which predicts the class of each word, and the second of which predicts the word itself. This factoring of the model leads to fewer nonzero indicator functions, and faster normalization, achieving speedups of up to a factor of 35 over one of the best previous techniques. It also results in typically slightly lower perplexities. The same trick can be used to speed training of other machine learning techniques, e.g. neural networks, applied to any problem with a large number of outputs, such as language modeling. <s> BIB004 </s> Word Embeddings: A Survey <s> Statistical Language Modelling <s> The supremacy of n-gram models in statistical language modelling has recently been challenged by parametric models that use distributed representations to counteract the difficulties caused by data sparsity. We propose three new probabilistic language models that define the distribution of the next word in a sequence given several preceding words by using distributed representations of those words. We show how real-valued distributed representations for words can be learned at the same time as learning a large set of stochastic binary hidden features that are used to predict the distributed representation of the next word from previous distributed representations. 
Adding connections from the previous states of the binary hidden features improves performance as does adding direct connections between the real-valued distributed representations. One of our models significantly outperforms the very best n-gram models. <s> BIB005 </s> Word Embeddings: A Survey <s> Statistical Language Modelling <s> We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance. <s> BIB006
Statistical language models are probabilistic models of the distribution of words in a language. For example, they can be used to calculate the likelihood of the next word given the words immediately preceding it (its context). One of their earliest uses has been in the field of speech recognition (BIB001), to aid in correctly recognizing words and phrases in sound signals that have been subjected to noise and/or faulty channels. In the realm of textual data, such models are useful in a wide range of NLP tasks, as well as in related tasks such as information retrieval. While a full probabilistic model containing the likelihood of every word given all possible word contexts that may arise in a language is clearly intractable, it has been empirically observed that satisfactory results are obtained using a context size as small as 3 words (BIB004). A simple mathematical formulation of such a model over a sequence of T words follows:

$$P(w_1^T) = \prod_{t=1}^{T} P(w_t \mid w_1^{t-1})$$

where $w_t$ is the t-th word and $w_i^T$ refers to the sequence of words from $w_i$ to $w_T$, i.e. $(w_i, w_{i+1}, w_{i+2}, \ldots, w_T)$. $P(w_t \mid w_1^{t-1})$ refers to the fraction of times $w_t$ appears after the sequence $w_1^{t-1}$; an n-gram model truncates each such context to the previous n-1 words. Actual prediction of the next word given a context is done via maximum likelihood estimation (MLE) over all words in the vocabulary. Some problems reported with these models (Bengio et al. (2003)) have been the high dimensionality involved in calculating discrete joint distributions of words with vocabulary sizes in the order of 100,000 words, and difficulties in generalizing the model to word sequences not present in the training set. Early attempts at mitigating these effects, particularly those related to generalization to unseen phrases, include the use of smoothing, e.g. pretending every new sequence has count one rather than zero in the training set (this is referred to as add-one or Laplace smoothing), and backing off to increasingly shorter contexts when longer contexts are not available (BIB002). Another strategy, which reduces the number of calculations needed and helps with generalization, is the clustering of words into so-called classes (cf. the now famous Brown Clustering, BIB003). Finally, neural networks (Bengio et al. (2003); Bengio and Senécal (2003); BIB006) and log-linear models (BIB005; Mikolov et al. (2013b,c)) have also been used to train language models (giving rise to so-called neural language models), delivering better results, as measured by perplexity.
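To make the preceding formulation concrete, here is a minimal sketch of such a model: a bigram (n = 2) language model with add-one smoothing and MLE prediction, written over a toy corpus of our own choosing.

```python
from collections import Counter

# Toy corpus; a real model would be trained on a large text collection.
corpus = "the cat sat on the mat the cat ate".split()
vocab = sorted(set(corpus))

bigram_counts = Counter(zip(corpus, corpus[1:]))
unigram_counts = Counter(corpus)

def p_next(word, context):
    """P(word | context) with add-one (Laplace) smoothing over the vocabulary."""
    return (bigram_counts[(context, word)] + 1) / (unigram_counts[context] + len(vocab))

def predict_next(context):
    """MLE prediction: the vocabulary word maximizing P(word | context)."""
    return max(vocab, key=lambda w: p_next(w, context))

print(predict_next("the"))  # -> 'cat' for this toy corpus
```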
Word Embeddings: A Survey <s> Word Embeddings <s> Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block. Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches. In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings. The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts. <s> BIB001 </s> Word Embeddings: A Survey <s> Word Embeddings <s> Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition. <s> BIB002 </s> Word Embeddings: A Survey <s> Word Embeddings <s> Most existing word embedding methods can be categorized into Neural Embedding Models and Matrix Factorization (MF)-based methods. However some models are opaque to probabilistic interpretation, and MF-based methods, typically solved using Singular Value Decomposition (SVD), may incur loss of corpus information. In addition, it is desirable to incorporate global latent factors, such as topics, sentiments or writing styles, into the word embedding model. Since generative models provide a principled way to incorporate latent factors, we propose a generative word embedding model, which is easy to interpret, and can serve as a basis of more sophisticated latent factor models. The model inference reduces to a low rank weighted positive semidefinite approximation problem. Its optimization is approached by eigendecomposition on a submatrix, followed by online blockwise regression, which is scalable and avoids the information loss in SVD. In experiments on 7 common benchmark datasets, our vectors are competitive to word2vec, and better than other MF-based methods. <s> BIB003
As mentioned before, word embeddings are fixed-length vector representations for words. There are multiple ways to obtain such representations, and this section explores various approaches to training word embeddings, detailing how they work and where they differ from each other. Word embeddings are commonly (BIB001; BIB002; BIB003) categorized into two types, depending upon the strategies used to induce them. Methods which leverage local data (e.g. a word's context) are called **prediction-based** models, and are generally reminiscent of neural language models. On the other hand, methods that use global information, generally corpus-wide statistics such as word counts and frequencies, are called **count-based** models. We describe both types next.
Word Embeddings: A Survey <s> Morin and Bengio 2005 <s> The supremacy of n-gram models in statistical language modelling has recently been challenged by parametric models that use distributed representations to counteract the difficulties caused by data sparsity. We propose three new probabilistic language models that define the distribution of the next word in a sequence given several preceding words by using distributed representations of those words. We show how real-valued distributed representations for words can be learned at the same time as learning a large set of stochastic binary hidden features that are used to predict the distributed representation of the next word from previous distributed representations. Adding connections from the previous states of the binary hidden features improves performance as does adding direct connections between the real-valued distributed representations. One of our models significantly outperforms the very best n-gram models. <s> BIB001 </s> Word Embeddings: A Survey <s> Morin and Bengio 2005 <s> Neural probabilistic language models (NPLMs) have been shown to be competitive with and occasionally superior to the widely-used n-gram language models. The main drawback of NPLMs is their extremely long training and testing times. Morin and Bengio have proposed a hierarchical language model built around a binary tree of words, which was two orders of magnitude faster than the non-hierarchical model it was based on. However, it performed considerably worse than its non-hierarchical counterpart in spite of using a word tree created using expert knowledge. We introduce a fast hierarchical language model along with a simple feature-based algorithm for automatic construction of word trees from the data. We then show that the resulting models can outperform non-hierarchical neural models as well as the best n-gram models. <s> BIB002
Morin and Bengio 2005: full softmax prediction is replaced by a more efficient binary tree approach, where only binary decisions at each node on the path to the target word are needed (neural net, hierarchical softmax). Reports a speed-up with respect to Bengio and Senécal (2003), over three times as fast during training and 100 times as fast during testing, but at a slightly lower score (perplexity).

BIB001: among other models, the log-bilinear model is introduced here. Log-bilinear models are neural networks with a single, linear hidden layer (BIB002).
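A minimal sketch of the binary-tree idea follows; the tree layout, node vectors and word-to-path mapping below are toy assumptions of ours, not the authors' construction. The probability of a word is the product of sigmoid-scored binary decisions along its path, so only about log2(|V|) node scores are needed instead of |V|.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical setup: each inner node of the binary tree owns a vector.
rng = np.random.default_rng(0)
dim = 8
node_vecs = {"root": rng.normal(size=dim), "left": rng.normal(size=dim)}

# Toy word-to-path mapping: (node, direction) pairs, +1 = go left, -1 = go right.
paths = {
    "cat": [("root", +1), ("left", +1)],
    "dog": [("root", +1), ("left", -1)],
    "sat": [("root", -1)],
}

def p_word(word, context_vec):
    """P(word | context) as a product of binary decisions along the tree path."""
    p = 1.0
    for node, direction in paths[word]:
        p *= sigmoid(direction * node_vecs[node].dot(context_vec))
    return p

ctx = rng.normal(size=dim)
print(sum(p_word(w, ctx) for w in paths))  # sums to 1.0 over the leaves
```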
Word Embeddings: A Survey <s> Log-linear Model <s> In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition. As an alternative to an importance sampling method proposed to speed-up training, we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition. The hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the WordNet semantic hierarchy. <s> BIB001 </s> Word Embeddings: A Survey <s> Log-linear Model <s> Neural probabilistic language models (NPLMs) have been shown to be competitive with and occasionally superior to the widely-used n-gram language models. The main drawback of NPLMs is their extremely long training and testing times. Morin and Bengio have proposed a hierarchical language model built around a binary tree of words, which was two orders of magnitude faster than the non-hierarchical model it was based on. However, it performed considerably worse than its non-hierarchical counterpart in spite of using a word tree created using expert knowledge. We introduce a fast hierarchical language model along with a simple feature-based algorithm for automatic construction of word trees from the data. We then show that the resulting models can outperform non-hierarchical neural models as well as the best n-gram models. <s> BIB002
First appearance of the log-linear model, which is a simpler model, much faster, and slightly outscores the model from Bengio et al. (2003).

BIB002: the authors train the log-bilinear model using hierarchical softmax, as suggested in BIB001, but the word tree is learned rather than obtained from external sources (log-linear model, hierarchical softmax). Reports being 200 times as fast as previous log-bilinear models.
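As a rough sketch of the log-bilinear idea (a minimal rendering of ours, not the authors' exact formulation): the context word vectors are combined linearly, via one mixing matrix per context position, into a predicted representation, which is then scored against every candidate word vector by a dot product.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab_size, dim, context_len = 100, 16, 3

R = rng.normal(size=(vocab_size, dim))          # word representations
C = rng.normal(size=(context_len, dim, dim))    # one mixing matrix per position
b = np.zeros(vocab_size)                        # per-word biases

def next_word_distribution(context_ids):
    # Predicted representation: linear combination of the context word vectors.
    r_hat = sum(C[i] @ R[w] for i, w in enumerate(context_ids))
    scores = R @ r_hat + b                      # dot-product score per word
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()                      # softmax over the vocabulary

probs = next_word_distribution([5, 42, 7])
print(probs.shape, probs.sum())                 # (100,) 1.0
```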
Word Embeddings: A Survey <s> Log-linear Model, Hierarchical Softmax <s> In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition. As an alternative to an importance sampling method proposed to speed-up training, we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition. The hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the WordNet semantic hierarchy. <s> BIB001 </s> Word Embeddings: A Survey <s> Log-linear Model, Hierarchical Softmax <s> The supremacy of n-gram models in statistical language modelling has recently been challenged by parametric models that use distributed representations to counteract the difficulties caused by data sparsity. We propose three new probabilistic language models that define the distribution of the next word in a sequence given several preceding words by using distributed representations of those words. We show how real-valued distributed representations for words can be learned at the same time as learning a large set of stochastic binary hidden features that are used to predict the distributed representation of the next word from previous distributed representations. Adding connections from the previous states of the binary hidden features improves performance as does adding direct connections between the real-valued distributed representations. One of our models significantly outperforms the very best n-gram models. <s> BIB002 </s> Word Embeddings: A Survey <s> Log-linear Model, Hierarchical Softmax <s> Neural probabilistic language models (NPLMs) have been shown to be competitive with and occasionally superior to the widely-used n-gram language models. The main drawback of NPLMs is their extremely long training and testing times. Morin and Bengio have proposed a hierarchical language model built around a binary tree of words, which was two orders of magnitude faster than the non-hierarchical model it was based on. However, it performed considerably worse than its non-hierarchical counterpart in spite of using a word tree created using expert knowledge. We introduce a fast hierarchical language model along with a simple feature-based algorithm for automatic construction of word trees from the data. We then show that the resulting models can outperform non-hierarchical neural models as well as the best n-gram models. <s> BIB003 </s> Word Embeddings: A Survey <s> Log-linear Model, Hierarchical Softmax <s> We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. 
All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance. <s> BIB004 </s> Word Embeddings: A Survey <s> Log-linear Model, Hierarchical Softmax <s> We present a new estimation principle for parameterized statistical models. The idea is to perform nonlinear logistic regression to discriminate between the observed data and some artificially generated noise, using the model log-density function in the regression nonlinearity. We show that this leads to a consistent (convergent) estimator of the parameters, and analyze the asymptotic variance. In particular, the method is shown to directly work for unnormalized models, i.e. models where the density function does not integrate to one. The normalization constant can be estimated just like any other parameter. For a tractable ICA model, we compare the method with other estimation methods that can be used to learn unnormalized models, including score matching, contrastive divergence, and maximum-likelihood where the normalization constant is estimated with importance sampling. Simulations show that noise-contrastive estimation offers the best trade-off between computational and statistical efficiency. The method is then applied to the modeling of natural images: We show that the method can successfully estimate a large-scale two-layer model and a Markov random field. <s> BIB005 </s> Word Embeddings: A Survey <s> Log-linear Model, Hierarchical Softmax <s> The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. ::: ::: An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. <s> BIB006 </s> Word Embeddings: A Survey <s> Log-linear Model, Hierarchical Softmax <s> Continuous space language models have recently demonstrated outstanding results across a variety of tasks. In this paper, we examine the vector-space word representations that are implicitly learned by the input-layer weights. We find that these representations are surprisingly good at capturing syntactic and semantic regularities in language, and that each relationship is characterized by a relation-specific vector offset. This allows vector-oriented reasoning based on the offsets between words. 
For example, the male/female relationship is automatically learned, and with the induced vector representations, “King Man + Woman” results in a vector very close to “Queen.” We demonstrate that the word vectors capture syntactic regularities by means of syntactic analogy questions (provided with this paper), and are able to correctly answer almost 40% of the questions. We demonstrate that the word vectors capture semantic regularities by using the vector offset method to answer SemEval-2012 Task 2 questions. Remarkably, this method outperforms the best previous systems. <s> BIB007 </s> Word Embeddings: A Survey <s> Log-linear Model, Hierarchical Softmax <s> We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities. <s> BIB008 </s> Word Embeddings: A Survey <s> Log-linear Model, Hierarchical Softmax <s> This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore~CPU, and classify half a million sentences among~312K classes in less than a minute. <s> BIB009
(Table 1, continued:) Trained on DistBelief, which is the precursor to TensorFlow. Reports better results than SGNS. Embeddings are also reported to be good for composition (into sentence and document embeddings).

An early obstacle tackled was the partition function, or normalization factor, required by softmax output layers, such as those in neural network language models (NNLMs). Softmax output layers are used when training neural networks that must predict one of many outputs, in this case the probability of each word in the vocabulary being the next word, given the context. Using a concept called importance sampling, Bengio and Senécal (2003) managed to bypass calculation of the costly normalization factor, estimating instead the gradients in the neural net using an auxiliary distribution (e.g. old n-gram language models) and sampling random examples from the vocabulary. They report gains of a factor of 19 in training time with respect to the previous model, with similar scores (as measured by perplexity).

A little later, Morin and Bengio (2005) suggested yet another approach for speeding up training and testing times, using a hierarchical softmax layer (to our knowledge, this is the first time the term word embedding was used in this context). They realized that, if one arranged the output words in a hierarchical binary tree structure, one could use, as a proxy for calculating the full distribution for each word, the probability that, at each node on the path leading to the word, the correct branch is chosen. Since the height of a balanced binary tree over a vocabulary V is log(|V|), this replaces |V| output computations with log(|V|) binary decisions, a potential speedup by a factor of |V|/log(|V|), i.e. exponential in nature. In practice, gains were less pronounced, but they still managed gains of a factor of 3 in training times and 100 in testing times, w.r.t. the model using importance sampling.

BIB002 were probably the first authors to suggest the Log-bilinear Model (LBL; see Appendix A for more information), which has been very influential in later works as well. Another article, by BIB003, can be seen as an extension of the LBL model, using a slightly modified version of the hierarchical softmax scheme proposed by BIB001, yielding the so-called Hierarchical Log-bilinear Model (HLBL). Whereas BIB001 used a prebuilt word tree from WordNet, BIB003 learned such a tree specifically for the task at hand. In addition to other minor optimizations, they report large gains over previous LBL models (200 times as fast) and conclude that using purpose-built word trees was key to such results.

Somewhat parallel to the works just mentioned, BIB004 approached the problem from a slightly different angle: they were the first to design a model with the specific intent of learning embeddings only. In previous models, embeddings were just treated as an interesting by-product of the main task (usually a language model). In addition to this, they introduced two improvements worth mentioning. First, they used words' full contexts (before and after) to predict the centre word, whereas previous models, focused on building language models, used just the left context. Perhaps most importantly, they introduced a cleverer way of leveraging unlabelled data for producing good embeddings: instead of training a language model (which is not the objective here), they expanded the dataset with false or negative examples, i.e. sequences of words with the actual centre word replaced by a random word from the vocabulary, and simply trained a model that could tell positive (actually occurring) examples from false ones. This strategy has since been called negative sampling (BIB006) and speeds up training because one can avoid costly operations such as calculating cross-entropies and softmax terms.

Here we should mention two specific contributions by Mikolov et al. (2009, 2010), which have been used in later models. In the first work (Mikolov et al. (2009)), a two-step method for bootstrapping a NNLM was suggested, whereby a first model was trained using a single word as context.
Then, the full model (with larger context) was trained, using as initial embeddings those found by the first step. In Mikolov et al. (2010), the idea of using Recurrent Neural Networks (RNNs) to train language models is first suggested; the argument is that RNNs keep state in their hidden layers, helping the model remember arbitrarily long contexts, so that one would not need to decide, beforehand, how many words to use as context on either side.

In 2012, Mnih and Teh suggested further efficiency gains to the training of NNLMs by leveraging Noise-contrastive Estimation (NCE). NCE (BIB005) is a way of estimating probability distributions by means of binary decisions over true/false examples. This enabled the authors to further reduce training times for NNLMs. In addition to faster training times, they also report better perplexity scores w.r.t. previous neural language models.

It could be said that, in 2013, with BIB007, BIB008 and BIB006, the NLP community has again (the main other example being BIB004) had its attention drawn to word embeddings as a topic worthy of research in and of itself. These authors analyzed the embeddings obtained by training a recurrent neural network model (Mikolov et al. (2010)) with an eye to finding syntactic regularities possibly encoded in the vectors. Perhaps surprisingly, even for the authors themselves, they found not only syntactic but also semantic regularities in the data. Many common relationships, such as male-female and singular-plural, actually correspond to arithmetical operations one can perform on word vectors (see Figure 1 for an example).

A little later, in 2013b and 2013c, Mikolov et al. introduced two models for learning embeddings, namely the continuous bag-of-words (CBOW) and skip-gram (SG) models. Both are log-linear models (as seen in previous works) and use the two-step training procedure (Mikolov et al. (2009)). The main difference between CBOW and SG lies in the loss function used to update the model: while CBOW trains a model that aims to predict the centre word based upon its context, in SG the roles are reversed, and the centre word is, instead, used to predict each word appearing in its context. The first versions of CBOW and SG (BIB008) use hierarchical softmax layers, while the variants suggested in BIB006 (published under the popular Word2Vec toolkit, https://code.google.com/archive/p/word2vec/) use negative sampling instead. Furthermore, these variants introduced subsampling of frequent words, to reduce the amount of noise due to overly frequent words and to accelerate training. The variants were shown to perform better, with faster training times.
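A minimal sketch of the negative-sampling update follows; vocabulary size, dimensionality, learning rate and the uniform noise distribution are toy assumptions of ours. For each observed (centre, context) pair, the score of the true pair is pushed up and the scores of k randomly drawn negative words are pushed down.

```python
import numpy as np

rng = np.random.default_rng(2)
vocab_size, dim, k, lr = 1000, 50, 5, 0.025

W_in = rng.normal(scale=0.1, size=(vocab_size, dim))   # centre-word vectors
W_out = rng.normal(scale=0.1, size=(vocab_size, dim))  # context-word vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(centre, context):
    """One SGD step on a single (centre, context) pair with k negatives."""
    negatives = rng.integers(0, vocab_size, size=k)    # toy uniform noise distribution
    v = W_in[centre].copy()
    grad_v = np.zeros(dim)
    for word, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = W_out[word]
        g = sigmoid(v @ u) - label                     # d(logistic loss)/d(score)
        grad_v += g * u
        W_out[word] -= lr * g * v
    W_in[centre] -= lr * grad_v

sgns_step(centre=3, context=17)
```

In the actual Word2Vec implementation, negatives are drawn from a unigram distribution raised to the 3/4 power rather than uniformly, and frequent words are subsampled before training.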
Among the most recent contributions to prediction-based models for word embeddings, one can cite the two articles usually cited as the sources of the FastText toolkit (BIB009; https://research.fb.com/projects/fasttext/), made available by Facebook, Inc. They suggest an improvement over the skip-gram model from BIB006, whereby one learns not word embeddings but character n-gram embeddings (which can be composed to form words). The rationale behind this decision lies in the fact that languages that rely heavily on morphology and compositional word-building (such as Turkish, Finnish and other highly inflected languages) have some information encoded in the word parts themselves, which can be used to help generalize to unseen words. They report better results w.r.t. SGNS (the skip-gram variant with negative sampling, BIB006), particularly in languages such as German, French and Spanish. A structured comparison of prediction-based models for building word embeddings can be seen in Table 1.
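To illustrate the subword idea, here is a minimal sketch of how a word can be decomposed into character n-grams (with the boundary markers and the 3-to-6 n-gram range described in the FastText papers); the word vector is then the sum of the vectors of these n-grams.

```python
def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams with boundary markers, as used for subword embeddings."""
    marked = f"<{word}>"
    grams = [marked[i:i + n]
             for n in range(n_min, n_max + 1)
             for i in range(len(marked) - n + 1)]
    return grams + [marked]  # the whole word is kept as an extra "gram"

print(char_ngrams("where", n_min=3, n_max=3))
# ['<wh', 'whe', 'her', 'ere', 're>', '<where>']
```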
Word Embeddings: A Survey <s> Count-based Models <s> A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. initial tests find this completely automatic method for retrieval to be promising. <s> BIB001 </s> Word Embeddings: A Survey <s> Count-based Models <s> We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance. <s> BIB002 </s> Word Embeddings: A Survey <s> Count-based Models <s> Neural probabilistic language models (NPLMs) have been shown to be competitive with and occasionally superior to the widely-used n-gram language models. The main drawback of NPLMs is their extremely long training and testing times. Morin and Bengio have proposed a hierarchical language model built around a binary tree of words, which was two orders of magnitude faster than the non-hierarchical model it was based on. However, it performed considerably worse than its non-hierarchical counterpart in spite of using a word tree created using expert knowledge. We introduce a fast hierarchical language model along with a simple feature-based algorithm for automatic construction of word trees from the data. We then show that the resulting models can outperform non-hierarchical neural models as well as the best n-gram models. <s> BIB003 </s> Word Embeddings: A Survey <s> Count-based Models <s> The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. ::: ::: An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". 
Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. <s> BIB004 </s> Word Embeddings: A Survey <s> Count-based Models <s> Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition. <s> BIB005
As mentioned before, count-based models are another way of producing word embeddings: not by training algorithms that predict the next word given its context (as is the case in language modelling) but by leveraging word-context co-occurrence counts globally in a corpus. These are very often represented (Turney and Pantel (2010)) as word-context matrices.

The earliest relevant example of leveraging word-context matrices to produce word embeddings is, of course, Latent Semantic Analysis (LSA) (BIB001), where SVD is applied to a term-document matrix. This solution was initially envisioned to help with information retrieval. While one is probably more interested in document vectors in IR, it is also possible to obtain word vectors this way; one just needs to look at the rows (rather than the columns) of the factorized matrix.

A little later, the Hyperspace Analogue to Language (HAL) was introduced. The strategy can be described as follows: for each word in the vocabulary, analyze all contexts it appears in and calculate the co-occurrence count between the target word and each context word, inversely proportional to the distance from the context word to the target word. The authors report good results (as measured by analogy tasks), with an optimal context window size of 8.

The original HAL model did not apply any normalization to the word co-occurrence counts found; therefore, very common words like "the" contribute disproportionately to all words that co-occur with them. This was found to be a problem, and the COALS method was introduced, with normalization strategies to factor out such frequency differences between words. Instead of using raw counts, its authors suggest it is better to consider the conditional co-occurrence, i.e. how much more likely a word a is to co-occur with word b than it is to co-occur with a random word from the vocabulary. They report better results than previous methods, using the SVD-factorized variant.

A somewhat different alternative is the Low Rank Multi-View Learning (LR-MVL) method. In short, it is an iterative algorithm where embeddings are derived by leveraging Canonical Correlation Analysis (CCA) between the left and right contexts of a given word.
One interesting feature of this model is that, when the embeddings are used for downstream NLP tasks, they are concatenated with the embeddings of their context words too, yielding better results. The authors report gains over other matrix factorization methods, as well as over neural embeddings, on many NLP tasks.

Lebret and Collobert (2013) have also contributed to count-based models by suggesting that a Hellinger PCA transformation be applied to the word-context matrix instead. Results are reported to be better than those of previous count-based models such as LR-MVL, and of neural embeddings such as those by BIB002 and the HLBL of Mnih and Hinton (2008).

The last model we will cover in this section is the well-known GloVe (BIB005). This model starts from the insight that ratios of co-occurrences, rather than raw counts, encode actual semantic information about a pair of words. This relationship is used to derive a suitable loss function for a log-linear model, which is then trained to maximize the similarity of every word pair, as measured by the ratios of co-occurrences mentioned earlier. The authors report better results than other count-based models, as well as prediction-based models such as SGNS (BIB004), in tasks such as word analogy and NER (named entity recognition).

A structured comparison of count-based models for building word embeddings can be seen in Table 2, whose rows can be summarized as follows:

- BIB001: LSA is introduced. Singular value decomposition (SVD) is applied to a term-document matrix. Used mostly for IR, but can also be used to build word embeddings.
- HAL: the whole corpus is scanned one word at a time, with a context window around each word, to collect weighted word-word co-occurrence counts, building a word-word co-occurrence matrix. Reported an optimal context size of 8.
- COALS: an improved version of HAL, using normalization procedures to stop very common terms from overly affecting co-occurrence counts. The optimal variant used SVD factorization. Reports gains over HAL, LSA (BIB001) and other methods.
- LR-MVL: uses CCA (Canonical Correlation Analysis) between left and right contexts to induce word embeddings. Reports gains over C&W embeddings (BIB002), HLBL (BIB003) and other methods, over many NLP tasks.
- Lebret and Collobert 2013: applies a modified version of Principal Component Analysis (Hellinger PCA) to the word-context matrix. Embeddings can be tuned before being used in actual NLP tasks. Also reports gains over C&W embeddings, HLBL and other methods, over many NLP tasks.
- BIB005: introduces GloVe, a log-linear model trained to encode semantic relationships between words as vector offsets in the learned vector space, using the insight that co-occurrence ratios, rather than raw counts, are the actual conveyors of word meaning. Reports gains over all previous count-based models and also over SGNS (BIB004), in multiple NLP tasks.
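As a concrete illustration of the general recipe shared by several of these count-based models, here is a minimal sketch of ours (toy corpus, window size and dimensionality) that builds a word-word co-occurrence matrix, reweights it with positive PMI, and factorizes it with truncated SVD to obtain dense word vectors:

```python
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
window = 2

# Word-word co-occurrence counts within a symmetric window.
C = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            C[idx[w], idx[corpus[j]]] += 1

# Positive PMI reweighting: log of observed vs. expected co-occurrence.
total = C.sum()
p_w = C.sum(axis=1) / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((C / total) / np.outer(p_w, p_w))
ppmi = np.maximum(pmi, 0)
ppmi[~np.isfinite(ppmi)] = 0.0  # zero out any -inf/NaN left by empty cells

# Truncated SVD: keep the top-k dimensions as the embedding.
U, S, _ = np.linalg.svd(ppmi)
k = 4
embeddings = U[:, :k] * S[:k]
print(embeddings.shape)  # (len(vocab), 4)
```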
Word Embeddings: A Survey <s> Conclusion <s> Passage retrieval is an important component common to many question answering systems. Because most evaluations of question answering systems focus on end-to-end performance, comparison of common components becomes difficult. To address this shortcoming, we present a quantitative evaluation of various passage retrieval algorithms for question answering, implemented in a framework called Pauchok. We present three important findings: Boolean querying schemes perform well in the question answering task. The performance differences between various passage retrieval algorithms vary with the choice of document retriever, which suggests significant interactions between document retrieval and passage retrieval. The best algorithms in our evaluation employ density-based measures for scoring query terms. Our results reveal future directions for passage retrieval and question answering. <s> BIB001 </s> Word Embeddings: A Survey <s> Conclusion <s> If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize.com/projects/wordreprs/ <s> BIB002 </s> Word Embeddings: A Survey <s> Conclusion <s> We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model's ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines. <s> BIB003
Word embeddings have been found to be very useful for many NLP tasks, including but not limited to Chunking BIB002 , Question Answering BIB001 , Parsing and Sentiment Analysis BIB003 . We have here outlined some of the main works and approaches used so far to derive these embeddings, using both prediction-based models, which model the probability of the next word given a sequence of words (as is the case with language models), and count-based models, which leverage global co-occurrence statistics in word-context matrices. Many of the suggested advances seen in the literature have been incorporated into widely used toolkits, such as Word2Vec, gensim, FastText, and GloVe, resulting in ever more accurate and faster word embeddings, ready to be used in NLP tasks.
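As a quick illustration of how readily these toolkits expose pretrained embeddings, the following minimal Python sketch uses gensim's downloader API to fetch GloVe vectors and query them; the model identifier is one of the names in gensim's downloader catalogue, and network access is assumed on first use.

import gensim.downloader as api

# Download (first use only) and load 100-dimensional GloVe vectors
# trained on Wikipedia + Gigaword; returns a KeyedVectors object.
wv = api.load("glove-wiki-gigaword-100")

# Nearest neighbors by cosine similarity in the embedding space.
print(wv.most_similar("computer", topn=5))

# The classic analogy test: king - man + woman ~ queen.
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))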
Word Embeddings: A Survey <s> The link between prediction-based and count-based models <s> The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. ::: ::: An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. <s> BIB001 </s> Word Embeddings: A Survey <s> The link between prediction-based and count-based models <s> We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by Mikolov et al., and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant. We find that another embedding method, NCE, is implicitly factorizing a similar matrix, where each cell is the (shifted) log conditional probability of a word given its context. We show that using a sparse Shifted Positive PMI word-context matrix to represent words improves results on two word similarity tasks and one of two analogy tasks. When dense low-dimensional vectors are preferred, exact factorization with SVD can achieve solutions that are at least as good as SGNS's solutions for word similarity tasks. On analogy questions SGNS remains superior to SVD. We conjecture that this stems from the weighted nature of SGNS's factorization. <s> BIB002
For example, BIB002 have suggested that the SGNS model BIB001 is actually equivalent to implicitly factorizing a slightly modified word-context matrix, whose cells are weighted using PMI (pointwise mutual information) statistics. Insight into what links the two families of models may yield further advances in both areas.
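Concretely, BIB002 show that, at the optimum of the SGNS objective, the dot product between a word vector and a context vector recovers the PMI of the pair shifted by a global constant, namely the number of negative samples k:

$$ \vec{w} \cdot \vec{c} = \mathrm{PMI}(w, c) - \log k = \log \frac{P(w, c)}{P(w)\,P(c)} - \log k $$

so SGNS can be read as a weighted factorization of a shifted PMI word-context matrix, which is precisely what ties the prediction-based method back to its count-based counterparts.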
Word Embeddings: A Survey <s> Composing word embeddings for higher-level entities <s> This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore~CPU, and classify half a million sentences among~312K classes in less than a minute. <s> BIB001
While research on how to compose word vectors to represent higher-level entities such as sentences and documents is not altogether new (generally under the name of distributional compositionality), recent works have adapted solutions specifically for neural word embeddings: we can cite here Paragraph2Vec (Le and Mikolov (2014)), Skip-Thought Vectors (Kiros et al. (2015)) and also FastText itself BIB001 .
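The simplest such composition strategy, and a surprisingly strong baseline, is to average the embeddings of the words in a sentence; this mean-of-vectors representation is also essentially what FastText's classifier builds on. A minimal sketch using numpy, assuming wv is any word-to-vector mapping with gensim's KeyedVectors interface (such as the GloVe vectors loaded earlier):

import numpy as np

def sentence_embedding(tokens, wv):
    # Average the vectors of all in-vocabulary tokens; fall back to
    # a zero vector if no token is known to the model.
    vecs = [wv[t] for t in tokens if t in wv]
    if not vecs:
        return np.zeros(wv.vector_size)
    return np.mean(vecs, axis=0)

emb = sentence_embedding("the cat sat on the mat".split(), wv)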
Survey on Automatic Vehicle Number Plate Localization <s> Detection of License Plate Using Edge Information <s> This paper proposes an approach to developing an automatic license plate recognition system. Car images are taken from various positions outdoors. Because of the variations of angles from the camera to the car, license plates have various locations and rotation angles in an image. In the license plate detection phase, the magnitude of the vertical gradients is used to detect candidate license plate regions. These candidate regions are then evaluated based on three geometrical features: the ratio of width and height, the size and the orientation. The last feature is defined by the major axis. In the character recognition phase, we must detect character features that are non-sensitive to the rotation variations. The various rotated character images of a specific character can be normalized to the same orientation based on the major axis of the character image. The crossing counts and peripheral background area of an input character image are selected as the features for rotation-free character recognition. Experimental results show that the license plates detection method can correctly extract all license plates from 102 car images taken outdoors and the rotation-free character recognition method can achieve an accuracy rate of 98.6%. <s> BIB001 </s> Survey on Automatic Vehicle Number Plate Localization <s> Detection of License Plate Using Edge Information <s> In this paper we detect the number on vehicle plates in the input image. We use simple color conversion ,edge detection and connector measurement technique. Throughout the whole work, we use masking and smoothing operation. The median filter is used as one of the operators. The best results can be obtained by getting the value of connector components more than 17.Various other methods have been proposed so far but we here present simplest of all and with lesser complexity to detect the numbers. The image is stored in the form of a matrix and the output is displayed in the form of detected numbers. The crux of the work is to use Sobel Edge Detection technique. We present a detection algorithm that employs a novel image descriptor and detects license plate. Using covariance matrix for the same purpose to find out statistical and spatial properties could lead to complexity(3) that arises due to neural network. Instead of these methods and techniques, we have used filter convolution and masking operation to detect the number out of the image(as in vehicle's number plate).To minimize complexity( as in covariance matrix), local variance scores have been used and the unique coefficients have been restructured into a feature vector form and multi-layer neural network(3- 4),(8). Since no explicit similarity or distance computation is required in this framework, it is very easily possible to keep the computational load of the detection process low. Moreover, the complexity involved is very less as compared to that in template matching done by using genetic algorithms(5-6)and neural networks. In the current work, first of all the input image is converted into its corresponding RGB format (2)and appropriate filters are applied onto it. In order to smoothen the edges, the technique of convolution is used. Thereafter, the connected components (mx) are detected. The crux of the work lies in extracting exactly all the characters in the number plate when the number of connected components are more than number 17. 
Finally, the image is stored in the form of a matrix and the output is displayed in the form of detected numbers. <s> BIB002 </s> Survey on Automatic Vehicle Number Plate Localization <s> Detection of License Plate Using Edge Information <s> Automatic License Plate Recognition (ALPR) is a challenging area of research due to its importance to variety of commercial applications. ALPR systems are widely implemented for automatic ticketing of vehicles at car parking area, tracking vehicles during traffic signal violations and related applications with huge saving of human energy and cost. The overall problem may be subdivided into three distinct key modules: (a) localization of license plate from vehicle image, (b) segmentation of the characters within the license plate and (c) recognition of segmented characters within the license plate. The main function of the module (a) is to find out the potential regions within the image that may contain the license plate. The function of module (b) is to isolate the foreground characters from the background within the detected license plate region. And the function of the module (c) is to recognize the segments in terms of known characters or digits. Though modules (b) and (c) employ most of the traditional methods available to the technologists, module (a) i.e. localization of potential license plate regions(s) from vehicle images is the most challenging task due to the huge variations in size, shape, color, texture and spatial orientations of license plate regions in such images. In general, objective of any ALPR system is to localize potential license plate region(s) from the vehicle images captured through a road-side camera and interpret the segmented characters present therein using an Optical Character Recognition (OCR) system, to get the license number of the vehicle. Again, an ALPR system can have two varieties: on-line ALPR system and off-line ALPR system. In an online ALPR system, the localization and interpretation of license plates take place instantaneously from the incoming video frames, enabling real-time tracking of moving vehicles through the surveillance camera. On the other hand, an offline ALPR system captures the vehicle images and stores them in a centralized data server for further processing, i.e. for interpretation of vehicle license plates. The objective of the current work falls under the second category of ALPR system. In this work, real time vehicle images are captured from a road-side surveillance camera automatically throughout day and night. The images are stored in a centralized data server. A never ending process takes the stored images sequentially and interprets the license number of the vehicle. An innovative idea using statistical distribution of the vertical edges is used for localization of license plate, connected component labeling is used for segmentation of the characters and template matching using an innovative matching technique is used for recognition of the characters. The performance of the system is measured at the three levels, i.e. localization level, segmentation level and recognition level and the result seems to be quite satisfactory. <s> BIB003 </s> Survey on Automatic Vehicle Number Plate Localization <s> Detection of License Plate Using Edge Information <s> In this paper, we present a new design flow for robust license plate localization and recognition. 
The algorithm consists of three stages: 1) license plate localization; 2) character segmentation; and 3) feature extraction and character recognition. The algorithm uses Mexican hat operator for edge detection and Euler number of a binary image for identifying the license plate region. A pre-processing step using median filter and contrast enhancement is employed to improve the character segmentation performance in case of low resolution and blur images. A unique feature vector comprised of region properties, projection data and reflection symmetry coefficient has been proposed. Back propagation artificial neural network classifier has been used to train and test the neural network based on the extracted feature. A thorough testing of algorithm is performed on a database with varying test cases in terms of illumination and different plate conditions. Practical considerations like existence of another text block in an image, presence of dirt or shadow on or near license plate region, license plate with rows of characters and sensitivity to license plate dimensions have been addressed. The results are encouraging with success rate of 98.10% for license plate localization and 97.05% for character recognition. <s> BIB004
In this method, the license plate is detected using edge detection techniques; the license plate characters are detected in car images taken outdoors from various positions. Initially, preprocessing is done using a median filter. The magnitude of the vertical gradients is used to detect the plate region BIB001 . The Sobel operator is used for edge detection BIB003 , as it is computationally inexpensive and reasonably robust to noise; the Sobel masks are shown in the accompanying figure BIB002 . After the edges in the image are detected, the candidate area is evaluated based on geometrical features such as the ratio of width to height (also known as the aspect ratio), size, and orientation. Figure 5 shows the different stages of license plate detection using the Sobel edge detection technique. The standard aspect ratio of car number plates is between 1 and 2 for a multi-line character set and between 3 and 6.5 for a single-line character set. In BIB004 , the Mexican hat operator, which performs smoothing before extracting edges, is used for edge detection. This method is simple and straightforward for detecting the plate region but, even though it is reasonably robust to noise, it cannot produce good results for images with complex backgrounds.
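The pipeline described above maps almost directly onto OpenCV primitives. The following Python sketch is illustrative only: the kernel sizes, threshold choice, and morphological window are assumptions rather than values taken from the surveyed papers, while the 3 to 6.5 aspect-ratio test for single-line plates follows the figures quoted in the text.

import cv2

def locate_plate_candidates(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # preprocessing: median filter
    # Gradient along x responds to the vertical strokes of plate characters.
    grad = cv2.convertScaleAbs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))
    _, binary = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # A wide closing kernel merges the character edges into plate-shaped blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h)
        # Single-line plates: aspect ratio between 3 and 6.5 (per the survey).
        if 3.0 <= aspect <= 6.5:
            candidates.append((x, y, w, h))
    return candidates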
Survey on Automatic Vehicle Number Plate Localization <s> Method Principle Pros <s> This paper proposes an approach to developing an automatic license plate recognition system. Car images are taken from various positions outdoors. Because of the variations of angles from the camera to the car, license plates have various locations and rotation angles in an image. In the license plate detection phase, the magnitude of the vertical gradients is used to detect candidate license plate regions. These candidate regions are then evaluated based on three geometrical features: the ratio of width and height, the size and the orientation. The last feature is defined by the major axis. In the character recognition phase, we must detect character features that are non-sensitive to the rotation variations. The various rotated character images of a specific character can be normalized to the same orientation based on the major axis of the character image. The crossing counts and peripheral background area of an input character image are selected as the features for rotation-free character recognition. Experimental results show that the license plates detection method can correctly extract all license plates from 102 car images taken outdoors and the rotation-free character recognition method can achieve an accuracy rate of 98.6%. <s> BIB001 </s> Survey on Automatic Vehicle Number Plate Localization <s> Method Principle Pros <s> Detecting the region of a license plate is the key component of the vehicle license plate recognition (VLPR) system. A new method is adopted in this paper to analyze road images which often contain vehicles and extract LP from natural properties by finding vertical and horizontal edges from vehicle region. The proposed vehicle license plate detection (VLPD) method consists of three main stages: (1) a novel adaptive image segmentation technique named as sliding concentric windows (SCWs) used for detecting candidate region; (2) color verification for candidate region by using HSI color model on the basis of using hue and intensity in HSI color model verifying green and yellow LP and white LP, respectively; and (3) finally, decomposing candidate region which contains predetermined LP alphanumeric character by using position histogram to verify and detect vehicle license plate (VLP) region. In the proposed method, input vehicle images are commuted into grey images. Then the candidate regions are found by sliding concentric windows. We detect VLP region which contains predetermined LP color by using HSI color model and LP alphanumeric character by using position histogram. Experimental results show that the proposed method is very effective in coping with different conditions such as poor illumination, varied distances from the vehicle and varied weather. <s> BIB002
Method: Using edge information BIB001 [8]. Principle: the area around the plate region is rectangular. Pros: easy and straightforward.
Method: Using morphology [12]. Principle: the shape of the plate. Pros: easy to implement.
Method: Using sliding concentric windows BIB002 [14].
Survey on Automatic Vehicle Number Plate Localization <s> CONCLUSION & FUTURE WORK <s> This paper presents an approach to license plate localization and recognition. A proposed method is designed to perform recognition of any kind of license plates under any environmental conditions. The main assumption of this method is the ability of recognition of all license plates which can be found in an individual picture. To solve the problem of localization of a license plate two independent methods are used. The first one was based on the connected components analysis and the second one searches for the “signature” of the license plate at the image. Segmentation of characters is performed by using vertical projection of license plate’s image. However, a simple neural network is used to recognize them. Finally, to separate correct license plates from other captions in the picture, during the license plate recognition process, a syntax analysis is used. The proposed approach is discussed together with results obtained on a benchmark data set of license plate pictures. In this paper examples of correct and incorrect results are also presented, as well as possible practical applications of proposed method. <s> BIB001 </s> Survey on Automatic Vehicle Number Plate Localization <s> CONCLUSION & FUTURE WORK <s> Automatic license plate recognition (LPR) plays an important role in numerous applications and a number of techniques have been proposed. However, most of them worked under restricted conditions, such as fixed illumination, limited vehicle speed, designated routes, and stationary backgrounds. In this study, as few constraints as possible on the working environment are considered. The proposed LPR technique consists of two main modules: a license plate locating module and a license number identification module. The former characterized by fuzzy disciplines attempts to extract license plates from an input image, while the latter conceptualized in terms of neural subjects aims to identify the number present in a license plate. Experiments have been conducted for the respective modules. In the experiment on locating license plates, 1088 images taken from various scenes and under different conditions were employed. Of which, 23 images have been failed to locate the license plates present in the images; the license plate location rate of success is 97.9%. In the experiment on identifying license number, 1065 images, from which license plates have been successfully located, were used. Of which, 47 images have been failed to identify the numbers of the license plates located in the images; the identification rate of success is 95.6%. Combining the above two rates, the overall rate of success for our LPR algorithm is 93.7%. <s> BIB002 </s> Survey on Automatic Vehicle Number Plate Localization <s> CONCLUSION & FUTURE WORK <s> etection of vehicle license plate is vital for identifying the vehicle because the license plate has unique information for each vehicle. However, in India, vehicle license plate standards, though they exist, are rarely practiced. Large amount of variations are seen in the parameters of license plate like size of number plate, its location, background and foreground color, etc. which makes the task of number plate localization for recognition more difficult. This paper presents a Wavelet analysis based methodology for precise localization of Indian number plates. 
<s> BIB003 </s> Survey on Automatic Vehicle Number Plate Localization <s> CONCLUSION & FUTURE WORK <s> This paper presents a solution for the license plate recognition problem in residential community administrations in China. License plate images are pre-processed through gradation, middle value filters and edge detection. In the license plate localization module the number of edge points, the length of license plate area and the number of each line of edge points are used for localization. In the recognition module, the paper applies a statistical character method combined with a structure character method to obtain the characters. In addition, more models and template library for the characters which have less difference between each other are built. A character classifier is designed and a fuzzy recognition method is proposed based on the fuzzy decision-making method. Experiments show that the recognition accuracy rate is up to 92%. <s> BIB004
In this study, various methods for license plate localization in vehicle images are presented. With the rapid development of transportation technology, monitoring of vehicles is necessary for various purposes. The main step in any recognition system in this field is the localization or detection of the number plate in vehicle images; hence the overall correctness of the system depends on it. Due to varying environmental conditions and independent standards among countries, localization of the number plate remains a challenging problem worldwide.
Method: Using connected components BIB001 . Cons: may generate broken objects. Remedy: one or two co-operating methods needed.
Method: Using wavelets BIB003 . Cons: not directly suitable for skewed plates. Remedy: skew correction needed.
Method: Using color information BIB002 . Cons: sensitive to lighting conditions. Remedy: soft computing techniques needed.
Method: Using texture BIB004 . Cons: computationally complex for images with complex backgrounds.
Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> I. INTRODUCTION <s> We report the integration of a hybrid silicon evanescent waveguide photodetector with a hybrid silicon evanescent optical amplifier. The device operates at 1550 nm with a responsivity of 5.7 A/W and a receiver sensitivity of -17.5 dBm at 2.5 Gb/s. The transition between the passive silicon waveguide and the hybrid waveguide of the amplifier is tapered to increase coupling efficiency and to minimize reflections. <s> BIB001 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> I. INTRODUCTION <s> This article advocates the use of short-range wireless communication inside a computing chassis. Ultrawideband links make it possible to design a within-chassis wireless interconnect. In contrast to conventional, fixed, wireline connections between chips, wireless communications offer certain unique advantages, as the authors explain. <s> BIB002 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> I. INTRODUCTION <s> The wireless harness is a new emerging short-range communication to replace wire harness implemented in devices with wireless communications between internal components. In such an in-device wireless harness, the communication distance is up to a couple of meters at most. Through the challenge to apply this wireless communication technology to information and communication technology (ICT) equipment, we found that the radio channel inside the ICT equipment deeply depends on its internal structure more than we expected. In order to understand the radio propagation characteristics inside such equipment, we propose a new modeling technique with using a frequency- dependent path loss exponent expressing the near- and far-field propagation, which enables us to successfully extract attenuation factors for the frequency and the propagation distance from the measured data. The results shows the path loss characteristics can be divided into three regions; line-of-sight (LOS), non-line-of-sight (NLOS), and a transition range. The transition range, which appears between the LOS and the NLOS, is caused by a blocking due to the densely-packaged internal components. These findings and equations can be criteria to design a wireless harness communication link. <s> BIB003 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> I. INTRODUCTION <s> Conventional datacenters, based on wired networks, entail high wiring costs, suffer from performance bottlenecks, and have low resilience to network failures. In this paper, we investigate a radically new methodology for building wire-free datacenters based on emerging 60GHz RF technology. We propose a novel rack design and a resulting network topology inspired by Cayley graphs that provide a dense interconnect. Our exploration of the resulting design space shows that wireless datacenters built with this methodology can potentially attain higher aggregate bandwidth, lower latency, and substantially higher fault tolerance than a conventional wired datacenter while improving ease of construction and maintenance. 
<s> BIB004 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> I. INTRODUCTION <s> Current commercial systems-on-chips (SoCs) designs integrate an increasingly large number of predesigned cores and their number is predicted to increase significantly in the near future. For example, molecular-scale computing promises single or even multiple order-of-magnitude improvements in device densities. The network-on-chip (NoC) is an enabling technology for integration of large numbers of embedded cores on a single die. The existing method of implementing a NoC with planar metal interconnects is deficient due to high latency and significant power consumption arising out of long multi-hop links used in data exchange. The latency, power consumption and interconnect routing problems of conventional NoCs can be addressed by replacing or augmenting multi-hop wired paths with high-bandwidth single-hop long-range wireless links. This opens up new opportunities for detailed investigations into the design of wireless NoCs (WiNoCs) with on-chip antennas, suitable transceivers and routers. Moreover, as it is an emerging technology, the on-chip wireless links also need to overcome significant challenges pertaining to reliable integration. In this paper, we present various challenges and emerging solutions regarding the design of an efficient and reliable WiNoC architecture. <s> BIB005 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> I. INTRODUCTION <s> Effective manipulation of cavity resonant modes is crucial for emission control in laser physics and applications. Using the concept of parity-time symmetry to exploit the interplay between gain and loss (i.e., light amplification and absorption), we demonstrate a parity-time symmetry–breaking laser with resonant modes that can be controlled at will. In contrast to conventional ring cavity lasers with multiple competing modes, our parity-time microring laser exhibits intrinsic single-mode lasing regardless of the gain spectral bandwidth. Thresholdless parity-time symmetry breaking due to the rotationally symmetric structure leads to stable single-mode operation with the selective whispering-gallery mode order. Exploration of parity-time symmetry in laser physics may open a door to next-generation optoelectronic devices for optical communications and computing. <s> BIB006 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> I. INTRODUCTION <s> Hybrid silicon lasers based on bonded III–V layers on silicon are currently the best contenders for on-chip lasers for silicon photonics. On-chip silicon light sources are highly desired for use as electrical-to-optical converters in silicon-based photonics. Zhiping Zhou and Bing Yin of Peking University in China and Jurgen Michel of Massachusetts Institute of Technology assess the three main contenders for such light sources: erbium-based light sources, germanium-on-silicon lasers and III-V-based silicon lasers. They consider operating wavelength, pumping conditions, power consumption, thermal stability and fabrication process. The scientists regard the power efficiencies of electrically pumped erbium-based lasers as being too low and the threshold currents of germanium lasers as being too high. 
They conclude that III–V quantum dot lasers monolithically grown on silicon show the most promise for realizing on-chip lasers. <s> BIB007 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> I. INTRODUCTION <s> Integration of 2D semiconductor optoelectronics with silicon photonics opens a new path for on-chip point-to-point optical communications. <s> BIB008 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> I. INTRODUCTION <s> A 4-channel silicon photonics transceiver array for Point-to-Point (P2P) fiber-to-the-home (FTTH) optical networks at the central office (CO) side is demonstrated. A III-V O-band photodetector array was integrated onto the silicon photonic transmitter through transfer printing technology, showing a polarization-independent responsivity of 0.39 - 0.49 A/W in the O-band. The integrated PDs (30 × 40 μm2 mesa) have a 3 dB bandwidth of 11.5 GHz at −3 V bias. Together with high-speed C-band silicon ring modulators whose bandwidth is up to 15 GHz, operation of the transceiver array at 10 Gbit/s is demonstrated. The use of transfer printing for the integration of the III-V photodetectors allows for an efficient use of III-V material and enables the scalable integration of III-V devices on silicon photonics wafers, thereby reducing their cost. <s> BIB009 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> I. INTRODUCTION <s> In this paper, metallic plasmonic nano-antennas are modeled and analyzed for wireless optical communication. More specifically, a unified mathematical framework is developed to investigate the performance in transmission and reception of metallic nano-dipole antennas. This framework takes into account the metal properties, i.e., its dynamic complex conductivity and permittivity; the propagation properties of surface plasmon polariton waves on the nano-antenna, i.e., their confinement factor and propagation length; and the antenna geometry, i.e., length and radius. The generated plasmonic current in reception and the total radiated power and efficiency in transmission are analytically derived by utilizing the framework. In addition to numerical results, the analytical models are validated by means of simulations with COMSOL Multi-physics. The developed framework will guide the design and development of novel nano-antennas suited for wireless optical communication. <s> BIB010 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> I. INTRODUCTION <s> Integrating photonics with advanced electronics leverages transistor performance, process fidelity and package integration, to enable a new class of systems-on-a-chip for a variety of applications ranging from computing and communications to sensing and imaging. Monolithic silicon photonics is a promising solution to meet the energy efficiency, sensitivity, and cost requirements of these applications. In this review paper, we take a comprehensive view of the performance of the silicon-photonic technologies developed to date for photonic interconnect applications. We also present the latest performance and results of our "zero-change" silicon photonics platforms in 45 nm and 32 nm SOI CMOS. 
The results indicate that the 45 nm and 32 nm processes provide a "sweet-spot" for adding photonic capability and enhancing integrated system applications beyond the Moore-scaling, while being able to offload major communication tasks from more deeply-scaled compute and memory chips without complicated 3D integration approaches. <s> BIB011 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> I. INTRODUCTION <s> The field of terahertz integrated technology has undergone significant development in the past ten years. This has included work on different substrate technologies such as III–V semiconductors and silicon, work on field-effect transistor devices and heterojunction bipolar devices, and work on both fully electronic and hybrid electronic–photonic systems. While approaches in electronic and photonics can often seem distinct, techniques have blended in the terahertz frequency range and many emerging systems can be classified as photonics-inspired or hybrid. Here, we review the development of terahertz integrated electronic and hybrid electronic–photonic systems, examining, in particular, advances that deliver important functionalities for applications in communication, sensing and imaging. Many of the advances in integrated systems have emerged, not from improvements in single devices, but rather from new architectures that are multifunctional and reconfigurable and break the trade-offs of classical approaches to electronic system design. We thus focus on these approaches to capture the diversity of techniques and methodologies in the field. This Review Article examines the development of terahertz integrated electronic and hybrid electronic–photonic systems, considering, in particular, advances that deliver important functionalities for applications in communication, sensing and imaging. <s> BIB012 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> I. INTRODUCTION <s> Data access patterns that involve fine-grained sharing, multicasts, or reductions have proved to be hard to scale in shared-memory platforms. Recently, wireless on-chip communication has been proposed as a solution to this problem, but a previous architecture has used it only to speed-up synchronization. An intriguing question is whether wireless communication can be widely effective for ordinary shared data. This paper presents Replica, a manycore that uses wireless communication for communication-intensive ordinary data. To deliver high performance, Replica supports an adaptive wireless protocol and selective message dropping. We describe the computational patterns that leverage wireless communication, programming techniques to restructure applications, and tools that help with automation. Our results show that wireless communication is effective for ordinary data. For 64 cores, Replica obtains a mean speed-up of 1.76x over a conventional machine. The mean speed-up reaches 1.89x if approximate-computing transformations are enabled. The average energy consumption is substantially reduced by 34% (or 38% with approximate transformations), and the area increases only modestly. <s> BIB013
Constant downscaling of Radio-Frequency (RF) and optical circuits has recently opened the door to the design of transceivers and antennas that can be integrated within CMOS chips - BIB011 . Although higher integration was initially driven by a need to lower fabrication costs, recent times have seen the emergence of new wireless applications where the size of the RF front-end plays a critical role. These applications are enabled by advances in nanotechnology that continue to push the limits of miniaturization, leading to very compact wireless systems in the millimeter-wave (mmWave) (30-300 GHz), Terahertz (THz) (0.3-3 THz) and optical (infrared, 187-400 THz/750-1600 nm, and visible, 400-770 THz/390-750 nm) bands. RF technology has indeed reached a point where tens or even hundreds of transceivers and antennas can be integrated within a computing system. This makes it possible to establish wireless links between the modules within a data center BIB004 , the different components of a printer BIB003 or a desktop computer BIB002 , and even the processors and memory within a single chip BIB013 . In the extreme downscaling cases, the Wireless Network-on-Chip (WNoC) paradigm stands out BIB005 , in which low-latency, broadcast-capable chip-scale links are established to distribute data shared among the processor cores of a multiprocessor. Beyond RF technology, major progress in the field of integrated silicon photonics - BIB008 has similarly led to miniature lasers BIB007 , BIB006 , photodetectors BIB001 , BIB009 and optical antennas - BIB010 that enable on- and off-chip optical wireless interconnects. As an intermediate step between RF and optics, the THz band is also being considered as an enabling technology for WNoC. While the technology is not as mature as RF or silicon photonics, the THz technology gap is progressively being closed through complementary electronic, photonic and plasmonic approaches BIB012 - . Obviously, the adoption of wireless communications in such highly integrated environments poses significant challenges in diverse aspects including transceiver front-end integration, optimal antenna placement, interference management, and data modulation and coding, in addition to protocol design. Furthermore, such aspects are highly dependent on the chosen frequency of operation. For example, the much longer communication distance of RF links and their intrinsic ability to support information broadcasting comes at the cost of higher multi-user interference and lower data rates, whereas the higher data rate of optical links comes at the cost of challenging broad- and multi-casting and the need for relaying. Interestingly, beyond the technology, many of these aspects arise from the nature of the wireless channel. Surprisingly, though, a proper characterization of the wireless channel at the chip scale and across the spectrum is missing. Several fundamental differences between traditional wireless networking scenarios and WNoC motivate a tailored study of wave propagation and channel modeling . Compared to the majority of wireless networking scenarios, in which the communication nodes and/or the environment are usually mobile or change over time, in WNoC the entire communication environment is static. As a result, the channel can be deterministically characterized and then utilized to guide the design of optimized communication solutions.
For example, waveforms can be designed to overcome the fixed frequency-selective response resulting from multi-path propagation through a lossy medium, or spatial multiplexing strategies can be designed to minimize multi-user interference. In this paper, we perform a comprehensive survey of existing channel modeling efforts at the chip scale. We first review the most common chip package environments in single- and multi-chip configurations and the fundamental electromagnetics at the chip scale in Section II. The common methodologies and approaches to channel modeling for chip-scale networks are reviewed in Section III. We then survey the different works in the literature that have studied the wireless channel at the chip scale, differentiating between mmWave approaches in Section IV, THz band efforts in Section V, and optical-wireless models in Section VI. Finally, we present a summary of the main challenges and prospects for the field of channel characterization in chip-scale environments in Section VII and conclude the paper in Section VIII.
Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> B. ENVIRONMENT DESCRIPTION <s> This paper describes yield, contact resistance, and preliminary reliability test results on micro-bump C4 interconnects in modules containing Si-chips and Si-carriers. Modules containing eutectic PbSn or SnCu bump solders were fabricated with high yield, with similar interconnect contact resistances for both solders. The contact resistance and reliability test results to date suggest that reliable, high-current, high-density bump interconnections can be achieved for Si-carrier technology. <s> BIB001 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> B. ENVIRONMENT DESCRIPTION <s> In this paper we present the results of an extensive ultra-wideband (UWB) measurement campaign performed inside the chassis of two desktop computers. The purpose of the campaign is to analyze the possibility of board-to-board communications, replacing cable connections. Measurements of the propagation channel are performed over a frequency range of 3.1 - 10.6 GHz using a vector network analyzer and antennas small enough to enable integration on a circuit board. The results show that the propagation environment is very uniform, with small variations in the path gain between different positions within a computer. We also performed interference measurements, showing that the interference is restricted to certain subbands. <s> BIB002 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> B. ENVIRONMENT DESCRIPTION <s> This article advocates the use of short-range wireless communication inside a computing chassis. Ultrawideband links make it possible to design a within-chassis wireless interconnect. In contrast to conventional, fixed, wireline connections between chips, wireless communications offer certain unique advantages, as the authors explain. <s> BIB003 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> B. ENVIRONMENT DESCRIPTION <s> This paper presents the design of a 60 GHz antenna to be used in multi-core multi-chip (MCMC) computing systems. The antenna in package (AiP) solution has a ground-shielded metal and a periodically-patched artificial magnetic conductor (AMC) structure to widen the reflection coefficient bandwidth. The designed antenna with AMC layer broadcasts signals in the horizontal direction. Both simulated and measured results demonstrate that a $-10~{\hbox {dB}}$ reflection coefficient is achieved for a 10 GHz bandwidth and that radiation in the horizontal (chip-to-chip) direction is maintained. <s> BIB004 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> B. ENVIRONMENT DESCRIPTION <s> A 60 GHz wireless communication system has been proposed as a replacement for data cables in a data-center cabinet in order to reduce the significant cooling energy costs. This paper investigates the feasibility of placing adaptive absorber-reflector regions on the sides of a cabinet as a static channel fading counter-measure and also to optimize performance for a particular server. 
As a result of a ray-tracing simulation it is shown that the capacity could be increased by 2.8 bits/s/Hz at an SNR of 20 dB at the median value. <s> BIB005 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> B. ENVIRONMENT DESCRIPTION <s> This paper presents the channel measurements performed within a closed metal cabinet at 60 GHz covering the frequency range 57–62 GHz. Two different volumes of an empty metal cupboard are considered to emulate the environment of interest (an industrial machine). Furthermore, we have considered a number of scenarios such as line of sight, non line of sight, and placing absorbers. A statistical channel model is provided to aid short-range wireless link design within such a reflective and confined environment. Based on the measurements, the large- and small-scale parameters are extracted and fitted using the standard log-normal and Saleh–Valenzuela models, respectively. The obtained results are characterized by a very small path loss exponent, a single cluster phenomenon, and a significantly large root-mean-square (RMS) delay spread. The results show that covering a wall with absorber material dramatically reduces the RMS delay spread. Finally, the proposed channel model is validated by comparing the measured channel with a simulated channel, where the simulated channel is generated from the extracted parameters. <s> BIB006 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> B. ENVIRONMENT DESCRIPTION <s> Historically, improvements in GPU-based high performance computing have been tightly coupled to transistor scaling. As Moore's law slows down, and the number of transistors per die no longer grows at historical rates, the performance curve of single monolithic GPUs will ultimately plateau. However, the need for higher performing GPUs continues to exist in many domains. To address this need, in this paper we demonstrate that package-level integration of multiple GPU modules to build larger logical GPUs can enable continuous performance scaling beyond Moore's law. Specifically, we propose partitioning GPUs into easily manufacturable basic GPU Modules (GPMs), and integrating them on package using high bandwidth and power efficient signaling technologies. We lay out the details and evaluate the feasibility of a basic Multi-Chip-Module GPU (MCM-GPU) design. We then propose three architectural optimizations that significantly improve GPM data locality and minimize the sensitivity on inter-GPM bandwidth. Our evaluation shows that the optimized MCM-GPU achieves 22.8% speedup and 5x inter-GPM bandwidth reduction when compared to the basic MCM-GPU architecture. Most importantly, the optimized MCM-GPU design is 45.5% faster than the largest implementable monolithic GPU, and performs within 10% of a hypothetical (and unbuildable) monolithic GPU. Lastly we show that our optimized MCM-GPU is 26.8% faster than an equally equipped Multi-GPU system with the same total number of SMs and DRAM bandwidth. <s> BIB007 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> B. ENVIRONMENT DESCRIPTION <s> Demand for increasing performance is far outpacing the capability of traditional methods for performance scaling. Disruptive solutions are needed to advance beyond incremental improvements. 
Traditionally, processors reside inside packages to enable PCB-based integration. We argue that packages reduce the potential memory bandwidth of a processor by at least one order of magnitude, allowable thermal design power (TDP) by up to 70%, and area efficiency by a factor of 5 to 18. Further, silicon chips have scaled well while packages have not. We propose packageless processors - processors where packages have been removed and dies directly mounted on a silicon board using a novel integration technology, Silicon Interconnection Fabric (Si-IF). We show that Si-IF-based packageless processors outperform their packaged counterparts by up to 58% (16% average), 136%(103% average), and 295% (80% average) due to increased memory bandwidth, increased allowable TDP, and reduced area respectively. We also extend the concept of packageless processing to the entire processor and memory system, where the area footprint reduction was up to 76%. <s> BIB008 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> B. ENVIRONMENT DESCRIPTION <s> In this paper a measurement campaign in a real Data Centre at 300 GHz and recent results are presented. The measurements are performed with a UWB sub-mmWave channel sounder and classified in general characterisation, top-of-rack and intra-rack measurements. The individual measurement setups as well as the methodology are explained. In a first step, the measurements are evaluated regarding the path attenuation, the power delay profile (PDP) and the power angular spectrum (PAS). The PDP as well as the PAS give comprehensible results, which are explained by the scenario’s geometry. The path attenuation shows reasonable results compared to the free space path loss and demonstrates that wireless communication at 300 GHz in a Data Centre is possible. <s> BIB009 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> B. ENVIRONMENT DESCRIPTION <s> This paper presents the characterization of Terahertz (THz) wireless channel inside a desktop size metal box with focus on line-of-sight (LoS) and reflected-non-line-of-sight (RNloS) propagation. Measurements for LoS propagation inside the metal box show that path loss varies with respect to the transceiver’s height from the bottom wall, and for some heights, the path loss is lower than the free space value. By analyzing the relationship between the path loss and the antenna’s height, the results show that the first six modes of TE mode dominate the resonating modes inside the box. Also, the path loss analysis indicates that the resonating modes combined with the reflections happened inside the box should be responsible for the strong ripples on the path loss curve. Finally, the RNLoS measurements with dual-in-line-memory-module (DIMM) as the reflecting surface show that the differences between the average path losses measured inside the metal box and in free space are limited to 1 dB. <s> BIB010
The diverse possible architectures and environments of chip-scale wireless communications are surveyed as follows. First, without accounting for packaging, propagation occurs in two regions: (i) the intra-chip region, in which the waves radiated by the antenna travel through several layers of the chip; and (ii) the inter-chip region, in which the waves that have left the chip travel through the inter-chip space until they reach the boundaries of another chip. The layers and materials most relevant to propagation in both regions will eventually depend on the antenna position, frequency band, and choice of package. Second, multi-chip integration alternatives need to be considered as they impact inter-chip propagation. Currently, the integration of multiple chips can occur both vertically and horizontally. The former, 3D integration, consists of the stacking of thinned-down chips . Once stacked, the chips are generally interconnected through a forest of vertical Through-Silicon Vias (TSVs) with very fine pitch as shown in Figure 2 (a). This provides a huge bandwidth density and efficiency, yet at the cost of heat dissipation issues and low available interconnect area. The 2.5D integration, instead, takes a co-planar approach and interconnects chips through a common platform , BIB007 , generally either a silicon interposer or the system substrate. Such an arrangement alleviates the heat dissipation issue of 3D stacking and also increases the available area, as the limit is now set by the interposer or the system substrate. Due to the coarser pitch, the solution is cheaper but offers less interconnect bandwidth. Third, system-level packaging is tightly coupled to the multi-chip integration scheme and also impacts the inter-chip propagation. The lateral space between chips may be filled with materials providing mechanical stability, and the complete system may be enclosed with a common package lid or a metallic heat sink for better thermal performance . Fourth, it is crucial to understand packaging options at the chip level as they are relevant to the intra-chip propagation. Traditionally, flip-chip packaging and wire bonding have been the most common, although multiple custom variants and alternatives exist depending on the final application BIB008 , BIB004 . Flip-chip packaging is generally preferred in the multiprocessor context due to its lower inductance and higher power/bandwidth density BIB001 . In this configuration, chips are turned over and carefully connected to the system substrate or interposer through a set of solder bumps. The packaged chip then takes the canonical form presented in Fig. 3 , with the system heat sink and spreader material on top and a low-resistivity silicon substrate. The chip metallization layers are surrounded by an insulator, often silicon dioxide, which is located below the silicon . In wire bonding, the insulator is left facing up (open chip) and connected to the underlying package with bond wires. It is worth noting that the environment description given above is amenable to other similar scenarios such as embedded systems for intelligent metasurfaces , but not generalizable to all wireless scenarios within computing systems. For instance, several works have proposed models for the mmWave wireless channel within a rack BIB005 , between racks [37]- BIB009 or at the cabinet scale BIB006 in data centers. Some papers have also covered the channel at the motherboard scale in desktops or laptops BIB003 , BIB002 - BIB010 .
Although these scenarios bear structural resemblances to the wireless chip-scale environment, such models are not directly applicable here due to substantial differences in dimensions, materials, and antenna placement restrictions.
Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> 1) ANTENNAS <s> We present several on-chip antenna structures that may be fabricated with standard CMOS technology for use at millimeter wave frequencies. On-chip antennas for wireless personal area networks (WPANs) promise to reduce interconnection losses and greatly reduce wireless transceiver costs, while providing unprecedented flexibility for device manufacturers. We present the current state of research in on-chip integrated antennas, highlight several pitfalls and challenges for on-chip design, modeling, and measurement, and propose several antenna structures that derive from the microwave and HF communication fields. We also describe an experimental test apparatus for performing measurements on RFIC systems with on-chip antennas at The University of Texas at Austin. <s> BIB001 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> 1) ANTENNAS <s> Optical waveguide interconnects are a major component of chip-scale data processing and computational systems. Here, we propose an alternative mechanism based on optical wireless broadcasting links using nanoantennas, which may overcome some of the limitations of nanoscale waveguide interconnects. By properly loading and matching nanoantenna pairs with optical nanocircuits, we theoretically demonstrate a complete optical wireless link that, in spite of some radiation loss and mismatch factors, may exhibit much less absorption loss, largely outperforming regular plasmonic waveguide links. <s> BIB002
The miniaturization of the largest dimension of an antenna to meet the chip-scale size requirements imposes the use of very high communication frequencies , BIB001 . In broad terms, an antenna becomes resonant at a frequency at which its length corresponds to half of the wavelength. For example, a 1-mm-long antenna is expected to resonate at approximately 150 GHz, whereas a 150-µm-long antenna would do so at 1 THz. Moreover, while it had traditionally been very challenging to fabricate structures with sub-micrometric dimensions and nanometric precision, major progress in nanotechnologies has enabled the development of precise structures with dimensions comparable to optical wavelengths and, thus, enabled for the first time the fabrication of antennas that control the radiation of light in a similar way as traditional antennas do at lower frequencies - BIB002 . Other aspects, related both to the antenna building material (e.g., its conductivity) and to the antenna geometry (e.g., the proximity of a ground plane), further factor into the design of the antennas, especially in an environment as highly integrated as a WNoC. Table 1 summarizes the main characteristics of common on-chip antennas for free-space applications that have been proposed for use in the inter-/intra-chip communications domain. Moving to higher frequencies usually opens the door to communicating over much larger bandwidths. Traditional narrow-band antenna designs (e.g., dipole and patch antennas) commonly exhibit a bandwidth approaching 1% of their resonant frequency. Therefore, a few GHz of bandwidth are expected for a mmWave antenna and communication system, whereas tens or hundreds of GHz are supported by a THz antenna and even more by an optical nano-antenna. Moreover, ultra-broadband antenna designs (e.g., bowtie, log-periodic, spiral) offer bandwidths in excess of 10% of the carrier frequency. However, again in broad terms, the effective area of an antenna, i.e., its ability to convert the power of an incoming propagating electromagnetic wave into an electrical current in reception, scales with the square of the signal wavelength and thus decreases quadratically as the frequency increases. Interestingly, despite being an antenna-only phenomenon, this result is in fact often inaccurately described as a propagation loss and captured in the so-called free-space path-loss or Friis equation. There are ways to increase the effective area of an antenna, for example, by increasing its size directly (e.g., utilizing a length which is an odd multiple of half a wavelength) or indirectly (e.g., through lenses or reflectors), but such an increase in effective area, besides going against the original design criterion, i.e., the miniaturization of the antenna, leads to an increase in the antenna directivity: the antenna will then only efficiently receive an incoming EM wave from a given direction. This again has traditionally been inaccurately reported in many works by implying that "higher-frequency antennas are always directional."
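To make the effective-area argument explicit, recall two textbook antenna relations (standard results, not taken from any of the surveyed works). The effective area of an antenna with directivity D is

$$ A_e = \frac{\lambda^2}{4\pi} D $$

and inserting it into the link budget of two aligned antennas at distance d yields the Friis equation

$$ \frac{P_r}{P_t} = G_t G_r \left( \frac{\lambda}{4\pi d} \right)^2 $$

which shows that the quadratic dependence on wavelength commonly labeled as "free-space path loss" actually originates at the receiving antenna's aperture rather than in the propagation medium. For instance, an isotropic antenna (D = 1) at 150 GHz (λ = 2 mm) has A_e ≈ 0.32 mm², which shrinks to about 7.2 × 10⁻³ mm² at 1 THz (λ = 0.3 mm).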
Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> IV. CHANNEL MODELS FOR MILLIMETER WAVES <s> To further understand and explain the characteristics of integrated antennas in silicon substrates, an intuitive plane wave model is proposed. The model has been validated by quantitatively explaining the location of dips in the antenna gain versus frequency plots when a glass layer is inserted between a silicon wafer and a metal chuck using the interference effect between two propagating waves. These also convincingly demonstrated that the signal coupling between integrated antennas is due to wave phenomena rather than simple R-C coupling. Experiments have been carried out to characterize integrated dipole antennas. <s> BIB001 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> IV. CHANNEL MODELS FOR MILLIMETER WAVES <s> Inserting an aluminum nitride (AlN) layer which acts as a dielectric propagating medium between a silicon wafer containing integrated antennas and a metal chuck emulating the role of a heat sink improves the antenna power transmission gain by ~8 dB at 15 GHz. AlN, with its high thermal conductivity, also alleviates the heat removal problem. With a 760-µm AlN layer, an on-chip wireless connection is demonstrated over a 2.2-cm distance, which is 3× the previously reported separation. <s> BIB002 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> IV. CHANNEL MODELS FOR MILLIMETER WAVES <s> An intrachip wireless interconnect using integrated antennas is demonstrated in a flip-chip ball grid array package. The wireless interconnect consists of a transmitter-receiver pair, which is fabricated in a 0.18-µm CMOS process. A 15-GHz signal is generated and broadcasted across the integrated circuit. The signal is picked up by a receiver 4 mm away on the same integrated circuit and frequency divided by eight to produce a 1.875-GHz local clock signal. The interconnection is also demonstrated between a transmitting antenna and a packaged receiver 40 cm away from the transmitting antenna. Demonstration of intrachip wireless interconnects in a package has been considered the ultimate test for this technology. <s> BIB003 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> IV. CHANNEL MODELS FOR MILLIMETER WAVES <s> We present several on-chip antenna structures that may be fabricated with standard CMOS technology for use at millimeter wave frequencies. On-chip antennas for wireless personal area networks (WPANs) promise to reduce interconnection losses and greatly reduce wireless transceiver costs, while providing unprecedented flexibility for device manufacturers. We present the current state of research in on-chip integrated antennas, highlight several pitfalls and challenges for on-chip design, modeling, and measurement, and propose several antenna structures that derive from the microwave and HF communication fields. We also describe an experimental test apparatus for performing measurements on RFIC systems with on-chip antennas at The University of Texas at Austin. <s> BIB004 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> IV.
CHANNEL MODELS FOR MILLIMETER WAVES <s> An intra-chip wireless interconnect system using on-chip antennas and ultrawideband (UWB) radios that operates in 22-29 GHz is studied in this paper. The on-chip antennas are meander monopoles of axial length 1 mm in silicon technology. A unique wireless channel is formed between a pair of on-chip transmit and receive antennas. The channel is characterized up to an interconnect distance of 40 mm. The system performance is evaluated in terms of bit-error-rate (BER) under the assumptions of perfect system synchronization and signal corruption from thermal and switching noises. As expected, the system performance degrades with interconnect distance and data rate. It achieves a better BER on the 5-kΩ·cm Si substrate than that on the 10-Ω·cm Si substrate. <s> BIB005 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> IV. CHANNEL MODELS FOR MILLIMETER WAVES <s> A 43-GHz wireless inter-chip data link including antennas, transmitters, and receivers is presented. The industry standard bonding wires are exploited to provide high efficiency and low-cost antennas. This type of antennas can provide an efficient horizontal communication which is hard to achieve using conventional on-chip antennas. The system uses binary amplitude shift keying (ASK) modulation to keep the design compact and power efficient. The transmitter includes a differential to single-ended modulator and a two-stage power amplifier (PA). The receiver includes a low-noise amplifier (LNA), pre-amplifiers, envelope detectors (ED), a variable gain amplifier (VGA), and a comparator. The chip is fabricated in 180-nm SiGe BiCMOS technology. With power-efficient transceivers and low-cost high-performance antennas, the implemented inter-chip link achieves bit-error rate (BER) around 10⁻⁸ for 6 Gb/s over a distance of 2 cm. The signal-to-noise ratio (SNR) of the recovered signal is about 24 dB with 18 ps of rms jitter. The transmitter and receiver consume 57 mW and 60 mW, respectively, including buffers. The bit energy efficiency excluding test buffers is 17 pJ/bit. The presented work shows the feasibility of a low power high data rate wireless inter-chip data link and wireless heterogeneous multi-chip networks. <s> BIB006 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> IV. CHANNEL MODELS FOR MILLIMETER WAVES <s> Multicore platforms are emerging trends in the design of System-on-Chips (SoCs). Interconnect fabrics for these multicore SoCs play a crucial role in achieving the target performance. The Network-on-Chip (NoC) paradigm has been proposed as a promising solution for designing the interconnect fabric of multicore SoCs. But the performance requirements of NoC infrastructures in future technology nodes cannot be met by relying only on material innovation with traditional scaling. The continuing demand for low-power and high-speed interconnects with technology scaling necessitates looking beyond the conventional planar metal/dielectric-based interconnect infrastructures. Among different possible alternatives, the on-chip wireless communication network is envisioned as a revolutionary methodology, capable of bringing significant performance gains for multicore SoCs. Wireless NoCs (WiNoCs) can be designed by using miniaturized on-chip antennas as an enabling technology.
In this paper, we present design methodologies and technology requirements for scalable WiNoC architectures and evaluate their performance. It is demonstrated that WiNoCs outperform their wired counterparts in terms of network throughput and latency, and that energy dissipation improves by orders of magnitude. The performance of the proposed WiNoC is evaluated in presence of various traffic patterns and also compared with other emerging alternative NoCs. <s> BIB007 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> IV. CHANNEL MODELS FOR MILLIMETER WAVES <s> This paper presents the design of a 60 GHz antenna to be used in multi-core multi-chip (MCMC) computing systems. The antenna in package (AiP) solution has a ground-shielded metal and a periodically-patched artificial magnetic conductor (AMC) structure to widen the reflection coefficient bandwidth. The designed antenna with AMC layer broadcasts signals in the horizontal direction. Both simulated and measured results demonstrate that a −10 dB reflection coefficient is achieved for a 10 GHz bandwidth and that radiation in the horizontal (chip-to-chip) direction is maintained. <s> BIB008 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> IV. CHANNEL MODELS FOR MILLIMETER WAVES <s> SoC (System on chip) technology has rapidly developed in recent years, stimulating emerging research areas such as investigating the efficacy of wireless network interconnection within a single chip or between multiple chips. However the design of the on-chip antenna faces the challenge of obtaining high radiation efficiency and transmission gain due to conductive loss of the silicon substrate. A new on-chip propagation mechanism of radio waves, which takes advantage of the un-doped silicon layer, is developed in order to overcome this challenge. It was found that by properly designing the dimension of silicon wafer, the un-doped silicon layer is able to act like a waveguide. Most of the energy is directed to the approximately lossless undoped silicon layer of high resistivity instead of attenuating in the doped silicon substrate or radiating to the air. HFSS modeling and simulation results are provided to show that efficiency, gain and directivity of the on-chip antenna are greatly improved. In addition, this type of antennas can be easily reconfigured, which as a result, makes wireless SoCs with wireless interconnects or even a wireless network on PCB (Printed Circuit Board) possible. <s> BIB009 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> IV. CHANNEL MODELS FOR MILLIMETER WAVES <s> This letter demonstrates the feasibility of applying I/O pins as chip Tx/Rx antennas for implementing wireless inter/intra-chip communications (WIICs). An innovative printed circuit board (PCB) medium is presented as a signal propagation channel, which is specially bounded by a metamaterial electromagnetic wave absorber to improve electromagnetic environment pollution. Presented is a 20.4-GHz WIIC communication system, mainly including a transmitter and a receiver. The bit-error-rate (BER) performance of a coherent binary phase-shift keying interconnect system is evaluated. It is shown that the system performance degrades as the separation distance of the transceivers increases.
A data rate of 1 Gb/s with a BER at the level of 10⁻⁵ on the PCB investigated is achieved for the transmitted power of 10 dBm. <s> BIB010 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> IV. CHANNEL MODELS FOR MILLIMETER WAVES <s> We review the current state of the art on antennas for use in wireless networks on chips (WiNoCs) and also provide results on wireless channel characteristics in the WiNoC setting—the latter are largely absent from the literature. We first describe the motivation for constructing these miniature networks, aimed at improving efficiency of future multi-processor integrated circuits. We then discuss the implications for antennas: in addition to the usual antenna parameters for communication links (gain, impedance match, pattern), this includes important structural and multiple-access considerations. After a review of the literature and a summary of published antenna characteristics and future challenges, we present example results for a representative structure to illustrate antenna performance and WiNoC channel characteristics. <s> BIB011 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> IV. CHANNEL MODELS FOR MILLIMETER WAVES <s> We propose a novel antenna design enabled by 3-D printing technology for future wireless intrachip interconnects aiming at applications of multicore architectures and system-on-chips. In our proposed design we use vertical quarter-wavelength monopoles at 160 GHz on a ground plane to avoid low antenna radiation efficiency caused by the silicon substrate. The monopoles are surrounded by a specially designed dielectric property distribution. This additional degree of freedom in design enabled by 3-D printing technology is used to tailor the electromagnetic wave propagation. As a result, the desired wireless link gain is enhanced and the undesired spatial crosstalk is reduced. Simulation results show that the proposed dielectric loading approach improves the desired link gain by 8–15 dB and reduces the crosstalk by 9–23 dB from 155 to 165 GHz. As a proof-of-concept, a 60 GHz prototype is designed, fabricated, and characterized. Our measurement results match the simulation results and demonstrate 10–18 dB improvement of the desired link gain and 10–30 dB reduction in the crosstalk from 55 to 61 GHz. The demonstrated transmission loss of the desired link at a distance of 17 mm is only 15 dB, which is over 10 dB better than the previously reported work. <s> BIB012 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> IV. CHANNEL MODELS FOR MILLIMETER WAVES <s> A 60 GHz switched beam chip-to-chip antenna array is introduced. The array consists of four center fed circular patch elements with side vias in a 2×2 grid arrangement forming a planar array. The array is designed to fit on a typical multicore chip for reconfigurable interchip wireless communication. The array main beam is switched by changing the interelement phase shifts in the azimuth plane. The switching of the main beam is analyzed and verified through full-wave simulation. The design presented is an improvement over a previous design of a two-element antenna array. The Friis transmission equation with polarization components taken into account is used to model the interchip wireless link.
To verify the model, a transmission coefficient measurement is made between a pair of the two-element arrays separated by a 10 mm distance. Both simulated and measured radiation patterns of the two-element array are presented for use in the Friis equation to calculate the transmission coefficients. Full-wave simulation of the array pair is also performed. The calculated results obtained from the Friis model agree well with both the measured and full-wave simulation results. The Friis model is used to calculate both signal and interference levels. <s> BIB013 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> IV. CHANNEL MODELS FOR MILLIMETER WAVES <s> Wireless Network-on-Chip (WNoC) appears as a promising alternative to conventional interconnect fabrics for chip-scale communications. The WNoC paradigm has been extensively analyzed from the physical, network and architecture perspectives assuming mmWave band operation. However, there has not been a comprehensive study at this band for realistic chip packages and, thus, the characteristics of such wireless channel remain not fully understood. This work addresses this issue by accurately modeling a flip-chip package and investigating the wave propagation inside it. Through parametric studies, a locally optimal configuration for 60 GHz WNoC is obtained, showing that chip-wide attenuation below 32.6 dB could be achieved with standard processes. Finally, the applicability of the methodology is discussed for higher bands and other integrated environments such as a Software-Defined Metamaterial (SDM). <s> BIB014 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> IV. CHANNEL MODELS FOR MILLIMETER WAVES <s> Ubiquitous multicore processors nowadays rely on an integrated packet-switched network for cores to exchange and share data. The performance of these intra-chip networks is a key determinant of the processor speed and, at high core counts, becomes an important bottleneck due to scalability issues. To address this, several works propose the use of mm-wave wireless interconnects for intra-chip communication and demonstrate that, thanks to their low-latency broadcast and system-level flexibility, this new paradigm could break the scalability barriers of current multicore architectures. However, these same works assume 10+ Gb/s speeds and efficiencies close to 1 pJ/bit without a proper understanding on the wireless intra-chip channel. This paper first demonstrates that such assumptions do not hold in the context of commercial chips by evaluating losses and dispersion in them. Then, we leverage the system's monolithic nature to engineer the channel, this is, to optimize its frequency response by carefully choosing the chip package dimensions. Finally, we exploit the static nature of the channel to adapt to it, pushing efficiency-speed limits with simple tweaks at the physical layer. Our methods reduce the path loss and delay spread of a simulated commercial chip by 47 dB and 7.3x, respectively, enabling intra-chip wireless communications over 10 Gb/s and only 3.1 dB away from the dispersion-free case. <s> BIB015
The study of the wireless channel at the chip scale has mostly raised interest in the last decade with the advent of mmWave integrated antennas and compact transceivers. However, the works that provided the first rudimentary chip-scale channel models, dating back to the early 2000s, explored the use of lower frequencies. More specifically, Kenneth K. O's group at the University of Florida pioneered the field by unveiling the first measurements between integrated antennas located within the same chip in the 6-18 GHz band BIB002 , BIB003 , BIB001 . Those works not only showed the relatively high loss introduced by the channel (around 60 dB), but also discussed the potential effects of the chip package and the role of the dielectrics used for thermal management. The latter two aspects, however, were not investigated again until recently. Table 3 provides a comprehensive summary of the works that followed these pioneering efforts BIB002 , BIB003 , BIB001 . It can be observed that progress in mmWave integrated antennas BIB004 and pioneering works in WNoC BIB007 in the late 2000s renewed the interest in this area. Some works appeared in the 2007-2013 period, followed by a significant surge of papers from 2017 to date. Most efforts have been centered on the more mature bands between 20 GHz and 60 GHz BIB008 , BIB005 , BIB013 , BIB006 , BIB010 , BIB014 , , BIB009 , with some forays into frequencies over 100 GHz BIB011 , , BIB012 , BIB015 .
Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> A. FREQUENCY DOMAIN <s> To further understand and explain the characteristics of integrated antennas in silicon substrates, an intuitive plane wave model is proposed. The model has been validated by quantitatively explaining the location of dips in the antenna gain versus frequency plots when a glass layer is inserted between a silicon wafer and a metal chuck using the interference effect between two propagating waves. These also convincingly demonstrated that the signal coupling between integrated antennas is due to wave phenomena rather than simple R-C coupling. Experiments have been carried out to characterize integrated dipole antennas. <s> BIB001 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> A. FREQUENCY DOMAIN <s> Inserting an aluminum nitride (AlN) layer which acts as a dielectric propagating medium between a silicon wafer containing integrated antennas and a metal chuck emulating the role of a heat sink improves the antenna power transmission gain by ~8 dB at 15 GHz. AlN, with its high thermal conductivity, also alleviates the heat removal problem. With a 760-µm AlN layer, an on-chip wireless connection is demonstrated over a 2.2-cm distance, which is 3× the previously reported separation. <s> BIB002 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> A. FREQUENCY DOMAIN <s> This paper presents the design of a 60 GHz antenna to be used in multi-core multi-chip (MCMC) computing systems. The antenna in package (AiP) solution has a ground-shielded metal and a periodically-patched artificial magnetic conductor (AMC) structure to widen the reflection coefficient bandwidth. The designed antenna with AMC layer broadcasts signals in the horizontal direction. Both simulated and measured results demonstrate that a −10 dB reflection coefficient is achieved for a 10 GHz bandwidth and that radiation in the horizontal (chip-to-chip) direction is maintained. <s> BIB003 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> A. FREQUENCY DOMAIN <s> SoC (System on chip) technology has rapidly developed in recent years, stimulating emerging research areas such as investigating the efficacy of wireless network interconnection within a single chip or between multiple chips. However the design of the on-chip antenna faces the challenge of obtaining high radiation efficiency and transmission gain due to conductive loss of the silicon substrate. A new on-chip propagation mechanism of radio waves, which takes advantage of the un-doped silicon layer, is developed in order to overcome this challenge. It was found that by properly designing the dimension of silicon wafer, the un-doped silicon layer is able to act like a waveguide. Most of the energy is directed to the approximately lossless undoped silicon layer of high resistivity instead of attenuating in the doped silicon substrate or radiating to the air. HFSS modeling and simulation results are provided to show that efficiency, gain and directivity of the on-chip antenna are greatly improved.
In addition, this type of antennas can be easily reconfigured, which as a result, makes wireless SoCs with wireless interconnects or even a wireless network on PCB (Printed Circuit Board) possible. <s> BIB004 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> A. FREQUENCY DOMAIN <s> Hybrid wired-wireless Network-on-Chip (WiNoC) has emerged as an alternative solution to the poor scalability and performance issues of conventional wireline NoC design for future System-on-Chip (SoC). Existing feasible wireless solution for WiNoCs in the form of millimeter wave (mm-Wave) relies on free space signal radiation which has high power dissipation with high degradation rate in the signal strength per transmission distance. Moreover, over the lossy wireless medium, combining wireless and wireline channels drastically reduces the total reliability of the communication fabric. Surface wave has been proposed as an alternative wireless technology for low power on-chip communication. With the right design considerations, the reliability and performance benefits of the surface wave channel could be extended. In this paper, we propose a surface wave communication fabric for emerging WiNoCs that is able to match the reliability of traditional wireline NoCs. First, we propose a realistic channel model which demonstrates that existing mm-Wave WiNoCs suffers from not only free-space spreading loss (FSSL) but also molecular absorption attenuation (MAA), especially at high frequency band, which reduces the reliability of the system. Consequently, we employ a carefully designed transducer and commercially available thin metal conductor coated with a low cost dielectric material to generate surface wave signals with improved transmission gain. Our experimental results demonstrate that the proposed communication fabric can achieve a 5 dB operational bandwidth of about 60 GHz around the center frequency (60 GHz). By improving the transmission reliability of wireless layer, the proposed communication fabric can improve maximum sustainable load of NoCs by an average of 20.9 and 133.3 percent compared to existing WiNoCs and wireline NoCs, respectively. <s> BIB005 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> A. FREQUENCY DOMAIN <s> We propose a novel antenna design enabled by 3-D printing technology for future wireless intrachip interconnects aiming at applications of multicore architectures and system-on-chips. In our proposed design we use vertical quarter-wavelength monopoles at 160 GHz on a ground plane to avoid low antenna radiation efficiency caused by the silicon substrate. The monopoles are surrounded by a specially designed dielectric property distribution. This additional degree of freedom in design enabled by 3-D printing technology is used to tailor the electromagnetic wave propagation. As a result, the desired wireless link gain is enhanced and the undesired spatial crosstalk is reduced. Simulation results show that the proposed dielectric loading approach improves the desired link gain by 8–15 dB and reduces the crosstalk by 9–23 dB from 155 to 165 GHz. As a proof-of-concept, a 60 GHz prototype is designed, fabricated, and characterized. Our measurement results match the simulation results and demonstrate 10–18 dB improvement of the desired link gain and 10–30 dB reduction in the crosstalk from 55 to 61 GHz.
The demonstrated transmission loss of the desired link at a distance of 17 mm is only 15 dB, which is over 10 dB better than the previously reported work. <s> BIB006 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> A. FREQUENCY DOMAIN <s> Wireless Networks-on-Chip (WiNoC) are being explored for parallel applications to improve the performances by reducing the long distance/critical path communications. However, WiNoC still require precise propagation models to go beyond proof of concept and to demonstrate it can be considered as a realistic efficient alternative to wired NoC. In this paper, we present accurate 3D models based on measurements in Ka band and Electromagnetic (EM) simulations of transmission on silicon substrate in the V band and the Sub-THz band. Using these EM results, a time-domain simulation is performed using an On-Off Keying (OOK) modulation based transmission with different PA/LNA configurations. Our results highlight the type of performances and tradeoffs to be considered according to different parameters such as power output and amplifier's gain. By improving the knowledge about the signal propagation, one can conduct precise design space exploration for parallel applications. We discuss the realistic channel modeling and we present also hybrid solutions and associated limitations of WiNoC architectures. We conclude the paper with research directions to be explored to make WiNoC a reality. <s> BIB007 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> A. FREQUENCY DOMAIN <s> A 60 GHz switched beam chip-to-chip antenna array is introduced. The array consists of four center fed circular patch elements with side vias in a 2×2 grid arrangement forming a planar array. The array is designed to fit on a typical multicore chip for reconfigurable interchip wireless communication. The array main beam is switched by changing the interelement phase shifts in the azimuth plane. The switching of the main beam is analyzed and verified through full-wave simulation. The design presented is an improvement over a previous design of a two-element antenna array. The Friis transmission equation with polarization components taken into account is used to model the interchip wireless link. To verify the model, a transmission coefficient measurement is made between a pair of the two-element arrays separated by a 10 mm distance. Both simulated and measured radiation patterns of the two-element array are presented for use in the Friis equation to calculate the transmission coefficients. Full-wave simulation of the array pair is also performed. The calculated results obtained from the Friis model agree well with both the measured and full-wave simulation results. The Friis model is used to calculate both signal and interference levels. <s> BIB008 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> A. FREQUENCY DOMAIN <s> Wireless Network-on-Chip (WNoC) appears as a promising alternative to conventional interconnect fabrics for chip-scale communications. WNoC takes advantage of an overlaid network composed by a set of millimeter-wave antennas to reduce latency and increase throughput in the communication between cores.
Similarly, wireless inter-chip communication has been also proposed to improve the information transfer between processors, memory, and accelerators in multi-chip settings. However, the wireless channel remains largely unknown in both scenarios, especially in the presence of realistic chip packages. This work addresses the issue by accurately modeling flip-chip packages and investigating the propagation both in its interior and its surroundings. Through parametric studies, package configurations that minimize path loss are obtained and the trade-offs observed when applying such optimizations are discussed. Single-chip and multi-chip architectures are compared in terms of the path loss exponent, confirming that the amount of bulk silicon found in the pathway between transmitter and receiver is the main determinant of losses. <s> BIB009 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> A. FREQUENCY DOMAIN <s> Long range, low latency wireless links in Networks-on-Chip (NoCs) have been shown to be the most promising solution to provide high performance intra/inter-chip communication in many core era. Significant advancements have been made in design of both Wireless NoC (WNoC) topologies and transceiver circuits to support wireless communication at chip level. However, a comprehensive understanding of wireless physical layer and its impact on performance is still lacking. There is still a lot of scope for thorough analysis of the effects of the intra-chip wireless channel and antenna characteristics on signal transmission and link reliability in WNoCs. To this end, we analyse signal propagation through wireless channel by accurately modelling the intra-chip environment. We analyse the effects of antenna placement across chip plane and its directionality on the signal loss, delay and dispersion properties. The analysis shows that directional antenna exhibits better delay characteristics, while omnidirectional antennas have low loss for signal transmission in the channel. Furthermore, the placement of antenna shows considerable impact on channel characteristics due to reflections from chip edges and constructive or destructive interference between the multiple signal components. This work provides crucial insights into propagation characteristics of on-chip wireless links for better design of transceiver components and their performance. <s> BIB010
Frequency domain analysis has driven most of the efforts, highlighting the importance of path loss in the feasibility of chip-scale links. The pioneering works of BIB002 , BIB001 clearly showed that wireless links within standard chips and packages have very large attenuation, on the order of 50 dB or more over distances of several millimeters, which is a clear roadblock. Full-wave simulations of a standard flip-chip package, reproduced in Figure 6, confirmed that path loss can exceed 70 dB over a few centimeters. To put such figures in context, recent on-chip mmWave transceivers with reasonable efficiency (2 pJ/bit in ) considered an attenuation of 26.5 dB between transmitter and receiver. Subsequently, Zhang et al. tested high-resistivity silicon as a way to reduce losses induced by the lossy substrate. This method achieved improvements of around 20-30 dB and has been adopted by Yan and Hanson and by El-Masri et al. BIB007 , . It was further learnt from that interfering metallic structures, in the form of normal or parallel strips located between the transmitter and receiver, can enhance the band-pass characteristic of the intra-chip propagation channel, reducing losses by a few dB. Non-standard packages have also been proposed in an attempt to minimize or eliminate the impact of lossy silicon. For instance, the authors in BIB004 reduce attenuation to 15-30 dB using a layer of undoped silicon below the die substrate. Further, the works from Melde's group at the University of Arizona BIB003 , BIB008 resort to metamaterial-like structures in open-chip schemes to enhance the coupling of surface waves and reduce the amount of energy radiated away from the chip and into the silicon. Similarly, several variants of a vertical monopole partially or completely inserted within a dielectric waveguide have been proposed , , BIB005 , achieving outstanding results, at times with less than 10 dB of losses. Notably, Wu et al. BIB006 propose a 3D-printed optimized dielectric attempting to jointly optimize several links within a single package. Most of the efforts to reduce the path loss have achieved their objective by resorting to non-standard processes. The improvement brought by high-resistivity silicon comes at the cost of worse performance in digital circuits, whereas the use of some proposed waveguiding techniques adds several manufacturing and custom packaging steps, which is undesirable. Thus, there has also been renewed interest in solutions compatible with standard packages. In this direction, Timoneda et al. have evaluated the impact of optimizing the silicon and thermal interface material thicknesses in TSV-based links within a flip-chip package BIB009 , bringing chip-wide losses down to close to 30 dB (see Figure 6 for partial results). In that work, it was also discussed that the metallization layers within the chip or the interposer would likely block the signals due to the small pitch of such layers when compared to the transmission wavelength. Related to this, it has been demonstrated that the path loss exponent in the optimized cases is between 1 and 1.4, which confirms the waveguiding effect of the whole flip-chip package. Finally, it is worth noting that other aspects such as the antenna orientation or placement also have an impact BIB010 in ways compatible with standard packaging.
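The path loss exponents reported above can be turned into a quick feasibility estimate through the classical log-distance model. The short Python sketch below does so; the 25 dB reference loss at 1 mm is a hypothetical placeholder chosen only to illustrate the gap between free-space-like (n = 2) and waveguide-like (n close to 1) behavior, not a measured value from the cited works.

    import math

    def path_loss_db(d_mm, pl0_db, n, d0_mm=1.0):
        # Log-distance model: PL(d) = PL(d0) + 10*n*log10(d/d0).
        # n = 2 corresponds to free space; the flip-chip studies above
        # report n between 1 and 1.4, a sign of package waveguiding.
        return pl0_db + 10 * n * math.log10(d_mm / d0_mm)

    # Hypothetical reference loss of 25 dB at 1 mm, for illustration only
    for n in (1.0, 1.4, 2.0):
        print(f"n = {n}: {path_loss_db(20.0, 25.0, n):.1f} dB at 20 mm")

Under these assumptions, the chip-wide (20 mm) loss spans roughly 38 dB to 51 dB depending on the exponent, which is consistent with the order of magnitude of the figures discussed in this subsection.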
Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> B. TIME DOMAIN <s> Current commercial systems-on-chips (SoCs) designs integrate an increasingly large number of predesigned cores and their number is predicted to increase significantly in the near future. For example, molecular-scale computing promises single or even multiple order-of-magnitude improvements in device densities. The network-on-chip (NoC) is an enabling technology for integration of large numbers of embedded cores on a single die. The existing method of implementing a NoC with planar metal interconnects is deficient due to high latency and significant power consumption arising out of long multi-hop links used in data exchange. The latency, power consumption and interconnect routing problems of conventional NoCs can be addressed by replacing or augmenting multi-hop wired paths with high-bandwidth single-hop long-range wireless links. This opens up new opportunities for detailed investigations into the design of wireless NoCs (WiNoCs) with on-chip antennas, suitable transceivers and routers. Moreover, as it is an emerging technology, the on-chip wireless links also need to overcome significant challenges pertaining to reliable integration. In this paper, we present various challenges and emerging solutions regarding the design of an efficient and reliable WiNoC architecture. <s> BIB001 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> B. TIME DOMAIN <s> We review the current state of the art on antennas for use in wireless networks on chips (WiNoCs) and also provide results on wireless channel characteristics in the WiNoC setting—the latter are largely absent from the literature. We first describe the motivation for constructing these miniature networks, aimed at improving efficiency of future multi-processor integrated circuits. We then discuss the implications for antennas: in addition to the usual antenna parameters for communication links (gain, impedance match, pattern), this includes important structural and multiple-access considerations. After a review of the literature and a summary of published antenna characteristics and future challenges, we present example results for a representative structure to illustrate antenna performance and WiNoC channel characteristics. <s> BIB002 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> B. TIME DOMAIN <s> Ubiquitous multicore processors nowadays rely on an integrated packet-switched network for cores to exchange and share data. The performance of these intra-chip networks is a key determinant of the processor speed and, at high core counts, becomes an important bottleneck due to scalability issues. To address this, several works propose the use of mm-wave wireless interconnects for intra-chip communication and demonstrate that, thanks to their low-latency broadcast and system-level flexibility, this new paradigm could break the scalability barriers of current multicore architectures. However, these same works assume 10+ Gb/s speeds and efficiencies close to 1 pJ/bit without a proper understanding on the wireless intra-chip channel. This paper first demonstrates that such assumptions do not hold in the context of commercial chips by evaluating losses and dispersion in them. 
Then, we leverage the system's monolithic nature to engineer the channel, this is, to optimize its frequency response by carefully choosing the chip package dimensions. Finally, we exploit the static nature of the channel to adapt to it, pushing efficiency-speed limits with simple tweaks at the physical layer. Our methods reduce the path loss and delay spread of a simulated commercial chip by 47 dB and 7.3x, respectively, enabling intra-chip wireless communications over 10 Gb/s and only 3.1 dB away from the dispersion-free case. <s> BIB003
While it seems that the path loss has reached levels ensuring a reasonable efficiency, little has been reported about the dispersive nature of the intra-/inter-chip wireless channels. This is of crucial importance, as dispersive channels limit the symbol rate of the transmissions which, coupled with the simple low-order modulations expected for this scenario, could be a hard constraint on the achievable data rate. In their theoretical work, Matolak et al. predicted worst-case delay spread values of several nanoseconds using the micro-reverberation chamber model at mmWave and THz frequencies . First measurements of the power-delay profile of open-chip schemes, on the contrary, yielded delay spread figures on the order of 100 ps . This is because the reverberation chamber model assumes full encasement and does not take dielectric losses into account, thus inflating the importance of the reflections. For flip-chip and custom packages, which fall in between these two extremes, the simulated delay spread has been of a few hundred picoseconds , BIB002 (see Figure 7 for a partial reproduction of data from BIB003 ). In this context, it may not be possible to reach the speeds of several tens of Gb/s promised in several works BIB001 , since the coherence bandwidth would be around a few GHz at most. Since the channel is quasi-static and quasi-deterministic, time-domain results deliver insight into the chip-scale propagation mechanisms. For instance, the open-chip measurement results from show that the first signal peak arrives significantly later than free-space propagation would predict, which suggests that surface waves along the air-wafer interface dominate. Within flip-chip structures, it can be shown that most dispersion comes from the reflections that signals suffer at the different interfaces within the package. Therefore, waveguide-like and enclosed structures are as crucial for minimizing delay spread as they are for path loss. With this in mind, Timoneda et al. proposed to optimize the flip-chip package taking dispersion into account BIB003 . For instance, it was shown that thinning down the silicon can have a positive effect on the delay spread, as shown in Figure 7 . With more exhaustive explorations, the authors were able to reduce the worst-case delay spread below 100 ps while maintaining a reasonable path loss, ensuring a chip-wide coherence bandwidth over 10 GHz.
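Delay spread figures like those above map directly into a coherence bandwidth estimate. The following Python sketch computes the RMS delay spread of a hypothetical three-tap power-delay profile (the tap delays and powers are invented for illustration and do not correspond to any measured package) and applies the common rule of thumb B_c ≈ 1/(5·τ_rms).

    import numpy as np

    def rms_delay_spread(taus_s, powers_lin):
        # RMS delay spread of a power-delay profile (PDP)
        p = np.asarray(powers_lin, dtype=float)
        p = p / p.sum()                       # normalize tap powers
        taus = np.asarray(taus_s, dtype=float)
        tau_mean = np.sum(p * taus)
        return np.sqrt(np.sum(p * (taus - tau_mean) ** 2))

    # Hypothetical 3-tap PDP: direct path plus two package reflections
    taus = [0.0, 150e-12, 400e-12]                        # arrival times (s)
    powers = 10 ** (np.array([0.0, -6.0, -12.0]) / 10)    # relative powers

    tau_rms = rms_delay_spread(taus, powers)
    print(f"tau_rms = {tau_rms * 1e12:.0f} ps")
    print(f"B_c ~ {1 / (5 * tau_rms) / 1e9:.1f} GHz")     # rule of thumb

For this made-up profile, the RMS delay spread comes out close to 100 ps and the resulting coherence bandwidth is around 2 GHz, in line with the few-GHz values discussed above.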
Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> C. CHANNEL MODELING METHODS <s> To further understand and explain the characteristics of integrated antennas in silicon substrates, an intuitive plane wave model is proposed. The model has been validated by quantitatively explaining the location of dips in the antenna gain versus frequency plots when a glass layer is inserted between a silicon wafer and a metal chuck using the interference effect between two propagating waves. These also convincingly demonstrated that the signal coupling between integrated antennas is due to wave phenomena rather than simple R-C coupling. Experiments have been carried out to characterize integrated dipole antennas. <s> BIB001 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> C. CHANNEL MODELING METHODS <s> Inserting an aluminum nitride (AlN) layer which acts as a dielectric propagating medium between a silicon wafer containing integrated antennas and a metal chuck emulating the role of a heat sink improves the antenna power transmission gain by ~8 dB at 15 GHz. AlN, with its high thermal conductivity, also alleviates the heat removal problem. With a 760-µm AlN layer, an on-chip wireless connection is demonstrated over a 2.2-cm distance, which is 3× the previously reported separation. <s> BIB002 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> C. CHANNEL MODELING METHODS <s> An intra-chip wireless interconnect system using on-chip antennas and ultrawideband (UWB) radios that operates in 22-29 GHz is studied in this paper. The on-chip antennas are meander monopoles of axial length 1 mm in silicon technology. A unique wireless channel is formed between a pair of on-chip transmit and receive antennas. The channel is characterized up to an interconnect distance of 40 mm. The system performance is evaluated in terms of bit-error-rate (BER) under the assumptions of perfect system synchronization and signal corruption from thermal and switching noises. As expected, the system performance degrades with interconnect distance and data rate. It achieves a better BER on the 5-kΩ·cm Si substrate than that on the 10-Ω·cm Si substrate. <s> BIB003 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> C. CHANNEL MODELING METHODS <s> A 43-GHz wireless inter-chip data link including antennas, transmitters, and receivers is presented. The industry standard bonding wires are exploited to provide high efficiency and low-cost antennas. This type of antennas can provide an efficient horizontal communication which is hard to achieve using conventional on-chip antennas. The system uses binary amplitude shift keying (ASK) modulation to keep the design compact and power efficient. The transmitter includes a differential to single-ended modulator and a two-stage power amplifier (PA). The receiver includes a low-noise amplifier (LNA), pre-amplifiers, envelope detectors (ED), a variable gain amplifier (VGA), and a comparator. The chip is fabricated in 180-nm SiGe BiCMOS technology. With power-efficient transceivers and low-cost high-performance antennas, the implemented inter-chip link achieves bit-error rate (BER) around 10⁻⁸ for 6 Gb/s over a distance of 2 cm.
The signal-to-noise ratio (SNR) of the recovered signal is about 24 dB with 18 ps of rms jitter. The transmitter and receiver consume 57 mW and 60 mW, respectively, including buffers. The bit energy efficiency excluding test buffers is 17 pJ/bit. The presented work shows the feasibility of a low power high data rate wireless inter-chip data link and wireless heterogeneous multi-chip networks. <s> BIB004 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> C. CHANNEL MODELING METHODS <s> This letter demonstrates the feasibility of applying I/O pins as chip Tx/Rx antennas for implementing wireless inter/intra-chip communications (WIICs). An innovative printed circuit board (PCB) medium is presented as a signal propagation channel, which is specially bounded by a metamaterial electromagnetic wave absorber to improve electromagnetic environment pollution. Presented is a 20.4-GHz WIIC communication system, mainly including a transmitter and a receiver. The bit-error-rate (BER) performance of a coherent binary phase-shift keying interconnect system is evaluated. It is shown that the system performance degrades as the separation distance of the transceivers increases. A data rate of 1 Gb/s with a BER at the level of 10⁻⁵ on the PCB investigated is achieved for the transmitted power of 10 dBm. <s> BIB005 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> C. CHANNEL MODELING METHODS <s> We review the current state of the art on antennas for use in wireless networks on chips (WiNoCs) and also provide results on wireless channel characteristics in the WiNoC setting—the latter are largely absent from the literature. We first describe the motivation for constructing these miniature networks, aimed at improving efficiency of future multi-processor integrated circuits. We then discuss the implications for antennas: in addition to the usual antenna parameters for communication links (gain, impedance match, pattern), this includes important structural and multiple-access considerations. After a review of the literature and a summary of published antenna characteristics and future challenges, we present example results for a representative structure to illustrate antenna performance and WiNoC channel characteristics. <s> BIB006 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> C. CHANNEL MODELING METHODS <s> On-chip wireless links operating at millimeter wave frequencies offer the most promising solution to overcome the multi-hop latency and high power consumption of metal interconnects in Network-on-Chip (NoC) platforms. Design of efficient transceivers, that are resilient to channel effects is essential to achieve high performance on-chip wireless communication. In this work, we present a spectrally efficient Orthogonal Frequency Division Multiplexing (OFDM) based transceiver, operating at mm-wave frequencies for on-chip wireless interconnects. The design targets to provide high data rate with low area and power overheads, while handling channel effects and inter symbol interference. It achieves data rate of 195.32 Gbps at 0.132 pJ/bit using 256 orthogonal subchannels and transmission bandwidth of 25 GHz. The area occupied is 0.092 mm² using 32 nm technology.
The system level evaluation of 64 core Wireless NoC (WNoC) with proposed OFDM scheme provides 42% and 61.6% reduction respectively in latency and energy as compared to wired mesh topology. <s> BIB007 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> C. CHANNEL MODELING METHODS <s> Wireless Network-on-Chip (WNoC) appears as a promising alternative to conventional interconnect fabrics for chip-scale communications. The study of the channel inside the chip is essential to minimize latency and power. However, this requires long and computationally-intensive simulations which take a lot of time. We propose and implement an analytical model of the EM propagation inside the package based on ray tracing. This model could compute the electric field intensity inside the chip reducing the computational time several orders of magnitude with an average mismatch of only 1.7 dB. <s> BIB008 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> C. CHANNEL MODELING METHODS <s> Wireless Network-on-Chip (WNoC) appears as a promising alternative to conventional interconnect fabrics for chip-scale communications. The WNoC paradigm has been extensively analyzed from the physical, network and architecture perspectives assuming mmWave band operation. However, there has not been a comprehensive study at this band for realistic chip packages and, thus, the characteristics of such wireless channel remain not fully understood. This work addresses this issue by accurately modeling a flip-chip package and investigating the wave propagation inside it. Through parametric studies, a locally optimal configuration for 60 GHz WNoC is obtained, showing that chip-wide attenuation below 32.6 dB could be achieved with standard processes. Finally, the applicability of the methodology is discussed for higher bands and other integrated environments such as a Software-Defined Metamaterial (SDM). <s> BIB009 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> C. CHANNEL MODELING METHODS <s> Ubiquitous multicore processors nowadays rely on an integrated packet-switched network for cores to exchange and share data. The performance of these intra-chip networks is a key determinant of the processor speed and, at high core counts, becomes an important bottleneck due to scalability issues. To address this, several works propose the use of mm-wave wireless interconnects for intra-chip communication and demonstrate that, thanks to their low-latency broadcast and system-level flexibility, this new paradigm could break the scalability barriers of current multicore architectures. However, these same works assume 10+ Gb/s speeds and efficiencies close to 1 pJ/bit without a proper understanding on the wireless intra-chip channel. This paper first demonstrates that such assumptions do not hold in the context of commercial chips by evaluating losses and dispersion in them. Then, we leverage the system's monolithic nature to engineer the channel, this is, to optimize its frequency response by carefully choosing the chip package dimensions. Finally, we exploit the static nature of the channel to adapt to it, pushing efficiency-speed limits with simple tweaks at the physical layer. 
Our methods reduce the path loss and delay spread of a simulated commercial chip by 47 dB and 7.3x, respectively, enabling intra-chip wireless communications over 10 Gb/s and only 3.1 dB away from the dispersion-free case. <s> BIB010 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> C. CHANNEL MODELING METHODS <s> The primary objective of this paper is to investigate the communication capabilities of short-range millimeter-wave (mmWave) communication among network-on-chip (NoC)-based multi-core processors integrated on a substrate board. This paper presents the characterization of transmission between on-chip antennas for both intra- and inter-chip communication in multi-chip computing systems, such as server blades or embedded systems. Through simulation at 30 GHz, we have characterized the inter-chip transmission and studied the electric field distribution to explain the transmission characteristics. It is shown that the antenna radiation efficiency reduces with a decrease in the resistivity of silicon. The simulation results have been validated with fabricated antennas in different orientations on silicon dies that can communicate with inter-chip transmission coefficients ranging from −45 to −60 dB while sustaining bandwidths up to 7 GHz. Using measurements, a large-scale log-normal channel model is derived, which can be used for system-level architecture design. Using the same simulation environment, we perform design and analysis at 60 GHz to provide another non-interfering frequency channel for inter-chip communication in order to increase the physical bandwidth of the interconnection architecture. Furthermore, densely packed multilayer copper wires in NoCs have been modeled in this paper to study their impact on the wireless transmission for both intra- and inter-chip links. The dense orthogonal multilayer wires are shown to be equivalent to copper sheets. In addition, we have shown that the antenna radiation efficiency reduces in the presence of these densely packed wires placed in the close proximity of the antenna elements. Using this model, the reduction of inter-chip transmission is quantified to be about 20 dB compared with a system with no wires. Furthermore, the transmission characteristics of the antennas resonating at 60 GHz in a flip-chip packaging environment are also presented. <s> BIB011
From the perspective of the channel modeling method, EM field analysis has only been utilized in the work by Yan et al. . Using numerical methods over field integrals, the authors obtained the field distributions of a Hertzian dipole over an open chip at 15-90 GHz. Moreover, different combinations of layers emulating the thermal dissipation material or custom dielectric waveguides were analyzed, following the discussions from BIB002 , BIB001 . It was concluded that, depending on the frequency, the surface-wave mode could be important and that the waveguiding layers can deliver important enhancements. Besides this and the micro-reverberation and two-ray models proposed in , most works have focused on full-wave solving and actual measurements. Due to the relatively reduced size of the environment at mmWave frequencies, there have been few serious attempts at using ray tracing in this band, a notable exception being BIB008 . From the perspective of the considered antenna, there has been a shift from the printed dipole and its variants , which facilitate the fabrication and measurement of samples, to a set of research groups that have considered through-silicon vertical monopoles , BIB009 or monopoles embedded in other dielectrics BIB006 in full-wave simulation studies. Package-wise, open-chip or custom packages have been evaluated most frequently BIB003 , , BIB007 for the same reason, despite the pioneering remarks on flip-chip package compatibility made in the early stages of this research area BIB002 . However, there has been a recent surge of works considering schemes compatible with flip-chip packages BIB005 , BIB010 , BIB011 , which have mostly been based on full-wave simulation due to the cost of instrumenting flip-chip packages. Finally, bond-wire antennas/packages BIB004 have been less relevant given the widespread use of flip-chip packaging.
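As a final illustration of the modeling methods discussed here, the Python sketch below implements the simplest of them, a two-ray model: a direct path plus a single specular reflection off a planar boundary (e.g., a package lid or heat spreader). The geometry and the fixed reflection coefficient are assumptions of this example, not parameters of the cited two-ray model; a realistic package would require the actual layer stack and frequency-dependent reflection coefficients.

    import numpy as np

    def two_ray_gain_db(d_m, h_tx_m, h_rx_m, freq_hz, refl=-0.7):
        # Coherent sum of a direct ray and one reflected ray (image method)
        lam = 3e8 / freq_hz
        k = 2 * np.pi / lam
        d_los = np.hypot(d_m, h_tx_m - h_rx_m)   # direct path length
        d_ref = np.hypot(d_m, h_tx_m + h_rx_m)   # reflected path length
        field = (np.exp(-1j * k * d_los) / d_los
                 + refl * np.exp(-1j * k * d_ref) / d_ref)
        # Channel gain between isotropic antennas
        return 20 * np.log10(lam / (4 * np.pi) * np.abs(field))

    # 60 GHz link across 10 mm, antennas 0.5 mm away from the reflector
    print(f"{two_ray_gain_db(10e-3, 0.5e-3, 0.5e-3, 60e9):.1f} dB")

Sweeping the distance or the spacing in such a model exposes the alternating constructive and destructive interference fringes that full-wave studies of chip packages also display, which is precisely why the package dimensions can be engineered to sit on a favorable fringe.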
V. CHANNEL MODELS IN THE TERAHERTZ BAND
Compared to RF and mmWave technology, THz technology is still in its infancy. More specifically, despite the major progress of the recent decade, the development of on-chip, compact and energy-efficient THz signal sources and detectors and ultra-broadband modulators and demodulators is still an open challenge. Nevertheless, beyond traditional electronic and photonic approaches BIB008, BIB009, the utilization of new materials such as graphene and the exploitation of new physics, including plasmonics BIB005, are enabling the development of miniature on-chip direct THz sources and detectors BIB004, BIB006, modulators and demodulators BIB002, BIB007, and antennas BIB003 that can be utilized in WNoC. Motivated by these results, there has been a rising interest in the concept of on-chip THz communications and, as a result, a few pioneering works on channel modeling can be found in the related literature.

Lee et al. simulate the intra-chip channel at 300 GHz with a full-wave solver (HFSS), with the antennas placed in a polyimide layer of an open-chip scheme BIB001. The authors report an attenuation of around 40 dB at 1 cm distance and argue that, compared with a conventional on-chip antenna over silicon, placing the on-chip antenna in the low-loss dielectric polyimide layer improves the channel loss by 20-30 dB.

In one of the first attempts at THz chip-scale propagation modeling, Chen et al. analyze the EM fields in the CMOS chip by using the Sommerfeld integration method, and validate the results with the full-wave solver HFSS BIB011. As the main observation, the path loss is highly frequency-selective due to surface-wave and guided-wave propagation, as presented in Fig. 8. The path loss oscillates periodically across the THz band, with a period corresponding to the frequency spacing between two adjacent surface-wave modes. In this work, the impact of the chip design is analyzed, and chip design guidelines with the potential to improve the WNoC channel are provided: a thinner underfill layer and the insertion of a bottom layer between the silicon substrate and the heat sink both enhance the path gain. In addition, the thickness of the silicon substrate has a great impact on wave propagation and needs to be selected carefully according to the operating frequency.

A two-ray model has also been utilized to estimate the path loss in an intra-chip channel, accounting for the line-of-sight path and the path reflected from the plane on which the transmit and receive antennas are located. Below the breakpoint $d_b = 2\pi h_T h_R/\lambda$, where $h_T$ and $h_R$ are the heights of the transmit and receive antennas, the path loss exponent is 2; beyond the breakpoint, it increases to 4. On the one hand, the log-scale path loss is linearly proportional to the transmission distance for frequencies under 10 THz and antenna heights smaller than 100 µm. On the other hand, when the frequency increases to 100 THz, the log-scale path loss is highly distance-selective over the range from $10^{-5}$ to $10^{-2}$ m. The validity of this model, however, remains to be confirmed for packaging schemes other than the open chip.
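To make the dual-slope behavior of this two-ray model concrete, the following minimal Python sketch evaluates the path loss around the breakpoint. It is an illustration under assumptions, not the model from the cited work: the exponent-2 slope is anchored to a free-space-like reference, continuity is enforced at $d_b$, and the antenna heights and frequency are hypothetical values.

```python
import numpy as np

C = 3e8  # speed of light in vacuum (m/s)

def two_ray_path_loss_db(d, f, h_t=100e-6, h_r=100e-6):
    """Dual-slope two-ray path loss (dB) for an intra-chip link.

    Below the breakpoint d_b = 2*pi*h_t*h_r/lambda the path loss
    exponent is 2 (free-space-like); beyond it, the exponent is 4.
    Heights and the free-space reference are illustrative assumptions.
    """
    lam = C / f                          # wavelength (m)
    d_b = 2 * np.pi * h_t * h_r / lam    # breakpoint distance (m)
    d = np.asarray(d, dtype=float)
    below = 20 * np.log10(4 * np.pi * d / lam)         # exponent-2 slope
    pl_at_break = 20 * np.log10(4 * np.pi * d_b / lam)
    beyond = pl_at_break + 40 * np.log10(d / d_b)      # exponent-4 slope
    return np.where(d <= d_b, below, beyond)

# Hypothetical 1 THz link: the breakpoint falls at ~0.21 mm
distances = np.array([50e-6, 100e-6, 500e-6, 1e-3, 1e-2])
print(two_ray_path_loss_db(distances, 1e12))
```

At 1 THz with 100 µm antenna heights, the transition from 20 dB/decade to 40 dB/decade occurs at roughly 0.2 mm, i.e., well within intra-chip distances.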
Recently, Chen et al. developed a multi-ray model for intra-chip channels within flip-chip packaging structures in the THz band (0.1-1 THz) by using the ray-tracing method BIB010. Based on the developed channel model, the following observations can be drawn. First, the intra-chip channel is highly frequency-selective due to the multi-path effect, even though there is no molecular absorption inside the chip. Second, a high-resistivity substrate results in a large delay spread and, thus, a narrow coherence bandwidth; this is owing to the fact that paths with longer times of arrival suffer less attenuation in the substrate. Third, the capacity of the intra-chip channel can reach 150 Gbps and 1 Tbps with a BER below $10^{-14}$ when the transmit power is 1 dBm and 10 dBm, respectively, at a transmission distance of 40 mm.
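As a rough sanity check on the order of magnitude of such capacity figures, the following sketch computes the flat-channel Shannon capacity of a single link under thermal noise only. The 50 dB total loss and 100 GHz bandwidth are hypothetical placeholders, not parameters from BIB010, whose capacity results are based on the full ray-tracing channel model.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant (J/K)
T = 300.0           # assumed operating temperature (K)

def shannon_capacity_bps(p_tx_dbm, path_loss_db, bandwidth_hz):
    """Flat-channel Shannon capacity of a single intra-chip link.

    Back-of-the-envelope only: frequency-flat path loss, thermal
    noise, no interference or hardware impairments.
    """
    p_rx_w = 10 ** ((p_tx_dbm - path_loss_db - 30) / 10)  # received power (W)
    noise_w = K_B * T * bandwidth_hz                       # noise power (W)
    return bandwidth_hz * np.log2(1 + p_rx_w / noise_w)

# Hypothetical numbers: 10 dBm Tx, 50 dB loss, 100 GHz of bandwidth
c = shannon_capacity_bps(10, 50, 100e9)
print(f"{c / 1e12:.2f} Tbps")  # ~0.79 Tbps, i.e., Tbps-order capacity
```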
VI. CHANNEL MODELS IN THE OPTICAL SPECTRUM
Moving up in the spectrum, the optical frequency bands, namely, infrared, visible and, to a lesser extent, ultraviolet frequencies, enter into consideration. Compared to THz technology, the wide adoption of fiber-optic systems in wired telecommunication networks has led to the development of compact and energy-efficient silicon-compatible lasers and photodetectors BIB001. In the last decade, the field of silicon photonics has further led to non-traditional ways of modulating light on-chip (e.g., the on-chip orbital angular momentum laser of BIB004) and even to fully optical processors. Given that light is already utilized for intra-chip and inter-chip wired communications, the possibility of reusing some of the existing components to enable wireless optical communications in WNoC has recently been considered.

In BIB002, a wireless optical link for inter-chip communication is designed by means of full-wave electromagnetic simulations. The focus of that paper is on designing optical Yagi-Uda directional antennas and their matching network to an optical waveguide, and on studying the benefits of directional optical wireless links when compared to both waveguided modes and omnidirectional optical links. The distances considered in the analysis are on the order of tens of µm. The work does not include a channel model per se, and the peculiarities of chip-scale communications, with the exception of the size of the antennas, are not taken into account. Similarly, in BIB003, a nano-patch antenna fed by a plasmonic waveguide is designed for broadband operation at near- and mid-infrared frequencies, with full-wave electromagnetic simulations utilized to numerically optimize the design. An on-chip plasmonic horn nano-antenna has also been designed, and its radiation efficiency and broadband behavior have been evaluated through full-wave electromagnetic simulations. Still in the context of optical nano-antenna design, but following a fully analytical approach, the performance of optical nano-antennas in transmission and reception is studied in BIB005. Fundamental limits on the radiation efficiency and effective area of optical dipole nano-antennas are derived, and the impact of optics-specific properties, such as the complex-valued electrical conductivity of metals at optical frequencies, on the generation of Surface Plasmon Polariton waves and, ultimately, on the achievable radiation efficiency is explored. Other antenna-focused works include the design of waveguide-fed Vivaldi antennas, whose performance in an on-chip optical wireless link is investigated through full-wave numerical simulations.

The first attempt at analytically modeling the intra- and inter-chip optical wireless channel was conducted in BIB006. In this work, the channel frequency response was analytically derived and numerically validated through full-wave simulations, taking into account the impact of absorption by silicon, reflection and refraction at the chip material layers, and reflection and diffraction around the edges of the optical transmitters and receivers [Fig. 9(a)]. The results show a path loss approaching 100 dB for distances on the order of 100 µm [Fig. 9(b)]. Longer communication distances could be achieved, for example, by means of the aforementioned directional optical antenna designs BIB002. As opposed to analytical and full-wave numerical models, a ray-tracing approach is followed in BIB007, showing similar results and performance bounds.
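To give a feel for why on-chip optical path loss reaches such values, the following sketch combines an isotropic spreading term, evaluated at the wavelength inside silicon, with a Beer-Lambert absorption term. It models the line-of-sight component only; the refractive index and absorption coefficient are assumed values, and the full multi-path model of BIB006 additionally accounts for layer reflections and edge diffraction.

```python
import numpy as np

def on_chip_optical_los_loss_db(d, wavelength0=1550e-9, n_si=3.48, alpha=1e4):
    """Simplified LoS path loss (dB) for an on-chip optical link.

    Spreading loss uses the in-silicon wavelength wavelength0/n_si;
    absorption follows Beer-Lambert with coefficient alpha (1/m).
    n_si and alpha are illustrative assumptions for doped silicon.
    """
    lam = wavelength0 / n_si                           # wavelength in silicon
    spreading_db = 20 * np.log10(4 * np.pi * d / lam)  # isotropic spreading
    absorption_db = 10 * np.log10(np.e) * alpha * d    # Beer-Lambert in dB
    return spreading_db + absorption_db

# Loss over typical intra-chip distances at a 1550 nm carrier
for d in (10e-6, 50e-6, 100e-6):
    print(f"{d * 1e6:5.0f} um -> {on_chip_optical_los_loss_db(d):5.1f} dB")
```

Even before layer reflections and diffraction are included, the spreading term alone approaches 70 dB at 100 µm, consistent with the order of magnitude reported in BIB006.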
Another aspect that has recently received attention is the role of multi-user interference and the spatial multiplexing of directional optical links. Compared to the mmWave and, to a lesser extent, the THz bands, the much smaller size of optical antennas allows many of them to be integrated per chip and potentially utilized simultaneously over different spatial beams. In BIB008, BIB009, the authors investigate both the mutual coupling between neighboring optical antennas and the impact of cross-link interference on the achievable bit error rate; a geometric sketch of the resulting reuse-distance design problem is given below. As discussed in Sec. II-B, optical antennas can be designed to behave as directional or omnidirectional radiators, depending on the target communication distance of the specific on-chip application. While mainly theoretical for the time being, the fundamental understanding and models are ready for communication and networking protocol designers to leverage.
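To illustrate the reuse-distance question studied in BIB008, BIB009 in its simplest form, the sketch below computes how far apart two co-channel parallel links must be for the interferer to sit a target margin below the signal. It assumes identical omnidirectional antennas and a pure power-law loss with exponent n, and therefore ignores the near-field coupling and pulse-shaping effects that the cited FDTD- and system-level analyses capture.

```python
def min_reuse_distance(d_link, target_sir_db, n=2.0):
    """Minimum spacing between parallel co-channel optical links.

    Solves 10*n*log10(d_reuse / d_link) = target_sir_db for d_reuse,
    assuming both links share the same power-law path loss exponent n
    and omnidirectional antennas (an illustrative assumption).
    """
    return d_link * 10 ** (target_sir_db / (10 * n))

# A 100 um link requiring 20 dB of signal-to-interference ratio
d_reuse = min_reuse_distance(100e-6, 20.0)
print(f"reuse distance: {d_reuse * 1e6:.0f} um")  # 1000 um with n = 2
```

Directional antennas, as in BIB002, would shrink this spacing considerably, which is precisely the motivation for spatially multiplexed beams on-chip.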