Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks

Topical Information Diffusion on Social Networks

Abstracts of cited works (BIB001-BIB012):

BIB001: Preface (B. Bollobas). Paul Erdos at Seventy-Five (B. Bollobas). Packing Smaller Graphs into a Graph (J. Akiyama, F. Nakada, S. Tokunaga). The Star Arboricity of Graphs (I. Algor, N. Alon). Graphs with a Small Number of Distinct Induced Subgraphs (N. Alon, B. Bollobas). Extensions of Networks with Given Diameter (J.-C. Bermond, K. Berrada, J. Bond). Confluence of Some Presentations Associated with Graphs (N. Biggs). Long Cycles in Graphs with No Subgraphs of Minimal Degree 3 (B. Bollobas, G. Brightwell). First Cycles in Random Directed Graph Processes (B. Bollobas, S. Rasmussen). Trigraphs (J.A. Bondy). On Clustering Problems with Connected Optima in Euclidean Spaces (E. Boros, P.L. Hammer). Some Sequences of Integers (P.J. Cameron). 1-Factorizing Regular Graphs of High Degree - An Improved Bound (A.G. Chetwynd, A.J.W. Hilton). Graphs with Small Bandwidth and Cutwidth (F.R.K. Chung, P.D. Seymour). Simplicial Decompositions of Graphs: A Survey of Applications (R. Diestel). On the Number of Distinct Induced Subgraphs of a Graph (P. Erdos, A. Hajnal). On the Number of Partitions of n Without a Given Subsum (I) (P. Erdos, J.L. Nicolas, A. Sarkozy). The First Cycles in an Evolving Graph (P. Flajolet, D.E. Knuth, B. Pittel). Covering the Complete Graph by Partitions (Z. Furedi). A Density Version of the Hales-Jewett Theorem for k = 3 (H. Furstenburg, Y. Katznelson). On the Path-Complete Bipartite Ramsey Number (R. Haggkvist). Towards a Solution of the Dinitz Problem? (R. Haggkvist). A Note on the Latin Squares with Restricted Support (R. Haggkvist). Pseudo-Random Hypergraphs (J. Haviland, A. Thomason). Bouquets of Geometric Lattices: Some Algebraic and Topological Aspects (M. Laurent, M. Deza). A Short Proof of a Theorem of Vamos on Matroid Representations (I. Leader). An On-Line Graph Coloring Algorithm with Sublinear Performance Ratio (L. Lovasz, M. Saks, W.T. Trotter). The Partite Construction and Ramsey Set Systems (J. Nesetril, V. Rodl). Scaffold Permutations (P. Rosenstiehl). Bounds on the Measurable Chromatic Number of R^n (L.A. Szekely, N.C. Wormald). A Simple Linear Expected Time Algorithm for Finding a Hamilton Path (A. Thomason). Dense Expanders and Pseudo-Random Bipartite Graphs (A. Thomason). Forbidden Graphs for Degree and Neighbourhood Conditions (D.R. Woodall).

BIB002: Similarity breeds connection. This principle—the homophily principle—structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship. The result is that people's personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics. Homophily limits people's social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience. Homophily in race and ethnicity creates the strongest divides in our personal environments, with age, religion, education, occupation, and gender following in roughly that order. Geographic propinquity, families, organizations, and isomorphic positions in social systems all create contexts in which homophilous relations form. Ties between nonsimilar individuals also dissolve at a higher rate, which sets the stage for the formation of niches (localize...

BIB003: Motivated by several applications, we introduce various distance measures between "top k lists." Some of these distance measures are metrics, while others are not. For each of these latter distance measures, we show that they are "almost" a metric in the following two seemingly unrelated aspects: (i) they satisfy a relaxed version of the polygonal (hence, triangle) inequality, and (ii) there is a metric with positive constant multiples that bound our measure above and below. This is not a coincidence: we show that these two notions of almost being a metric are the same. Based on the second notion, we define two distance measures to be equivalent if they are bounded above and below by constant multiples of each other. We thereby identify a large and robust equivalence class of distance measures. Besides the applications to the task of identifying good notions of (dis)similarity between two top k lists, our results imply polynomial-time constant-factor approximation algorithms for the rank aggregation problem with respect to a large class of distance measures.

BIB004: We study a map of the Internet (at the autonomous systems level), by introducing and using the method of k-shell decomposition and the methods of percolation theory and fractal geometry, to find a model for the structure of the Internet. In particular, our analysis uses information on the connectivity of the network shells to separate, in a unique (no parameters) way, the Internet into three subcomponents: (i) a nucleus that is a small (≈100 nodes), very well connected globally distributed subgraph; (ii) a fractal subcomponent that is able to connect the bulk of the Internet without congesting the nucleus, with self-similar properties and critical exponents predicted from percolation theory; and (iii) dendrite-like structures, usually isolated nodes that are connected to the rest of the network through the nucleus only. We show that our method of decomposition is robust and provides insight into the underlying structure of the Internet and its functional consequences. Our approach of decomposing the network is general and also useful when studying other complex networks.

BIB005: Social networks are of interest to researchers in part because they are thought to mediate the flow of information in communities and organizations. Here we study the temporal dynamics of communication using on-line data, including e-mail communication among the faculty and staff of a large university over a two-year period. We formulate a temporal notion of "distance" in the underlying social network by measuring the minimum time required for information to spread from one node to another -- a concept that draws on the notion of vector-clocks from the study of distributed computing systems. We find that such temporal measures provide structural insights that are not apparent from analyses of the pure social network topology. In particular, we define the network backbone to be the subgraph consisting of edges on which information has the potential to flow the quickest. We find that the backbone is a sparse graph with a concentration of both highly embedded edges and long-range bridges -- a finding that sheds new light on the relationship between tie strength and connectivity in social networks.

BIB006: Operators of online social networks are increasingly sharing potentially sensitive information about users and their relationships with advertisers, application developers, and data-mining researchers. Privacy is typically protected by anonymization, i.e., removing names, addresses, etc. We present a framework for analyzing privacy and anonymity in social networks and develop a new re-identification algorithm targeting anonymized social-network graphs. To demonstrate its effectiveness on real-world networks, we show that a third of the users who can be verified to have accounts on both Twitter, a popular microblogging service, and Flickr, an online photo-sharing site, can be re-identified in the anonymous Twitter graph with only a 12% error rate. Our de-anonymization algorithm is based purely on the network topology, does not require creation of a large number of dummy "sybil" nodes, is robust to noise and all existing defenses, and works even when the overlap between the target network and the adversary's auxiliary information is small.

BIB007: Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85%) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it.

BIB008: Spreading of information, ideas or diseases can be conveniently modelled in the context of complex networks. An analysis now reveals that the most efficient spreaders are not always necessarily the most connected agents in a network. Instead, the position of an agent relative to the hierarchical topological organization of the network might be as important as its connectivity.

BIB009: Social networks have emerged as a critical factor in information dissemination, search, marketing, expertise and influence discovery, and potentially an important tool for mobilizing people. Social media has made social networks ubiquitous, and also given researchers access to massive quantities of data for empirical analysis. These data sets offer a rich source of evidence for studying dynamics of individual and group behavior, the structure of networks and global patterns of the flow of information on them. However, in most previous studies, the structure of the underlying networks was not directly visible but had to be inferred from the flow of information from one individual to another. As a result, we do not yet understand dynamics of information spread on networks or how the structure of the network affects it. We address this gap by analyzing data from two popular social news sites. Specifically, we extract social networks of active users on Digg and Twitter, and track how interest in news stories spreads among them. We show that social networks play a crucial role in the spread of information on these sites, and that network structure affects dynamics of information flow.

BIB010: Social influence can be described as power - the ability of a person to influence the thoughts or actions of others. Identifying influential users on online social networks such as Twitter has been actively studied recently. In this paper, we investigate a modified k-shell decomposition algorithm for computing user influence on Twitter. The input to this algorithm is the connection graph between users as defined by the follower relationship. User influence is measured by the k-shell level, which is the output of the k-shell decomposition algorithm. Our first insight is to modify this k-shell decomposition to assign logarithmic k-shell values to users, producing a measure of users that is surprisingly well distributed in a bell curve. Our second insight is to identify and remove peering relationships from the network to further differentiate users. In this paper, we include findings from our study.

BIB011: Social networks play a fundamental role in the diffusion of information. However, there are two different ways of how information reaches a person in a network. Information reaches us through connections in our social networks, as well as through the influence of external out-of-network sources, like the mainstream media. While most present models of information adoption in networks assume information only passes from a node to node via the edges of the underlying network, the recent availability of massive online social media data allows us to study this process in more detail. We present a model in which information can reach a node via the links of the social network or through the influence of external sources. We then develop an efficient model parameter fitting technique and apply the model to the emergence of URL mentions in the Twitter network. Using a complete one month trace of Twitter we study how information reaches the nodes of the network. We quantify the external influences over time and describe how these influences affect the information adoption. We discover that the information tends to "jump" across the network, which can only be explained as an effect of an unobservable external influence on the network. We find that only about 71% of the information volume in Twitter can be attributed to network diffusion, and the remaining 29% is due to external events and factors outside the network.

BIB012: Current social media research mainly focuses on temporal trends of the information flow and on the topology of the social graph that facilitates the propagation of information. In this paper we study the effect of the content of the idea on the information propagation. We present an efficient hybrid approach based on a linear regression for predicting the spread of an idea in a given time frame. We show that a combination of content features with temporal and topological features minimizes prediction error. Our algorithm is evaluated on Twitter hashtags extracted from a dataset of more than 400 million tweets. We analyze the contribution and the limitations of the various feature types to the spread of information, demonstrating that content aspects can be used as strong predictors thus should not be disregarded. We also study the dependencies between global features such as graph topology and content features.
In a pioneering study, BIB007 suggest that information diffuses on Twitter-like social microblogging platforms in a manner similar to news media. They show that, counting the original tweet and its retweets, and regardless of the number of followers of the tweet's originator, a tweet reaches about 1,000 users on average. This supports the notion that such microblogging networks are hybrid in nature, combining the characteristics of social networks and information networks. Their dataset comprises 41.7 million Twitter users, 1.47 billion social followership edges and 106 million tweets. They observe that Twitter trends differ from traditional social network trends, with lower-than-expected degrees of separation and a non-power-law distribution of followers. Reciprocity on Twitter is low compared to traditional social networks; however, the reciprocated relationships exhibit homophily [BIB002] to an extent. They rank Twitter users by PageRank over the follower network, by number of followers, and by number of retweets. They find that the rankings by PageRank and by number of followers are similar, but the ranking by retweets is significantly different. They measure this using an optimistic variant of the generalization of Kendall's tau proposed by BIB003, setting the penalty p = 0. They observe that a significant proportion of live news of a broadcast nature (such as accidents and sports) breaks out on Twitter ahead of CNN, a traditional online medium. They note that around 20% of Twitter users participate in trending topics, and around 15% of those participants take part in more than 10 topics in 4 months. They observe that the active periods of most trends last a week or shorter. They also investigate whether favoritism exists in retweets. For this, assuming user i exchanges r_ij retweets with user j, they compute the disparity function Y(k) = Σ_j (r_ij / Σ_l r_il)², averaged over all vertices having made / received k retweets. If followers tend to retweet evenly, then kY(k) ∼ 1. And kY(k) ∼ k if only a small subset of followers retweet.
Experimentally, they observe a linear correlation with k, which indicates that retweets contain favoritism: people retweet only from a small number of people, and only a subset of a user's followers tend to retweet. In effect, this indicates that, given the user originating the information, only a few users influence the information to diffuse further via retweets. BIB008 show that the most central or highly connected people are not necessarily the best spreaders of information; often, those located at the network core are. They identify the best spreaders by k-shell decomposition analysis [BIB001] [BIB004] [Seidman 1983]. They further show that, when more than one spreader is considered together, the distance between the spreaders plays a critical role in determining the spread level. They apply the Susceptible-Infectious-Recovered (SIR) and Susceptible-Infectious-Susceptible (SIS) models [Heesterbeek 2000] [Hethcote 2000] to four different social networks: an email network of a university department in London, a blogging community (LiveJournal.com), a contact network of inpatients in a Swedish hospital, and "a network of actors that have co-starred in movies labeled by imdb.com as adult". They use a small value of β, "the probability that an infectious vertex will infect a susceptible neighbor", keeping the infected population fraction small. Using k-shell (k-core) decomposition, they assign a coreness index k_S, an integer, to each vertex of degree k, capturing the depth (layer, or k-shell) of the network that the vertex belongs to. The coreness index k_S is assigned such that the more centrally a vertex is located in the graph, the higher its k_S value; the innermost vertices thereby form the graph core.
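The favoritism measure above can be sketched as follows; the disparity form Y = Σ_j (r_ij / Σ_l r_il)² and all identifier names are our assumptions for illustration, not the authors' code.

```python
from collections import defaultdict

def disparity(retweet_counts):
    """Disparity of one user's retweet weights: sum of squared weight shares.
    retweet_counts maps partner -> retweets exchanged with that partner.
    (Assumed reconstruction of the Y(k) favoritism measure.)"""
    total = sum(retweet_counts.values())
    if total == 0:
        return 0.0
    return sum((c / total) ** 2 for c in retweet_counts.values())

def k_times_Y(users):
    """Average k * Y(k) over users grouped by number of retweet partners k."""
    by_k = defaultdict(list)
    for counts in users:
        k = len(counts)
        if k:
            by_k[k].append(k * disparity(counts))
    return {k: sum(vals) / len(vals) for k, vals in by_k.items()}

# Even retweeting over 10 partners gives k*Y(k) ~ 1, while retweeting
# dominated by a single partner pushes k*Y(k) toward k.
even = [{f"u{i}": 1 for i in range(10)}]
skewed = [{f"u{i}": 100 if i == 0 else 1 for i in range(10)}]
```

A linear growth of kY(k) with k, as reported above, then signals that retweets concentrate on a few partners.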
If (k_S, k) are the coreness and degree of vertex i (the origin of the epidemic) and "Υ(k_S, k) the union of all the N(k_S, k) such vertices", then the average population infected under SIR-based spreading, averaged over all such origins, is M(k_S, k) = Σ_{i ∈ Υ(k_S,k)} M_i / N(k_S, k), where M_i is the population infected by an epidemic originating at vertex i. Their analysis yields three general results. (a) A number of poor spreaders exist among the hubs on the network periphery (large k, low k_S). (b) Infected nodes belonging to the same k-shell give rise to similar epidemic outbreaks, irrespective of the degree of the origin of infection. (c) The "inner core of the network" comprises the most efficient disease (information) spreaders, independent of their degree. They empirically observe that spreading influence is better predicted by the k-shell index of a node than by its degree, as well as than by betweenness centrality. An outbreak starting at the network core (large k_S) finds many paths over which the information can spread through the whole network, regardless of the degree of the vertex. In a subsequent work, BIB010 modify the k-shell decomposition algorithm to use log-scale mapping, which produces fewer but more appropriate k-shell values. BIB005 propose a temporal notion of social network distance, using the shortest time needed for information to reach one vertex from another. They find that structural information that is not evident from analyzing the topology of the social network can be obtained from such temporal measures. They define a network backbone, a subgraph in which the information flows the quickest, and experimentally show that the network backbone for information propagation on a social network graph is sparse, with a mix of long-range bridges and strongly embedded edges. They demonstrate this on two email datasets and on user communications among Wikipedia admins and editors.
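The k-shell (k-core) decomposition used above can be sketched by the standard peeling procedure; the graph and function names here are illustrative, not taken from the cited works.

```python
def k_shell_indices(adj):
    """Assign each vertex its k-shell index k_S by iteratively peeling
    minimum-degree vertices (k-core decomposition).
    adj: dict mapping vertex -> set of neighbours (undirected graph)."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    shell = {}
    k = 0
    while adj:
        k = max(k, min(len(ns) for ns in adj.values()))
        peel = [v for v, ns in adj.items() if len(ns) <= k]
        while peel:
            v = peel.pop()
            if v not in adj:
                continue
            shell[v] = k
            for u in adj.pop(v):
                if u in adj:
                    adj[u].discard(v)
                    if len(adj[u]) <= k:
                        peel.append(u)
    return shell

# A triangle {a, b, c} with a pendant vertex d: the triangle forms the
# 2-shell core, while the pendant sits in the 1-shell periphery.
shells = k_shell_indices({"a": {"b", "c", "d"}, "b": {"a", "c"},
                          "c": {"a", "b"}, "d": {"a"}})
```

Higher k_S marks vertices deeper in the core, which the study above identifies as the most efficient spreaders.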
To obtain the temporal notion of social network distance, they quantify how up-to-date each vertex v is about each other vertex u at time t. For this, they determine the largest t′ < t such that information leaving vertex u at time t′ can reach v at or before time t. This largest value of t′, denoted φ_{v,t}(u), is the view of v towards u at time t. They define the "information latency of u with respect to v at time t" as "how much v's view of u is out-of-date at time t", quantified as (t − φ_{v,t}(u)). Iterating over all vertices, they collect the views of v towards all vertices in the graph at time t into a single vector φ_{v,t} = (φ_{v,t}(u) : u ∈ V), the vector clock of vertex v at time t; φ_{v,t} is updated whenever v receives a communication. They define the instantaneous backbone of a network using the concept of essential edges. In the backbone, "an edge (v, w) is essential at time t if the value φ_{w,t}(v) is the result of a vector-clock update directly from v, via some communication event (v, w, t′), where t′ < t". Intuitively, an edge (v, w) is essential if the most up-to-date view that the target w has of the source v came via direct communication over the edge, rather than via an indirect path over other edges. They define the backbone H_t of the graph at time t to have the vertex set V, and the subset of edges of the original graph G that are essential at time t. Using this, and assuming a perfectly periodic communication pattern between vertex pairs, they develop a notion of aggregate backbone by aggregating communication over the entire period of observation. For each edge (v, w) where ρ_{v,w} > 0 (v has sent w at least one message) within the time period [0, T], the delay of the edge is defined as δ_{v,w} = T/ρ_{v,w}, which simply approximates the communications from v to w as temporally evenly spaced. They assign weight δ_{v,w} to each edge (v, w), obtaining G_δ from G.
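The vector-clock update described above can be sketched as follows, for a time-ordered stream of communication events; the representation as nested dicts is our assumption.

```python
def vector_clocks(vertices, events):
    """phi[v][u]: the latest time t' such that information leaving u at t'
    could have reached v through the communication events seen so far.
    events: (sender, receiver, time) triples, processed in time order."""
    NEVER = float("-inf")
    phi = {v: {u: NEVER for u in vertices} for v in vertices}
    for sender, receiver, t in sorted(events, key=lambda e: e[2]):
        view = phi[receiver]
        for u in vertices:                 # inherit the sender's views
            view[u] = max(view[u], phi[sender][u])
        view[sender] = t                   # fresh news from the sender itself
    return phi

# a -> b at time 1, then b -> c at time 2: c's view of a is time 1, so the
# information latency of a with respect to c at time 2 is 2 - 1 = 1.
phi = vector_clocks(["a", "b", "c"], [("a", "b", 1), ("b", "c", 2)])
```

An edge is then essential at time t when the receiver's best view of the sender came from such a direct update rather than an inherited one.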
In this aggregate setting, where communications are spaced evenly, the path minimizing the sum of delays is the path over which information reaches the fastest between a pair of vertices. They define essential edges in the aggregate sense on G_δ, and define H*, the aggregate backbone, constituted by only these essential aggregate edges. They define the range of an edge e = (v, w) as the length of the shortest unweighted alternate path from v to w over the social network if e were deleted. On a typical social network this value is often observed to be 2, as most pairs of social connections tend to have common (shared) friends. The embeddedness of an edge e = (v, w) is intuitively the fraction of neighbors common to both v and w: if N_v and N_w respectively denote the neighbor sets of v and w, the embeddedness of e is |N_v ∩ N_w| / |N_v ∪ N_w|. Endpoints of edges with high embeddedness have many common neighbors, and hence occupy dense clusters. Experimentally, they find that highly embedded edges are over-represented in both the instantaneous and aggregate backbones. These are edges with high rates of communication, so their presence in the backbone leads to fast information diffusion. They also observe that increasing node-dependent delays (delays ε introduced at nodes, in addition to the edge delays δ_{v,w}) leads to denser backbones; as that happens, the significance of quick indirect paths diminishes. They note that a practical way for individuals to influence the potential information flow is to vary their communication rates by simple rules. [ ] study the impact of user homophily on information diffusion using Twitter data. They hypothesize that homophily affects the core mechanism behind social information propagation, by structuring individuals' ego-networks and impacting their communication behavior. They follow a three-step approach.
First, for the full social graph (the baseline) and for graphs filtered by attributes such as activity behavior and location, they extract diffusion characteristics along user-based (volume, number of seeds), topology-based (reach, spread) and temporal (rate) categories. Second, they propose a dynamic Bayesian network to predict information diffusion in future time slices. Third, they quantify the impact of homophily by how well the predicted characteristics explain the ground truth of observed information diffusion. They empirically find that the cases where homophily was considered explain information diffusion and external trends with 15%-25% lower distortion than the cases where it was not. They consider a set of social actions O = {O_1, O_2, ...} (such as posting a tweet) and a set of attributes A = {a_k} (location, organization, etc.). They consider four user attributes: location, information role (generators, mediators, receptors), content creation (users making self-related posts versus informers), and activity behavior (actions performed on the social network over a given time period). A pair of users is homophilous if at least one of their attributes matches more than the random expectation of a match in the network. They construct an induced subgraph G(a_k = v) of G by selecting the vertices in which the attribute a_k ∈ A takes the value v; an edge of G is included in G(a_k) when both its endpoint vertices are included in G(a_k). The authors define s_N(θ), a "diffusion series on topic θ over time slices t_1 to t_N, as a directed acyclic graph", in which the vertices correspond to the subset of social network users involved in social action O_r on topic θ between times t_1 and t_N. Vertices are assigned to slots: all vertices associated with time slice t_m (t_1 < t_m < t_N) are assigned slot l_m. They subsequently characterize diffusion.
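The attribute-induced subgraph construction G(a_k = v) described above can be sketched as follows; the data layout (edge list plus per-user attribute dicts) is an assumption for illustration.

```python
def induced_subgraph(edges, attrs, a_k, value):
    """G(a_k = v): keep the vertices whose attribute a_k equals `value`,
    and the edges of G with both endpoints kept."""
    keep = {u for u, a in attrs.items() if a.get(a_k) == value}
    kept_edges = [(u, w) for u, w in edges if u in keep and w in keep]
    return keep, kept_edges

# Hypothetical users: u1 and u2 share a location attribute value, u3 does not.
attrs = {"u1": {"location": "NY"}, "u2": {"location": "NY"},
         "u3": {"location": "SF"}}
vertices, kept = induced_subgraph([("u1", "u2"), ("u2", "u3")],
                                  attrs, "location", "NY")
```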
They extract diffusion characteristics on θ at time slice t_N from each diffusion collection S_N(θ) (defined as {s_N(θ)}) and {S_{N;a_k}(θ)}, as d_N(θ) and {D_{N;a_k}(θ)} respectively. They use eight measures to quantify diffusion at each given time slice t_N: the volume v_N(θ) of topic θ (the total volume of contagion present in the graph); the participation p_N(θ), the fraction of users who get involved in the information diffusion and trigger further users to diffuse information; the dissemination δ_N(θ), the fraction of users who act as seeds of the information diffusion due to unobservable external influence; the reach r_N(θ), the extent to which topic θ reaches users, measured as a fraction of slots; the spread, the ratio of the maximum count of informed vertices found over all slots in the diffusion collection to the total user count; the cascade instances c_N(θ), the fraction of slots in s_N(θ) ∈ S_N(θ) in which the number of new users at slot l_m is higher than at the previous slot l_{m−1}; the collection size α_N(θ), the ratio of the number of diffusion series to the number of connected components; and the rate γ_N(θ), the speed of information diffusion on θ in S_N(θ). For each diffusion collection S_N(θ) and {S_{N;a_k}(θ)}, they predict at time slice t_N which users have a higher likelihood of repeating a social action at time slice t_{N+1}. This gives the diffusion collections at t_{N+1}: Ŝ_{N+1}(θ) and {Ŝ_{N+1;a_k}(θ)} ∀ a_k ∈ A. They propose a dynamic Bayesian network, and model the likelihood of action O_i at t_{N+1} using environmental features (the activity of a given individual and their friends on a topic θ, and the popularity of topic θ in the previous time slice t_N), represented by F_{i,N}(θ), together with the diffusion collection S_{i,N+1}(θ).
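Three of the eight per-slice measures above (volume, spread, cascade instances) can be toy-sketched on a single diffusion series, represented as the set of newly informed users per slot; the exact normalizations are our assumptions.

```python
def diffusion_measures(slots, total_users):
    """Toy versions of three of the eight per-slice diffusion measures.
    slots: list of sets, the newly informed users in each time slot."""
    volume = sum(len(s) for s in slots)                # total contagion
    spread = max(len(s) for s in slots) / total_users  # peak slot share
    grown = sum(1 for a, b in zip(slots, slots[1:]) if len(b) > len(a))
    cascade = grown / len(slots)                       # growing-slot fraction
    return volume, spread, cascade

# Hypothetical series: 1, then 2, then 1 newly informed users, out of 10.
volume, spread, cascade = diffusion_measures(
    [{"a"}, {"b", "c"}, {"d"}], total_users=10)
```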
The goal is to estimate the expectation of the social actions. Using the first-order Markov property, they rewrite this expectation as a product of probability terms: the observed action and the hidden states. They use the "Viterbi algorithm on the observation-state transition matrix to determine the most likely sequence at t_{N+1}", thus predicting the observed action (the first term). They predict the second term, the hidden states, as P(S_{i,N+1} | S_{i,N}, F_{i,N}). They subsequently substitute the emission probability P(O_{i,N+1} | S_{i,N+1}) and P(S_{i,N+1} | S_{i,N}, F_{i,N}) to estimate the observed action of u_i, Ô_{i,N+1}. They repeat this for each user for time slice t_{N+1}. Using G and G(a_k), they "associate edges between the predicted user set, and the users in each diffusion series for the diffusion collections at t_N". They thus obtain the diffusion collections at t_{N+1}, i.e., Ŝ_{N+1}(θ) and Ŝ_{N+1;a_k}(θ). They measure the distortion between the actual and predicted diffusion characteristics at t_{N+1} using (a) saturation measurement and (b) utility measurement. Intuitively, saturation measurement captures the information content that has diffused into the network on topic θ. Utility measurement, on the other hand, correlates the prediction with external phenomena such as search and world news trends. Using cumulative distribution functions (CDFs) of diffusion volume, they model search and news trend measurements with the Kolmogorov-Smirnov (KS) statistic, given as max(|X − Y|), where X and Y are the two CDFs being compared. BIB011 observe that real-world information can spread in two different ways: (a) over social network connections and (b) via external sources outside the network, such as the mainstream media. They point out that most of the literature assumes that information only passes over social network connections, which may not be entirely accurate. They model information propagation considering that information can reach individuals along both routes.
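The KS statistic used above is simply the largest absolute gap between two CDFs; a minimal sketch over a shared evaluation grid (the grid-based discretization is our simplification):

```python
def empirical_cdf(samples, grid):
    """Empirical CDF of `samples`, evaluated at each point of `grid`."""
    n = len(samples)
    return [sum(1 for s in samples if s <= g) / n for g in grid]

def ks_statistic(x_cdf, y_cdf):
    """Kolmogorov-Smirnov statistic: max |X - Y| over the shared grid."""
    return max(abs(x - y) for x, y in zip(x_cdf, y_cdf))

# Two small hypothetical diffusion-volume samples compared on one grid.
grid = [1, 2, 3, 4]
x = empirical_cdf([1, 2, 2, 3], grid)
y = empirical_cdf([2, 3, 3, 4], grid)
```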
They develop a model parameter fitting technique using hazard functions [Elandt- ], to quantify the level of external exposure and influence. In their setting, the event profile captures the "influence of external sources on the network as a function of time". Over time, nodes receive "streams of varying intensity of external exposures, governed by event profile λ_ext(t)". Each exposure can infect the node; eventually the node either becomes infected, or the arrival of exposures ceases. Neighbors receive exposures from infected nodes. They define the exposure curve η(x), which determines how likely a node is to get infected on the arrival of each exposure, and set out to find the shape of this curve, as well as to infer how many exposures the external sources generate over time. They model internal exposures using an internal hazard function λ_int^{(i)}(t), where i and j are neighbors and "time t has passed since node i was infected". Intuitively, in their setting, λ_int effectively models the time a node takes to register that one of its neighbors has become infected. The "expected number of internal exposures node i receives by time t" can be derived by summing up these exposures. They model exposure to unobserved external information sources, with intensity varying over time, as the event profile λ_ext(t) dt ≡ P(i receives exposure j ∈ [t, t + dt)). The above holds "for any node i, where t is the time elapsed since the current contagion had first appeared in the network". They "model the arrival of exposures as a binomial distribution". Since users receive both internal and external exposures simultaneously, they use the average of λ_ext(t) + λ_int^{(i)}(t) to "approximate the flux of exposures as constant in time, such that each time interval has an equal probability of arrival of exposures". The "sum of these events is a standard binomial random variable".
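One curve with the qualitative properties required of η(x) — η(0) = 0, a single peak of height ρ_1 at x = ρ_2, and an exponential tail — can be sketched as below, together with the infection probability it implies if each exposure infects independently. Both the functional form and the independence assumption are ours, not the authors' fitted model.

```python
import math

def eta(x, rho1, rho2):
    """A hypothetical exposure curve: eta(0) = 0, peak value rho1 at
    x = rho2, exponential tail (illustrative form only)."""
    return rho1 * (x / rho2) * math.exp(1.0 - x / rho2)

def infection_prob_by(x, rho1, rho2):
    """P(node infected within its first x exposures), assuming exposure k
    infects independently with probability eta(k) -- a modelling sketch."""
    p_safe = 1.0
    for k in range(1, x + 1):
        p_safe *= 1.0 - eta(k, rho1, rho2)
    return 1.0 - p_safe
```

Because eta tends to 0 for large x, the survival probability p_safe converges to a positive limit, mirroring the observation below that a node may never become infected.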
If a node receives x exposures, the exposure curve η(x) gives the corresponding infection probability, where ρ_1 ∈ (0, 1] and ρ_2 > 0. Note that η(0) = 0: a node can be infected only after being exposed to a contagion. The function is unimodal with an exponential tail. Hence there exists a critical mass of exposures at which the contagion is most infectious, followed by a decay caused by overexposure/tiresomeness. Importantly, ρ_1 = max_x η(x) measures the infectiousness of a contagion in the network, and ρ_2 = argmax_x η(x) measures the contagion's enduring relevancy. For a given node i, the infection-time distribution can be built as follows. Let F^{(i)}(t) ≡ P(τ_i ≤ t) denote "the probability of node i being infected by time t", where node i has been infected at time τ_i. F^{(i)}(t) is then derived using P^{(i)}_{exp}(n; t). Although F^{(i)}(t) is "analogous to the cumulative distribution function of infection probability", it is "not actually a distribution": lim_{x→∞} η(x) = 0 leads to lim_{t→∞} F(t) < 1. Their model thus ensures that the chance of a node never becoming infected is non-zero, which is realistic. They apply the model to URLs emerging on Twitter. They observe that information jumps across the Twitter network in ways that the social edges cannot explain, and that these jumps are necessarily caused by unobservable external influences. They quantify the information jump, noting that around 71% of information diffuses over the Twitter network, while the other 29% arrives via external events outside it. [ ] create an interactive visualization tool to visually summarize opinion diffusion, at a topic level, using a combination of a Sankey [Sankey 1898] graph and a tailored density map.
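An illustrative exposure curve with the properties stated above: η(0) = 0, unimodal with an exponential tail, and peak value ρ_1 attained at x = ρ_2. The functional form and parameter values here are hypothetical, not the fitted curve from the paper:

```python
import math

def eta(x, rho1=0.02, rho2=8.0):
    """Hypothetical exposure curve: eta(0) = 0, rises to its maximum
    rho1 at x = rho2 (the critical mass of exposures), then decays
    with an exponential tail (overexposure)."""
    return rho1 * (x / rho2) * math.exp(1.0 - x / rho2)

# argmax over integer exposure counts recovers rho2.
peak = max(range(0, 100), key=eta)
```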
Using an information diffusion model that combines reach (the average number of people influenced by messages published by a given user), amplification (the likelihood that the audience responds to a message) and network score (the influence of a user's audience) to measure user influence levels, they characterize the propagation of opinions among many users regarding different topics on social media. BIB006 aim to identify (de-anonymize) users across social networking platforms. They hypothesize that identifying the profiles of users across multiple social networking platforms would provide more insight into the information diffusion process, by observing the diffusion of information over these multiple platforms at a given time. They demonstrate their hypothesis using Twitter and Flickr in combination. BIB012 attempt to predict the spread of ideas on Twitter, combining topological and temporal features with content features to minimize errors. BIB009 empirically study the characteristics of news spreading on several popular social networks, such as Twitter and Digg. [ ] propose a multi-class classification model to identify popular messages on Twitter, by predicting retweet quantities from TF-IDF (term frequency-inverse document frequency) and LDA features, along with social properties of users.
The Role of Influence
Social influence plays a significant role in information diffusion dynamics BIB001 BIB002. Research has attempted to trace information cascades along underlying social connection graphs and analyze the role of influence in such propagation. BIB005 explore influence on Twitter based on indegree, mentions and retweets. They find that individuals with high indegree do not necessarily generate many mentions and retweets. They observe that while the majority of influential users tend to be influential across several topics, influence is gained through concerted effort, such as limiting tweets to a single topic. BIB009 study influencing behavior in terms of cascade spread on Twitter. They find that the past influence of users and the interestingness of content can be used to predict influencers. They observe that although URLs rated interesting, and content by influential users, spread more than average, no reliable method exists for predicting which particular user or URL will generate large cascades. BIB006 study social influence in large-scale networks using a topical sum-product algorithm, and investigate the impact of topics on social influence propagation. ] study the role of passivity and propose a PageRank-like measure of influence on Twitter. ] also propose a PageRank-like measure to quantify influence on Twitter, based on link reciprocity and homophily. BIB013 and BIB014 conduct topic-specific influence analyses for microblogs. [Galuba et al. 2010] characterize the propagation of URLs on Twitter and predict information cascades, factoring in the influence of users on one another. Tracking 2.7 million users exchanging over 15 million URLs, they show that statistical regularities are present in the social graph, user activity, URL cascade structure and communication dynamics.
They examine URL sharing activities such as URL mentions by users in their tweets, URL popularity (how frequently a URL appears in tweets) and user activity (how frequently a user mentions URLs). They define two types of information cascades. In the F-cascade, the flow of URLs is constrained to the follower graph. They draw an edge between a vertex pair v1 and v2 iff: (a) "v1 and v2 tweeted about URL u", (b) "v1 mentioned u before v2", and (c) "v2 is a follower of v1". In the RT-cascade, they use a who-credits-whom model. They disregard the follower graph, and draw an edge between v1 and v2 iff: (a) "v1 tweeted about URL u", (b) "v1 mentioned u before v2", and (c) "v2 credited v1 as the source of u". Using this, they propose a propagation model predicting which URLs are likely to be mentioned by which users. They construct two information diffusion models. The At-Least-One (ALO) model assumes that the influence of a single user is sufficient to cause a user to tweet. The retweet probability in the ALO model is computed from the "baseline probability of user i tweeting any URL" and γ_u ∈ [0, 1], the virality of URL u. Intuitively, A is the probability of one of the following, given that u is a viral URL (γ_u): (a) a followee j (with influence α_{ji}) has influenced user i, who tweeted u with probability p_{uj}, or (b) user i tweets it under the influence of an unobserved entity (or tweets spontaneously). The time-dependent component T is defined using a log-normal distribution, via the complementary error function erfc. The linear threshold (LT) model they propose generalizes over ALO: the cumulative influence from all the followees needs to exceed a per-node threshold they introduce for the user to tweet. The A component is therefore replaced accordingly, with the sigmoid s(x) = 1/(1 + e^{−a(b−x)}) serving as a continuous thresholding function.
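A minimal sketch of the continuous thresholding sigmoid s(x) = 1/(1 + e^{−a(b−x)}) and the LT-style aggregation of followee influence. The parameter values a, b and the followee influence terms are hypothetical; the aggregation folds the α_{ji} weights into the per-followee terms:

```python
import math

def s(x, a=10.0, b=0.5):
    """Continuous thresholding sigmoid s(x) = 1 / (1 + exp(-a(b - x)))
    as given in the text; b plays the role of the per-node threshold
    and a controls the sharpness of the transition."""
    return 1.0 / (1.0 + math.exp(-a * (b - x)))

def cumulative_influence(followee_terms):
    """LT-style aggregation: influences from all followees are summed
    before thresholding (a sketch; alpha_ji weights folded in)."""
    return sum(followee_terms)

x = cumulative_influence([0.1, 0.15, 0.05])  # hypothetical influence terms
prob = s(x)
```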
They optimize parameters by training with an iterative gradient ascent method, and measure the accuracy of prediction of the information (URL) cascades using the F-score, the harmonic mean of precision and recall. BIB010 quantify the causal effect of social networking mediums in disseminating information, by identifying who influences whom, as well as by exploring whether individuals would propagate the same information if the social signals were absent. Performing field experiments on the information sharing behavior of 253 million Facebook subjects who visited the site at least once between August 14 and October 4, 2010, they arrive at two interesting findings. (a) Those exposed to given information on social media are significantly more likely to propagate the information online, and do so sooner than those who are not exposed. (b) While stronger ties are more influential at an individual level, the abundance of weak ties is more responsible for novel information propagation, indicating that weak ties play a dominant role in online information dissemination. Their experiment focuses on finding how much exposure to a URL a user needs on their Facebook feed (a dashboard on the Facebook user pane where the user is presented with information content, with a platform-level capability to share content with others) before sharing the URL, beyond the expected correlations among Facebook friends. Before displaying, they randomly assign subject-URL pairs to feed versus no-feed conditions, such that the number of no-feed pairs is twice the number of feed pairs. Stories assigned the no-feed condition that contain a URL are never displayed on the feed, while those assigned the feed condition are displayed on the user's feed and are never removed. They measure how exposure increases sharing behavior.
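The 2:1 feed/no-feed randomization described above can be sketched as follows. Function and field names are hypothetical; the 1/3 feed probability realizes the design in which no-feed assignments are twice as numerous as feed assignments:

```python
import random

def assign_condition(subject_url_pairs, seed=0):
    """Randomly assign each subject-URL pair to 'feed' or 'no_feed',
    with 'no_feed' twice as likely as 'feed' (a sketch of the
    experimental design, not the production assignment system)."""
    rng = random.Random(seed)
    return {pair: ("feed" if rng.random() < 1 / 3 else "no_feed")
            for pair in subject_url_pairs}

# Toy subject-URL pairs.
pairs = [("user%d" % i, "url%d" % (i % 3)) for i in range(9000)]
assignment = assign_condition(pairs)
n_feed = sum(v == "feed" for v in assignment.values())  # expect ~3000
```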
They find that sharing has a likelihood of 0.191% in the feed condition and 0.025% in the no-feed condition, noting that the likelihood of sharing is 7.37 times higher for those in the feed condition. They observe that links tend to be shared immediately upon exposure by those in the feed condition, whereas those in the no-feed condition share links over a marginally longer time period. They observe that link-sharing probability goes up as more of one's contacts share a given link under the feed condition. On the other hand, in the no-feed condition, a link shared by multiple friends is likely to be shared by a user even if the user has not observed the sharing behavior of those friends. This indicates a mixture of internal influence and external correlation in information (link) sharing behavior. The authors explore the impact of tie strength on the diffusion of information (URL sharing). Studying individuals who have only one friend that has previously shared a link, they observe that in both the feed and no-feed conditions, an individual is more likely to share a link when the friend who shared it happens to be a strong tie. This effect is more prominent in the no-feed condition, indicating that tie strength better predicts activities with external correlation than it predicts influence on the feed. They observe that "individuals are more likely to share content when influenced by their stronger ties on their feed, and share content under such influence that they would not otherwise share". They further observe that the strength of weak ties plays a significant role in consuming and transmitting information that would otherwise be unlikely to be transmitted and exposed to much of the network, which increases the diversity of propagated information. BIB007 propose an approach to model the "global influence of a node on the rate of information diffusion through the underlying social network".
To this end, they propose the Linear Influence Model (LIM), in which the number of newly infected (informed) nodes is modeled as a "function of other nodes infected in the past". For each node, they "estimate an influence function, to model the number of subsequent infections as a function of the other nodes infected in the past". They formulate the model non-parametrically, transforming their setting into a simple least squares problem, which scales to large datasets. They validate the model on 500 million tweets and 170 million news articles and blog posts. They show that node influences are modeled accurately by LIM, and that the temporal dynamics of information diffusion are also predicted reliably. They observe that the influence patterns of participants differ significantly with node type and information topic. In LIM, as information diffuses, a node u is treated as infected from the point in time t_u at which it adopts (first mentions) the information. This makes LIM independent of the underlying network. Volume V(t) is defined in their setting as the "number of nodes mentioning the information at time t". They "model the volume over time as a function of which other nodes have mentioned the information beforehand". They assign a "non-negative influence function" I_u(l) to each node, denoting the number of follow-up mentions l time units beyond the adoption of the information by node u. The volume V(t) then becomes "the sum of properly aligned influence functions of nodes u, at time t_u (t_u < t)": V(t) = Σ_{u ∈ A(t)} I_u(t − t_u), where A(t) is the set of nodes that are "already active (infected, influenced)". They propose two approaches for modeling I_u(l). In a parametric approach, they propose that "I_u(l) would follow a specific parametric form", such as an exponential I_u(l) = c_u e^{−λ_u l} or a power law I_u(l) = c_u l^{−α_u}, with parameters depending on node u.
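The volume computation V(t) = Σ_{u ∈ A(t)} I_u(t − t_u) with discrete influence functions of length L can be sketched as below. The data is a toy example; in the actual model the I_u(l) values are estimated by least squares rather than given:

```python
def lim_volume(infections, influence, t):
    """LIM volume at discrete time t: sum I_u(t - t_u) over nodes u
    already active before t. `infections` maps node -> adoption time t_u;
    `influence[u]` is the discrete influence function, where
    influence[u][l-1] is I_u(l) for l = 1..L (zero beyond L)."""
    total = 0.0
    for u, t_u in infections.items():
        l = t - t_u  # time units since u adopted the information
        if 1 <= l <= len(influence[u]):
            total += influence[u][l - 1]
    return total

infections = {"a": 0, "b": 2}                      # toy adoption times
influence = {"a": [3.0, 2.0, 1.0], "b": [5.0, 1.0, 0.0]}
v3 = lim_volume(infections, influence, 3)          # I_a(3) + I_b(1)
```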
They observe that the drawback of the parametric approach is its over-simplified assumption that all nodes follow the same parametric form. In the non-parametric approach, they do not assume any shape for the influence functions; the appropriate shapes are found by the model estimation procedure. They treat time as a discrete vector of length L (a total of L time slots), where the "l-th value represents the value of I_u(l)". To estimate the LIM model parameters, they start by marking M_{u,k}(t) = 1 if contagion k reached node u at time t, and M_{u,k}(t) = 0 otherwise, where the "volume V_t(k) of contagion k at time t is defined as the number of nodes infected by k at time t". They subsequently refine the model to account for the information recency (novelty) phenomenon: nodes tend to ignore old, obsolete information and adopt recent, novel information. To model how much more or less influential a node is when it mentions the information, they use a multiplicative factor α(t). This yields the α-LIM model. Here "α(t) is the same for all contagions", and is expected to "start low, quickly peak and slowly decay". They note that the "resulting matrix equation is convex in I_u(l) when α(t) is fixed and in α(t) when I_u(l) is fixed". Hence, to estimate the "I_u(l) and T values of vector α(t)", they apply coordinate descent, iterating between "fixing α(t) and solving for I_u(l), and then fixing I_u(l) and solving for α(t)". They also account for imitation, where everyone talks about a popular piece of information, introducing the notion of latent volume: the volume caused by factors other than influence. They add a factor b(t) to model the latent volume, and thereby create the B-LIM model, which is linear in I_u(l) and b(t).

[Yang and Counts 2010] explore three core properties of social network information diffusion, namely speed, scale and range.
They collect Twitter data from July 8, 2009 to August 8, 2009, covering 3,243,437 unique users and 22,241,221 posts. They explore the ongoing social interactions of users on Twitter, as denoted by @username mentions (replies) and retweets, which represent active user interaction. To measure how topics propagate through network structures in Twitter, they construct a diffusion network based on mentions. That is, they create an edge from A to B if B mentions A in her tweet containing a topic C that A had talked about earlier. Thus, they approximate the path of person A diffusing information about topic C. They develop models for speed, scale and range. For the speed analysis, they attempt to understand whether and when followers would be influenced and thereby reply, retweet or otherwise mention the original tweet. They investigate the impact of user and tweet features on the speed of diffusion using a regression model. They observe that "some properties of tweets predict greater information propagation, but user properties, and specifically the rate that a user is historically mentioned, are equal or stronger predictors". For the scale analysis, they attempt to understand how many people in the network mentioned the same topics as the neighbors of the topic originator. They find the number of mentions of a user to be the strongest predictor of both information propagation speed (how quickly a tweet produces an offspring tweet) and scale (the number of offspring tweets a given tweet produces). For the range analysis, they trace topics through the propagation chains and count the number of hops. They observe that the range of information propagation (the number of social hops that information reaches on a diffusion network) is tied to the number of user mentions and to when the tweets come in the observation sequence. Tweets that come later are often seen to be more influential: they travel further over the network.
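A minimal sketch of this mention-based diffusion-network construction follows; the data layout and field names are assumptions for illustration, not the authors' implementation:

```python
def build_diffusion_edges(tweets):
    """Add an edge A -> B if B mentions A in a tweet on a topic
    that A had already tweeted about earlier."""
    # Earliest time each (user, topic) pair was posted about.
    first_post = {}
    for tw in sorted(tweets, key=lambda t: t["time"]):
        first_post.setdefault((tw["user"], tw["topic"]), tw["time"])

    edges = set()
    for tw in tweets:
        for a in tw["mentions"]:
            t_a = first_post.get((a, tw["topic"]))
            if t_a is not None and t_a < tw["time"]:
                edges.add((a, tw["user"]))     # information flowed A -> B
    return edges

tweets = [
    {"user": "A", "time": 1, "topic": "earthquake", "mentions": []},
    {"user": "B", "time": 2, "topic": "earthquake", "mentions": ["A"]},
    {"user": "C", "time": 3, "topic": "earthquake", "mentions": ["B"]},
]
print(build_diffusion_edges(tweets))   # {('A', 'B'), ('B', 'C')}
```

On such a graph, the range of propagation corresponds to the number of hops along chains of these edges, which can be read off by following edges from the topic originator.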
[BIB008] build an influence model using the Flickr social network graph and user action logs. They propose a technique to predict the time within which a given user would be expected to conduct an action. Other studies, such as [BIB004], [BIB003], and others, provide significant insights into the flow of information and influence along social edges, over Twitter user interactions. Further, other research works have attempted to model the influence of content generated by users on content generated by other users. One such work, for instance, explores bloggers' networks for modeling influence propagation. [BIB011] explore the correlation between the sentiments that Twitter users express and their information sharing behavior, experimenting on political communication data. Using data from the 2011 Seoul (Korea) mayoral election, for a particular candidate who had used Twitter extensively, [BIB012] show that, rather than several ideas being shared and circulated, the communication had taken place in the form of aggregation and propagation. The communication pattern structures were fragmented rather than transitive, signifying that during the election period, communication in general had originated from or converged to a single node, and mostly did not circulate through multiple nodes.
|
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> We present two methodologies for the detection of emerging trends in the area of textual data mining. These manual methods are intended to help us improve the performance of our existing fully automatic trend detection system [3]. The first methodology uses citations traces with pruning metrics to generate a document set for an emerging trend. Following this, threshold values are tested to determine the year that the trend emerges. The second methodology uses web resources to identify incipient emerging trends. We demonstrate with a confidence level of 99% that our second approach results in a significant improvement in the precision of trend detection. Lastly we propose the integration of these methods for both the improvement of our existing fully automatic approach as well as in the deployment of our semi-automated CIMEL [20] prototype that employs emerging trends detection to enhance multimedia-based Computer Science education. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. 
We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> Most of the existing document and web search engines rely on keyword-based queries. To find matches, these queries are processed using retrieval algorithms that rely on word frequencies, topic recentness, document authority, and (in some cases) available ontologies. In this paper, we propose an innovative approach to exploring text collections using a novel keywords-by-concepts (KbC) graph, which supports navigation using domain-specific concepts as well as keywords that are characterizing the text corpus. The KbC graph is a weighted graph, created by tightly integrating keywords extracted from documents and concepts obtained from domain taxonomies. Documents in the corpus are associated to the nodes of the graph based on evidence supporting contextual relevance; thus, the KbC graph supports contextually informed access to these documents. In this paper, we also present CoSeNa (Context-based Search and Navigation) system that leverages the KbC model as the basis for document exploration and retrieval as well as contextually-informed media integration. <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> We describe a system that monitors social and mainstream media to determine shifts in what people are thinking about a product or company. We process over 100,000 news articles, blog posts, review sites, and tweets a day for mentions of items (e.g., products) of interest, extract phrases that are mentioned near them, and determine which of the phrases are of greatest possible interest to, for example, brand managers. 
Case studies show a good ability to rapidly pinpoint emerging subjects buried deep in large volumes of data and then highlight those that are rising or falling in significance as they relate to the firms interests. The tool and algorithm improves the signal-to-noise ratio and pinpoints precisely the opportunities and risks that matter most to communications professionals and their organizations. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> In this article we explore the behavior of Twitter users under an emergency situation. In particular, we analyze the activity related to the 2010 earthquake in Chile and characterize Twitter in the hours and days following this disaster. Furthermore, we perform a preliminary study of certain social phenomenons, such as the dissemination of false rumors and confirmed news. We analyze how this information propagated through the Twitter network, with the purpose of assessing the reliability of Twitter as an information source under extreme circumstances. Our analysis shows that the propagation of tweets that correspond to rumors differs from tweets that spread news because rumors tend to be questioned more than news by the Twitter community. This result shows that it is posible to detect rumors by using aggregate analysis on tweets. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> We analyze microblog posts generated during two recent, concurrent emergency events in North America via Twitter, a popular microblogging service. We focus on communications broadcast by people who were "on the ground" during the Oklahoma Grassfires of April 2009 and the Red River Floods that occurred in March and April 2009, and identify information that may contribute to enhancing situational awareness (SA). 
This work aims to inform next steps for extracting useful, relevant information during emergencies using information extraction (IE) techniques. <s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> We present TwitterMonitor, a system that performs trend detection over the Twitter stream. The system identifies emerging topics (i.e. 'trends') on Twitter in real time and provides meaningful analytics that synthesize an accurate description of each topic. Users interact with the system by ordering the identified trends using different criteria and submitting their own description for each trend. We discuss the motivation for trend detection over social media streams and the challenges that lie therein. We then describe our approach to trend detection, as well as the architecture of TwitterMonitor. Finally, we lay out our demonstration scenario. <s> BIB007 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> Twitter, Facebook, and other related systems that we call social awareness streams are rapidly changing the information and communication dynamics of our society. These systems, where hundreds of millions of users share short messages in real time, expose the aggregate interests and attention of global and local communities. In particular, emerging temporal trends in these systems, especially those related to a single geographic area, are a significant and revealing source of information for, and about, a local community. This study makes two essential contributions for interpreting emerging temporal trends in these information systems. First, based on a large dataset of Twitter messages from one geographic area, we develop a taxonomy of the trends present in the data. 
Second, we identify important dimensions according to which trends can be categorized, as well as the key distinguishing features of trends that can be derived from their associated messages. We quantitatively examine the computed features for different categories of trends, and establish that significant differences can be detected across categories. Our study advances the understanding of trends on Twitter and other social awareness streams, which will enable powerful applications and activities, including user-driven real-time information services for local communities. © 2011 Wiley Periodicals, Inc. <s> BIB008 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> Hashtags are used in Twitter to classify messages, propagate ideas and also to promote specific topics and people. In this paper, we present a linguistic-inspired study of how these tags are created, used and disseminated by the members of information networks. We study the propagation of hashtags in Twitter grounded on models for the analysis of the spread of linguistic innovations in speech communities, that is, in groups of people whose members linguistically influence each other. Differently from traditional linguistic studies, though, we consider the evolution of terms in a live and rapidly evolving stream of content, which can be analyzed in its entirety. In our experimental results, using a large collection crawled from Twitter, we were able to identify some interesting aspects -- similar to those found in studies of (offline) speech -- that led us to believe that hashtags may effectively serve as models for characterizing the propagation of linguistic forms, including: (1) the existence of a "preferential attachment process", that makes the few most common terms ever more popular, and (2) the relationship between the length of a tag and its frequency of use. 
The understanding of formation patterns of successful hashtags in Twitter can be useful to increase the effectiveness of real-time streaming search algorithms. <s> BIB009 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> Online social networking websites such as Twitter and Facebook often serve a breaking-news role for natural disasters: these websites are among the first ones to mention the news, and because they are visited by millions of users regularly the websites also help communicate the news to a large mass of people. In this paper, we examine how news about these disasters spreads on the social network. In addition to this, we also examine the countries of the Tweeting users. We examine Twitter logs from the 2010 Philippines typhoon, the 2011 Brazil flood and the 2011 Japan earthquake. We find that although news about the disaster may be initiated in multiple places in the social network, it quickly finds a core community that is interested in the disaster, and has little chance to escape the community via social network links alone. We also find evidence that the world at large expresses concern about such largescale disasters, and not just countries geographically proximate to the epicenter of the disaster. Our analysis has implications for the design of fund raising campaigns through social networking websites. <s> BIB010 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> We present a novel topic modelling-based methodology to track emerging events in microblogs such as Twitter. Our topic model has an in-built update mechanism based on time slices and implements a dynamic vocabulary. 
We first show that the method is robust in detecting events using a range of datasets with injected novel events, and then demonstrate its application in identifying trending topics in Twitter. <s> BIB011 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> As social media continue to grow, the zeitgeist of society is increasingly found not in the headlines of traditional media institutions, but in the activity of ordinary individuals. The identification of trending topics utilises social media (such as Twitter) to provide an overview of the topics and issues that are currently popular within the online community. In this paper, we outline methodologies of detecting and identifying trending topics from streaming data. Data from Twitter's streaming API was collected and put into documents of equal duration using data collection procedures that allow for analysis over multiple timespans, including those not currently associated with Twitter-identified trending topics. Term frequency-inverse document frequency analysis and relative normalised term frequency analysis were performed on the documents to identify the trending topics. Relative normalised term frequency analysis identified unigrams, bigrams, and trigrams as trending topics, while term frequency-inverse document frequency analysis identified unigrams as trending topics. Application of these methodologies to streaming data resulted in F-measures ranging from 0.1468 to 0.7508. <s> BIB012 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> Microblog services have emerged as an essential way to strengthen the communications among individuals and organizations. 
These services promote timely and active discussions and comments towards products, markets as well as public events, and have attracted a lot of attentions from organizations. In particular, emerging topics are of immediate concerns to organizations since they signal current concerns of, and feedback by their users. Two challenges must be tackled for effective emerging topic detection. One is the problem of real-time relevant data collection and the other is the ability to model the emerging characteristics of detected topics and identify them before they become hot topics. To tackle these challenges, we first design a novel scheme to crawl the relevant messages related to the designated organization by monitoring multi-aspects of microblog content, including users, the evolving keywords and their temporal sequence. We then develop an incremental clustering framework to detect new topics, and employ a range of content and temporal features to help in promptly detecting hot emerging topics. Extensive evaluations on a representative real-world dataset based on Twitter data demonstrate that our scheme is able to characterize emerging topics well and detect them before they become hot topics. <s> BIB013 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> Among the vast information available on the web, social media streams capture what people currently pay attention to and how they feel about certain topics. Awareness of such trending topics plays a crucial role in multimedia systems such as trend aware recommendation and automatic vocabulary selection for video concept detection systems. Correctly utilizing trending topics requires a better understanding of their various characteristics in different social media streams. 
To this end, we present the first comprehensive study across three major online and social media streams, Twitter, Google, and Wikipedia, covering thousands of trending topics during an observation period of an entire year. Our results indicate that depending on one's requirements one does not necessarily have to turn to Twitter for information about current events and that some media streams strongly emphasize content of specific categories. As our second key contribution, we further present a novel approach for the challenging task of forecasting the life cycle of trending topics in the very moment they emerge. Our fully automated approach is based on a nearest neighbor forecasting technique exploiting our assumption that semantically similar topics exhibit similar behavior. We demonstrate on a large-scale dataset of Wikipedia page view statistics that forecasts by the proposed approach are about 9-48k views closer to the actual viewing statistics compared to baseline methods and achieve a mean average percentage error of 45-19% for time periods of up to 14 days. <s> BIB014 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> Twitter has become as much of a news media as a social network, and much research has turned to analyzing its content for tracking real-world events, from politics to sports and natural disasters. This paper describes the techniques we employed for the SNOW Data Challenge 2014, described in [Pap14]. We show that aggressive filtering of tweets based on length and structure, combined with hierarchical clustering of tweets and ranking of the resulting clusters, achieves encouraging results. We present empirical results and discussion for two different Twitter streams focusing on the US presidential elections in 2012 and the recent events about Ukraine, Syria and the Bitcoin, in February 2014. <s> BIB015
|
Trend discovery from digital media text has long been a research problem of significant scientific interest, and remains an active area [BIB012] [BIB004] [BIB001]. Trend and topic propagation is one of the key factors associated with information diffusion on online social networks, and identifying topics and trends successfully helps in solving several practical problems. Natural disaster analysis and recovery is one such area, explored by [BIB005] and [BIB010]. [BIB006] empirically explore how Twitter can contribute to situational awareness, over two natural hazard events, namely the Oklahoma Grassfires of April 2009 and the Red River Floods of March and April 2009. Early identification of the topics customers discuss online can help organizations better understand and grow their products and services, as well as control damage early [BIB013]. Of late, one of the key areas within this research field has been the detection of topics and trends in microblogs such as Twitter, whose posts are often associated with one topic or a few related topics. A number of research studies, predominantly since 2010, attempt to identify trends and topics and watch them evolve and spread in social networks.

[BIB007] present one of the early research works in detecting Twitter trends in real time and analyzing the lifecycle of the trends. They define bursty keywords as "keywords that suddenly appear in tweets at unusually high rates". Subsequently, they define a trend as a "set of bursty keywords frequently occurring together in tweets". Their system, TwitterMonitor, follows a two-step Twitter trend detection mechanism, with a third step for analyzing the detected trends. In the first step, they identify keywords suddenly appearing in tweets at unusually high rates, namely the bursty keywords. In order to identify bursty keywords effectively, they propose an algorithm named QueueBurst, based on queuing theory.
The QueueBurst algorithm reads streaming data in one pass and detects bursty keywords in real time. It protects against spam and spurious bursts, where, by coincidence, a keyword appears in several tweets within a short time period. In the second step, they group the bursty keywords into trends based upon co-occurrences of the keywords. They compute a set of bursty keywords K_t at every time instant t, the members of which can possibly be part of a trend (or even the same trend). They periodically group the keywords k ∈ K_t into disjoint subsets K_t^i of K_t, so that all keywords in the same subset are grouped under the same discussion topic. Each subset K_t^i thus identifies a trend: a group of bursty keywords that frequently occur together. By identifying more keywords related to a given trend using content extraction algorithms, identifying frequently cited news sources and adding such sources to the trend description, and exploiting the geographical locality of the tweets contributing to the identified trends (for instance, for a Thanksgiving trend in Canada, a large proportion of the tweets will likely originate from Canada), they produce a chart illustrating the evolution of the trend's popularity during its lifecycle.

[BIB011] propose a methodology for online topic modeling, for tracking emerging events on Twitter, that accounts for the constant evolution of topics over time and is amenable to dynamic changes in vocabulary. To this end, they propose an online variant of the traditional LDA [BIB002] method, enhanced with P(z|w), the "posterior distribution over assignments of words to topics", of [Griffiths and Steyvers 2004]. The online version of LDA they propose processes the inputs and periodically updates the model. It produces topics comparable across different periods, which enables measuring topic shifts. Further, the size of the topics does not grow with time.
They summarize the traditional LDA along with the incorporation of the Griffiths and Steyvers [Griffiths and Steyvers 2004] methodology. The works in this space can be summarized as follows.

[BIB011] Experiments with injecting novel events on-the-fly, and shows that the model is capable of detecting topics under such settings.

[BIB008] Creates a taxonomy of geographical area-specific trends, based upon Twitter messages collected from the given areas. Identifies significant dimensions to enable trend categorization, as well as distinguishing features of trends. Empirically establishes the existence of significant differences in computed features for different trend categories.

[BIB015] Filters tweets based on the length and structure of the messages, removing noisy tweets and vocabulary. Combines this with hierarchical tweet clustering, dynamic dendrogram cutting and ranking of the clusters. Computes the pairwise distance of tweets by normalizing the tweet-term matrix and applying cosine similarity, and feeds the output into clustering. Selects the first tweet in each of the first 20 clusters as topic headlines, and re-clusters the headlines to avoid topic fragmentation. Shows that length- and structure-based aggressive filtering of tweets, combined with clustering the tweets hierarchically and ranking the resulting clusters, works well for detecting and labeling events.

[BIB003] Proposes a real-time technique for detecting emergent topics expressed by communities. Analyzes the authority of the content source using PageRank, and models term life cycles using an aging technique. Experiments with two days of Twitter data, and identifies the top 5 emergent terms at a given time slot to demonstrate an example of the model output.

[BIB009] Studies the propagation and dynamic evolution of hashtags. Motivated by the concept of linguistic innovation that models language transformation, it defines hashtag innovation as a transformation of the hashtag. Observes that individuals seeking to categorize their message with a term not yet used for this purpose tend to create new hashtags. Observes the rich-getting-richer phenomenon: a few hashtags tend to attract most of the attention.

Another work models information flow over event clusters on social media. It identifies social discussion threads by identifying social and content-based connections across event clusters, and applying temporal filters on these clusters. It shows that topical discussions grow and evolve along social connections over time, rather than at random.

[BIB014] Uses historical time series data from multiple semantically similar topics to forecast the lifecycle of trending topics as they emerge. Uses nearest-neighbor sequence matching, considering historical events that occurred over a similar time span. Studies Twitter, Google, and Wikipedia, three primary online and social media streams, over thousands of topics and an entire year, observing the emerging trends to empirically validate the approach.
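The bursty-keyword-and-co-occurrence pattern behind several of these trend detectors (e.g., the two TwitterMonitor steps described earlier) can be sketched roughly as follows. The burst test here is a simple frequency-ratio threshold, a deliberate simplification standing in for the queueing-theory-based QueueBurst, and all thresholds and data are hypothetical:

```python
from collections import Counter
from itertools import combinations

def bursty_keywords(current, history, min_count=3, ratio=5.0):
    """Keywords appearing at an unusually high rate in the current window
    relative to history -- a crude stand-in for the QueueBurst test."""
    cur, hist = Counter(), Counter()
    for tw in current:
        cur.update(set(tw))                 # tweets as lists of keywords
    for tw in history:
        hist.update(set(tw))
    n_cur, n_hist = max(len(current), 1), max(len(history), 1)
    return {w for w, c in cur.items()
            if c >= min_count and c / n_cur > ratio * hist[w] / n_hist}

def group_into_trends(current, bursty, min_cooccur=2):
    """Group bursty keywords that frequently occur together in tweets:
    each resulting group corresponds to one trend."""
    pair_counts = Counter()
    for tw in current:
        for a, b in combinations(sorted(bursty & set(tw)), 2):
            pair_counts[(a, b)] += 1
    groups = {w: {w} for w in bursty}       # start with singleton groups
    for (a, b), c in pair_counts.items():
        if c >= min_cooccur:                # merge frequently co-occurring keywords
            merged = groups[a] | groups[b]
            for w in merged:
                groups[w] = merged
    return {frozenset(g) for g in groups.values()}

history = [["weather", "coffee"]] * 20      # background chatter
current = [["quake", "chile"]] * 4 + [["coffee"]] * 4
trends = group_into_trends(current, bursty_keywords(current, history))
print(trends)                               # {frozenset({'quake', 'chile'})}
```

"coffee" is frequent in the current window too, but because it is equally frequent in history it is not bursty; only the genuinely sudden keywords survive and are merged into a single trend by co-occurrence.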
|
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> An interval-based temporal logic is introduced, together with a computationally effective reasoning algorithm based on constraint propagation. This system is notable in offering a delicate balance between <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a mathod for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> Topic Detection and Tracking (TDT) is a research initiative that aims at techniques to organize news documents in terms of news events. We propose a method that incorporates simple semantics into TDT by splitting the term space into groups of terms that have the meaning of the same type. Such a group can be associated with an external ontology. This ontology is used to determine the similarity of two terms in the given group. We extract proper names, locations, temporal expressions and normal terms into distinct sub-vectors of the document representation. Measuring the similarity of two documents is conducted by comparing a pair of their corresponding sub-vectors at a time. 
We use a simple perceptron to optimize the relative emphasis of each semantic class in the tracking and detection decisions. The results suggest that the spatial and the temporal similarity measures need to be improved. Especially the vagueness of spatial and temporal terms needs to be addressed. <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human-andmachine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> Most of the existing document and web search engines rely on keyword-based queries. To find matches, these queries are processed using retrieval algorithms that rely on word frequencies, topic recentness, document authority, and (in some cases) available ontologies. In this paper, we propose an innovative approach to exploring text collections using a novel keywords-by-concepts (KbC) graph, which supports navigation using domain-specific concepts as well as keywords that are characterizing the text corpus. 
The KbC graph is a weighted graph, created by tightly integrating keywords extracted from documents and concepts obtained from domain taxonomies. Documents in the corpus are associated to the nodes of the graph based on evidence supporting contextual relevance; thus, the KbC graph supports contextually informed access to these documents. In this paper, we also present CoSeNa (Context-based Search and Navigation) system that leverages the KbC model as the basis for document exploration and retrieval as well as contextually-informed media integration. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> With the recent rise in popularity and size of social media, there is a growing need for systems that can extract useful information from this amount of data. We address the problem of detecting new events from a stream of Twitter posts. To make event detection feasible on web-scale corpora, we present an algorithm based on locality-sensitive hashing which is able overcome the limitations of traditional approaches, while maintaining competitive results. In particular, a comparison with a state-of-the-art system on the first story detection task shows that we achieve over an order of magnitude speedup in processing time, while retaining comparable performance. Event detection experiments on a collection of 160 million Twitter posts show that celebrity deaths are the fastest spreading news on Twitter. <s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> Twitter is a user-generated content system that allows its users to share short text messages, called tweets, for a variety of purposes, including daily conversations, URLs sharing and information news. 
Considering its world-wide distributed network of users of any age and social condition, it represents a low level news flashes portal that, in its impressive short response time, has the principal advantage. In this paper we recognize this primary role of Twitter and we propose a novel topic detection technique that permits to retrieve in real-time the most emergent topics expressed by the community. First, we extract the contents (set of terms) of the tweets and model the term life cycle according to a novel aging theory intended to mine the emerging ones. A term can be defined as emerging if it frequently occurs in the specified time interval and it was relatively rare in the past. Moreover, considering that the importance of a content also depends on its source, we analyze the social relationships in the network with the well-known Page Rank algorithm in order to determine the authority of the users. Finally, we leverage a navigable topic graph which connects the emerging terms with other semantically related keywords, allowing the detection of the emerging topics, under user-specified time constraints. We provide different case studies which show the validity of the proposed approach. <s> BIB007 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> Streaming user-generated content in the form of blogs, microblogs, forums, and multimedia sharing sites, provides a rich source of data from which invaluable information and insights maybe gleaned. Given the vast volume of such social media data being continually generated, one of the challenges is to automatically tease apart the emerging topics of discussion from the constant background chatter. Such emerging topics can be identified by the appearance of multiple posts on a unique subject matter, which is distinct from previous online discourse. 
We address the problem of identifying emerging topics through the use of dictionary learning. We propose a two stage approach respectively based on detection and clustering of novel user-generated content. We derive a scalable approach by using the alternating directions method to solve the resulting optimization problems. Empirical results show that our proposed approach is more effective than several baselines in detecting emerging topics in traditional news story and newsgroup data. We also demonstrate the practical application to social media analysis, based on a study on streaming data from Twitter. <s> BIB008 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> Twitter, Facebook, and other related systems that we call social awareness streams are rapidly changing the information and communication dynamics of our society. These systems, where hundreds of millions of users share short messages in real time, expose the aggregate interests and attention of global and local communities. In particular, emerging temporal trends in these systems, especially those related to a single geographic area, are a significant and revealing source of information for, and about, a local community. This study makes two essential contributions for interpreting emerging temporal trends in these information systems. First, based on a large dataset of Twitter messages from one geographic area, we develop a taxonomy of the trends present in the data. Second, we identify important dimensions according to which trends can be categorized, as well as the key distinguishing features of trends that can be derived from their associated messages. We quantitatively examine the computed features for different categories of trends, and establish that significant differences can be detected across categories. 
Our study advances the understanding of trends on Twitter and other social awareness streams, which will enable powerful applications and activities, including user-driven real-time information services for local communities. © 2011 Wiley Periodicals, Inc. <s> BIB009 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> Hashtags are used in Twitter to classify messages, propagate ideas and also to promote specific topics and people. In this paper, we present a linguistic-inspired study of how these tags are created, used and disseminated by the members of information networks. We study the propagation of hashtags in Twitter grounded on models for the analysis of the spread of linguistic innovations in speech communities, that is, in groups of people whose members linguistically influence each other. Differently from traditional linguistic studies, though, we consider the evolution of terms in a live and rapidly evolving stream of content, which can be analyzed in its entirety. In our experimental results, using a large collection crawled from Twitter, we were able to identify some interesting aspects -- similar to those found in studies of (offline) speech -- that led us to believe that hashtags may effectively serve as models for characterizing the propagation of linguistic forms, including: (1) the existence of a "preferential attachment process", that makes the few most common terms ever more popular, and (2) the relationship between the length of a tag and its frequency of use. The understanding of formation patterns of successful hashtags in Twitter can be useful to increase the effectiveness of real-time streaming search algorithms. 
<s> BIB010 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> Among the vast information available on the web, social media streams capture what people currently pay attention to and how they feel about certain topics. Awareness of such trending topics plays a crucial role in multimedia systems such as trend aware recommendation and automatic vocabulary selection for video concept detection systems. Correctly utilizing trending topics requires a better understanding of their various characteristics in different social media streams. To this end, we present the first comprehensive study across three major online and social media streams, Twitter, Google, and Wikipedia, covering thousands of trending topics during an observation period of an entire year. Our results indicate that depending on one's requirements one does not necessarily have to turn to Twitter for information about current events and that some media streams strongly emphasize content of specific categories. As our second key contribution, we further present a novel approach for the challenging task of forecasting the life cycle of trending topics in the very moment they emerge. Our fully automated approach is based on a nearest neighbor forecasting technique exploiting our assumption that semantically similar topics exhibit similar behavior. We demonstrate on a large-scale dataset of Wikipedia page view statistics that forecasts by the proposed approach are about 9-48k views closer to the actual viewing statistics compared to baseline methods and achieve a mean average percentage error of 45-19% for time periods of up to 14 days. 
<s> BIB011 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> Twitter has become as much of a news media as a social network, and much research has turned to analyzing its content for tracking real-world events, from politics to sports and natural disasters. This paper describes the techniques we employed for the SNOW Data Challenge 2014, described in [Pap14]. We show that aggressive filtering of tweets based on length and structure, combined with hierarchical clustering of tweets and ranking of the resulting clusters, achieves encouraging results. We present empirical results and discussion for two different Twitter streams focusing on the US presidential elections in 2012 and the recent events about Ukraine, Syria and the Bitcoin, in February 2014. <s> BIB012
|
Here n(d,t) and n(t,w) respectively denote the assignment counts of topic t in document d and of word w to topic t, excluding the current assignment z. To transform the method into an online (streaming) one, they propose a model that can process the input and update itself periodically. They use time slices k_t, and a "sliding window L that retains documents for a given number of previous time slices". As time slice k_{t+1} arrives, they "resample topic assignments z for all documents in window L" to update the model, using the θ and φ values from the earlier model in time slice k_t to serve as "Dirichlet priors α′ and β′ in the evolved model in time slice k_{t+1}". They introduce a contribution factor c (0 ≤ c ≤ 1) to "enable their model to have a set of constantly evolving topics", where c = 0 indicates that the model is run without any parameter learned previously. The time window ensures that their topic model remains sensitive to topic changes with time. To accommodate a dynamic vocabulary, they remove words falling below a frequency threshold and add new words satisfying the threshold, along time slices. For previously seen documents and words, the "Dirichlet priors α′ and β′ in the new model in time slice k_{t+1}" are given by: α′_dt = n(d,t) × (D_old × T × α_0) / N_old and β′_tw = n(t,w) × (T × W_new × β_0) / N_old. For new documents and words, they are calculated as α′_dt = α_0 and β′_tw = β_0. Here α′_dt and β′_tw are the priors for topic t in document d and word w in topic t respectively, n(d,t) and n(t,w) are the number of assignments in the earlier model of time slice k_t, and "D_old, N_old and W_new are respectively the number of documents previously processed, the number of tokens in those documents and the vocabulary size, in time window L". They normalize to maintain a "constant sum of priors across different processing batches", i.e., ∑α′ = ∑α = D × T × α_0 and ∑β′ = ∑β = T × W × β_0.
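A minimal sketch of this prior hand-over between time slices is shown below. The exact way the contribution factor c enters the paper's formulas is not spelled out in the survey, so the linear blend used here is an assumption (chosen so that c = 0 reduces to the base priors and the prior mass stays constant); the function name `evolve_priors` and its argument layout are illustrative.

```python
def evolve_priors(n_dt, n_tw, alpha0, beta0, c, D_old, T, W_new, N_old):
    """Carry topic/word assignment counts from slice k_t forward as
    Dirichlet priors for slice k_{t+1}.

    n_dt[(d, t)] and n_tw[(t, w)] are assignment counts from the earlier
    model; c is the contribution factor (c = 0 ignores the earlier model).
    When the counts cover all document-topic / topic-word pairs, the blend
    preserves sum(alpha') = D*T*alpha0 and sum(beta') = T*W*beta0.
    """
    alpha = {dt: c * n * (D_old * T * alpha0) / N_old + (1 - c) * alpha0
             for dt, n in n_dt.items()}
    beta = {tw: c * n * (T * W_new * beta0) / N_old + (1 - c) * beta0
            for tw, n in n_tw.items()}
    return alpha, beta
```

Summing either prior over all pairs recovers the constant-mass constraints stated above, which is the property the authors rely on across processing batches.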
For tracking emerging events, they measure the shift (degree of change) in the topic model "between the word distribution of each topic t before and after an update", i.e., the evolution of each topic, using Jensen-Shannon (JS) divergence. If the shift exceeds a threshold, they classify a topic as novel. They demonstrate their model using synthetic datasets on Twitter, by mixing a real-life Twitter data stream (not annotated) with TREC Topic Detection and Tracking (TDT) corpus (annotated) data. For experiments, they collect data using Twitter's streaming API from September 2011 to January 2012, comprising 12 million tweets spanning 1.39 million users. They also apply their model to "a series of Twitter feeds, to detect topics popular in specific locations". For experiments, the length of a time slice and the window size are respectively set to 1 day and 2 days. They find the detected popular topics to closely follow local and global news events. They observe that topics expressed as multinomial distributions over terms are more descriptive compared to strings or single hashtags. Thus, they show that their model is capable of detecting emerging topics under such settings. [BIB006] create a locality-sensitive hashing technique to detect new events from a stream of posts in Twitter. Their approach is empirically shown to be an order of magnitude faster than the state-of-the-art, while retaining performance. [BIB008] use dictionary learning to detect emerging topics on Twitter. They use a two-stage approach to detect and cluster new content generated by users. They apply their system on streaming data, showing the effectiveness of their approach. A follow-up work uses the approach of [BIB006], but filters using Wikipedia, reducing the number of spurious topics that often get detected by topic detection systems, and empirically shows that events within Wikipedia tend to lag behind Twitter. [BIB009] characterize emerging trends on Twitter.
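The novelty test described above (JS divergence between a topic's word distribution before and after an update, compared against a threshold) can be sketched as follows; the threshold value of 0.5 is illustrative, not the paper's.

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (base-2) between two word distributions,
    given as dicts mapping word -> probability."""
    words = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in words}

    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability words.
        return sum(a[w] * math.log(a[w] / b[w], 2)
                   for w in a if a.get(w, 0.0) > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def is_novel(before, after, threshold=0.5):
    """Flag a topic as novel if its word distribution shifted beyond the
    threshold after a model update (threshold is an assumed value)."""
    return js_divergence(before, after) > threshold
```

With base-2 logarithms the divergence lies in [0, 1]: identical distributions score 0, disjoint ones score 1, which makes the threshold easy to interpret.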
They develop a taxonomy of geographical area-specific trends, based upon Twitter messages collected from the given geographic areas. They denote the Twitter-given trends as T_tw, collecting Twitter's local trending terms. They identify the highest trending terms using a message set and term frequency (tf) pair, such that the message set contains at least 100 messages. They identify bursts via terms that appear more frequently than expected in a given message set, within a given time period. They score a term by subtracting its expected number of occurrences from its observed occurrence count. They retain each term that would score in the top 30 for a given day in a given week, for a sufficiently large number of hours. They assemble the scores to assign a score to each bursty trend comprising a set of such terms. They add these terms to T_tw, and pick the top 1,500 trends to form T_tf. The authors run qualitative and quantitative analyses for a selective (random) subset of T_tw and T_tf, as they observe that computing on the whole would be prohibitively expensive. They select trends that: (a) reflect the trend diversity present in the source sets, and (b) are human-interpretable, inspecting the associated Twitter messages. They take a set union of the selected trends, denoted as T, and split it into two subsets, T_Qual and T_Quant, to perform qualitative and quantitative analysis respectively. They associate tweet messages with trends by aligning the messages with trend peak times and the surrounding 72 hours before and after. They observe M_t = 1350 in T_Quant, that is, on average 1,350 tweet messages are associated with each trend t. They broadly classify trends into two types: exogenous trends that capture activities, interests and events originating outside Twitter, and endogenous trends that capture Twitter-only events that do not exist outside Twitter.
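The burst-scoring step above (observed count minus expected count, then keeping top-scoring terms) can be sketched as below; the function names and the baseline dictionary are illustrative, and the expected counts would in practice come from historical averages.

```python
def burst_scores(counts, expected):
    """Score each term by how far its observed count in the current
    window exceeds its expected count (e.g. a historical average).
    Large positive scores indicate bursty terms."""
    return {t: counts.get(t, 0) - expected.get(t, 0.0) for t in counts}

def top_bursty(counts, expected, k=30):
    """Return the k highest-scoring (burstiest) terms, mirroring the
    paper's retention of terms scoring in the top 30."""
    scores = burst_scores(counts, expected)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```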
Exogenous trends comprise global news events, broadcast media events, national holidays, memorable days and local participation-based (physical) events, while endogenous trends comprise retweets, memes and activities of fan communities. To characterize the two types of trends, they derive different types of features. These include 7 content features based upon the content of messages in M_t, 3 interaction features based upon the @-username interactions amongst users, 4 time-based features that vary across trends and capture the temporal patterns of information spread, 3 participation features based upon authorship of messages associated with given trends, and 7 social network features built upon the followers of each message, for messages belonging to M_t. They empirically establish the existence of significant differences in the set of features for different categories of trends. They show that exogenous trends have higher URL proportions, smaller hashtag proportions, fewer retweets, fewer social connections between authors and different (temporal) head periods compared to endogenous trends. They show that breaking news has more retweets (forwards), fewer replies (conversations) and more rapid temporal growth compared to other exogenous trends, as well as different social network features. They notice local events to have denser social networks, higher connectivity, more social reciprocity, and more replies, compared to other exogenous trends. They further notice memes to have higher connectivity and more reciprocity compared to retweet trends, for endogenous events. [BIB012] adopt a two-step approach. One, they conduct aggressive filtering of tweets and terms, in order to remove noisy tweets and restrict the vocabulary. They normalize tweet text and remove user mentions, URLs, digits, hashtags and punctuation. They tokenize by whitespace, remove stopwords, and append hashtags, user mentions and de-noised text tokens.
From the tweets thus obtained, they remove the tweets with (a) more than 2 user mentions, or (b) more than 2 hashtags, or (c) less than 4 tokens. The intuition is to eliminate tweets with too many user mentions or hashtags but too little clean information content (text). Effectively, this acts as noise elimination. For vocabulary filtering, they remove user mentions, and retain bi-grams and tri-grams that are present in at least a threshold (10) number of tweets. They subsequently retain tweets with at least 5 in-vocabulary words, in order to keep tweets that can be meaningfully clustered and eliminate tweets with little vocabulary. Two, they combine this with hierarchical tweet clustering, dynamic dendrogram cutting and ranking of the clusters. They compute the pairwise distance of tweets by normalizing the tweet-term matrix and applying cosine similarity. They perform topic-based clustering of tweets using the distance thus obtained. They cut the resulting dendrogram at an empirically fixed value of 0.5, avoiding too tight or too loose clusters and topic fragmentation. They rank the resulting clusters. They observe that ranking the clusters by size, and labeling these clusters as trending topics, does not yield good results, as the topics are casual and repetitive, and by inspection appear unlikely to make news headlines. As an alternative approach, they use the df-idf_t formula of prior work, which approximates the current window term frequency by the average term frequency of the past t time windows. For experiments, they set the history size t = 4. They assign a high weight to the idf_t term for recognized named entities, as they observe that such assignments tend to retrieve more news-like topics. They select the first tweet of each of the first 20 ranked clusters as the headline of the topics detected. They re-cluster the headlines to avoid topic fragmentation.
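The filtering rules and the cosine distance used for clustering can be sketched as follows. The thresholds (2 mentions, 2 hashtags, 4 tokens) are those reported above; the tiny stopword list and the regular expressions are illustrative simplifications, and a real pipeline would plug the distance into a hierarchical clustering routine with a 0.5 dendrogram cut.

```python
import math
import re

STOPWORDS = {"the", "a", "an", "is", "to", "rt", "and", "of"}  # toy list

def filter_tweet(text):
    """Apply the aggressive filtering step: drop tweets with more than 2
    mentions, more than 2 hashtags, or fewer than 4 clean tokens.
    Returns the kept tokens (with hashtags appended), or None."""
    mentions = re.findall(r"@\w+", text)
    hashtags = re.findall(r"#\w+", text)
    clean = re.sub(r"(@\w+|#\w+|https?://\S+|\d+|[^\w\s])", " ", text.lower())
    tokens = [t for t in clean.split() if t not in STOPWORDS]
    if len(mentions) > 2 or len(hashtags) > 2 or len(tokens) < 4:
        return None
    return tokens + [h.lower() for h in hashtags]

def cosine_distance(a, b):
    """1 - cosine similarity of two token lists (bag-of-words counts),
    the pairwise distance fed to the hierarchical clustering."""
    va, vb = {}, {}
    for t in a:
        va[t] = va.get(t, 0) + 1
    for t in b:
        vb[t] = vb.get(t, 0) + 1
    dot = sum(va[t] * vb.get(t, 0) for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return 1.0 - dot / (na * nb) if na and nb else 1.0
```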
They finally present the raw tweet content of the headline (without URLs) with the earliest publication time, as the final topic headline. [BIB007] propose a real-time technique for detecting emergent topics expressed by communities. They define a term as a topic, and define a topic as emerging if it occurs frequently in a specified time interval but was relatively rare in the past. They extract the tweet content in the form of term vectors with relative frequencies. For this, they associate a tweet vector tw_j with each tweet tw_j to capture all the knowledge expressed by the tweet, where each vector component represents a weighted term extracted from tw_j. They retain all keywords, and attempt to highlight keywords that are potentially of high relevance for a topic but appear less frequently. Tweet vector tw_j is defined as tw_j = {w_{j,1}, w_{j,2}, ..., w_{j,v}}, where K_t is the corpus vocabulary in time interval I_t, the vocabulary size is v = |K_t| and the x-th term of the vocabulary in the j-th post has a weight w_{j,x}. Based on the social relationships of active users (content authors), they define a directed graph and compute user authority using PageRank [BIB002]. For each topic (term), they model the topic lifecycle using an aging technique, leveraging the authority of users, thereby studying its usage in a specific interval of time. Each tweet provides nutrition to the contained words, depending upon the authority of the user who made the tweet. Using keyword k ∈ K_t and the tweet set TW^t_k ⊆ TW^t having term k in the time interval I_t, the amount of nutrition is defined as nutr^t_k = ∑_{tw_j ∈ TW^t_k} w_{k,j} × auth(user(tw_j)). Here w_{k,j} denotes the weight of the term k in tweet vector tw_j, the function user(tw_j) gives the author u of tweet tw_j, and the authority score of user u is auth(u). Thus, they evaluate term usage frequency to quantify term usage behavior, and analyze author influence to qualify term relevance.
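The nutrition computation (each tweet containing the keyword contributes its term weight scaled by its author's authority) can be sketched as below; the tweet and authority data structures are illustrative, and the authority scores would come from PageRank over the follower graph.

```python
def nutrition(keyword, tweets, authority):
    """Nutrition of a keyword in one time interval: sum over tweets
    containing the keyword of (term weight in tweet) * (author's
    authority score), following nutr_k = sum w_{k,j} * auth(user(tw_j))."""
    total = 0.0
    for tweet in tweets:
        w = tweet["weights"].get(keyword, 0.0)
        if w > 0:
            total += w * authority[tweet["user"]]
    return total
```

Keywords used heavily by high-authority users thus accumulate more nutrition than equally frequent keywords used only by low-authority users, which is how the model qualifies term relevance by source.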
They formulate an age-dependent energy of a keyword using the nutrition difference across pairs of time intervals. They define a term as hot if it is used extensively within a given time interval, and as emergent if it is hot in the current time interval but was never hot earlier. Clearly, if a keyword has been hot over more than one time interval, it will not be identified as emergent after the first such interval. They limit the number of previous time slots considered using a threshold. They propose two techniques for selecting the emerging term set within a given time interval: a supervised technique and an unsupervised one. They use the notion of a critical drop [BIB005] in the energy-ranked keyword list to identify emergent topics, and proceed to label topics using a minimal set of keywords. In the supervised setting, the user chooses a permissible threshold for the drop, and EK_t, the set of emerging keywords, comprises the keywords ranked above the critical drop. In the unsupervised model, they set the value of this drop dynamically, by computing the average drop over successive entries for the keywords ranking higher than the maximum drop point detected, and marking the first higher-than-average drop as the critical drop. They define a topic as a "minimal set of terms, related semantically to an emerging keyword". Emerging terms are mapped to emerging topics by studying the semantic relationships amongst the keywords in K_t extracted within interval I_t, using co-occurrence information. They associate with each keyword k a correlation vector cv^t_k, defining the relationships of k with all the other keywords in the interval I_t, in the form of a weighted term set. They create the topic graph TG_t using the correlation vectors, as a directed and weighted graph where the nodes are labeled with the keywords. Using a weight-based adaptive cut-off, they retain only the edges representing the strongest relationships, and discard the rest.
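The unsupervised critical-drop selection can be sketched as follows, under one plausible reading of the procedure (rank keywords by energy, locate the largest drop between successive entries, average the drops up to that point, and cut at the first drop exceeding the average); the function name is illustrative.

```python
def emerging_keywords(energy):
    """Select emerging keywords by the critical-drop heuristic.
    energy maps keyword -> energy score for the current interval."""
    ranked = sorted(energy, key=energy.get, reverse=True)
    if len(ranked) < 2:
        return ranked
    # Drops between successive entries in the energy-ranked list.
    drops = [energy[ranked[i]] - energy[ranked[i + 1]]
             for i in range(len(ranked) - 1)]
    max_i = drops.index(max(drops))
    # Average the drops for entries ranking at or above the max-drop point,
    # then cut at the first drop that meets or exceeds that average.
    head = drops[:max_i + 1]
    avg = sum(head) / len(head)
    for i, d in enumerate(head):
        if d >= avg:
            return ranked[:i + 1]
    return ranked[:max_i + 1]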
They detect emerging topics using the topological structure of TG_t. For this, they discover the strongly connected components that are rooted on the emerging keyword set EK_t in TG_t. They define the subgraph ET^t_z(K_z, E_z, ρ) as the emerging topic related to each emerging keyword z ∈ EK_t. This subgraph comprises a set of keywords that are semantically related to z in time interval I_t. ρ_{k,z} represents "the relative weight of the keyword k in the corresponding vector cv^t_k", that is, the "role of keyword z in context of keyword k". Here, the set of keywords K^t_z that belong to the emerging topic ET^t_z is obtained by "considering as starting point in TG_t the emerging keyword z, but also contains a set of common terms semantically related to z that are not necessarily included in EK_t". Thus they obtain some keywords indirectly correlated with the emerging keywords. They rank the topics in order to identify which topic is more emergent in the interval. Finally, they perform unsupervised keyword ranking to choose the most representative keywords for each cluster. They experiment with Twitter data of 2 days, and identify the top 5 emergent terms at a given time slot to demonstrate an example of their model output. [BIB010] study the dynamic evolution of Twitter hashtags. Specifically, they investigate the creation, use and dissemination of hashtags by the members of Twitter's information network. They study hashtag propagation in social groups whose members are known to influence each other linguistically. They take a live and rapidly evolving content stream, and analyze the evolution of terms (hashtags). They collect Twitter data of 55 million users, leading to 2 billion followership edges, out of which they find 1.7 billion to be usable. They compare "features of the variation of hashtags to linguistic variation".
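The topic-graph expansion around an emerging keyword can be sketched as a reachability search over co-occurrence edges kept above the cut-off; this is a simplification of the strongly-connected-component step in the original, and `corr` (keyword -> {neighbor: weight}) is an assumed data layout.

```python
from collections import deque

def emerging_topic(seed, corr, cutoff):
    """Collect the keywords reachable from an emerging seed keyword in a
    topic graph that keeps only edges with weight >= cutoff (a BFS
    simplification of the SCC discovery in the original method)."""
    topic, queue = {seed}, deque([seed])
    while queue:
        k = queue.popleft()
        for nbr, w in corr.get(k, {}).items():
            if w >= cutoff and nbr not in topic:
                topic.add(nbr)
                queue.append(nbr)
    return topic
```

Note how the result can include keywords only indirectly correlated with the seed, matching the observation above that K^t_z is not restricted to EK_t.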
They collect data from interchangeable hashtags that refer to the same event or topic, and would have been considered the same in a more controlled setting than Twitter. For instance, #michaeljackson, #mj and #jackson are hashtags referring to the same topic (subject). They select topics, and form bases by filtering tweets such that a chosen tweet has at least one hashtag, and at least one term that is well-known to be related to the topic (such as jackson when referring to Michael Jackson). Motivated by the concept of linguistic innovation, which models the transformation of any language attribute such as phonetics, phonology, syntax and semantics, the authors define hashtag innovation as a transformation of the hashtag. They observe that individuals seeking to assign a term not yet used for categorizing their message tend to create new hashtags; for example, to tag (name) an action or object that they are unfamiliar with in the physical (offline) world. They observe the presence of the rich-get-richer phenomenon: a few hashtags tend to attract most of the attention, with only around 10% of the hashtags getting used more than 10 times, and as many as 60% of the hashtags getting used only once. They observe that hashtags that gain the maximum popularity tend to be direct, short in length and simple, while many of the less popular hashtags are formed by long character strings. They also observe that the difference in lengths among the top few popular tags is irrelevant. However, comparing the more popular and less popular hashtags, they conclude that the number of characters in a given hashtag, a linguistic (and internal) feature, determines the success or failure of the hashtag on Twitter. Another work models information flow over topics on social media, using empirical evidence found from natural disaster and political event datasets of Twitter.
They introduce the notion of social discussion threads by creating event clusters on Twitter data, connecting across these clusters based upon contemporary external news sources about the events under consideration, and examining the social and temporal relationships across cluster pairs. They identify conversations by exploring the social, semantic and temporal relationships of these clusters. Their model also looks at the temporal evolution of the topics over discussions in the social network. They represent an event as E_i = (K_i, T_i), where K_i is the keyword set extracted from the tweets belonging to event E_i, and T_i is the event time period. K_i contains the proper nouns (extracted using PoS tagging) and the idf vector from the tweets. Thus, each event becomes a cluster of tweet messages. They define extended semantic relationships across event cluster pairs, connecting the pairs with information obtained from a contemporary external document corpus such as Google News. They generate |K_i| × |K_j| keyword pairs that need to be evaluated for extended semantic relationships, pruning semantically related pairs such as synonyms, antonyms, hypernyms and hyponyms in order to avoid skewed results. They use the WordNet lexical database to compute the similarity of keyword pairs, and retain keyword pairs with sufficient similarities. They find contemporary external documents in which both keywords occur. They compute a document pair coupling score, such that, "if C(K_i, D_t) is the tf-idf score of word K_i in document D_t", the pairwise coupling score over a document is derived from the tf-idf scores of the two keywords in it. They calculate the coupling score of a pair of keywords as the average coupling score across all documents.
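A sketch of the keyword-pair coupling computation follows. The survey elides the exact per-document combination of the two tf-idf scores, so the product used here is an assumption; `doc_scores` (document id -> {keyword: tf-idf}) is an illustrative layout.

```python
def coupling_score(k1, k2, doc_scores):
    """Average coupling of a keyword pair over the external documents
    that contain both keywords.  The per-document coupling is sketched
    as the product of the two tf-idf scores (an assumed combination)."""
    scores = [s[k1] * s[k2] for s in doc_scores.values()
              if k1 in s and k2 in s]
    return sum(scores) / len(scores) if scores else 0.0
```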
Extending this to all keyword pairs for a given event cluster pair E_i and E_j, if w_ij keyword pairs were retained and the rest were pruned, they compute the overall score of connection of the event pair by aggregating the coupling scores over the w_ij retained pairs. In their setting, a person P belongs to an event cluster E_i iff P posts a message M such that M ∈ E_i. This allows a person to belong to multiple clusters simultaneously. An edge is created between clusters E_i and E_j if person P_i ∈ E_i, P_j ∈ E_j, and (P_i, P_j) is a social followership edge in the input Twitter graph. If E_i and E_j have P_i and P_j members respectively, and the average neighbor count in E_j (E_i) of an individual in E_i (E_j) is a_ij (a_ji), then the edge (E_i, E_j) has a strength of P_i.a_ij + P_j.a_ji. They create two kinds of temporal relationships across event cluster pairs, drawing from Allen's temporal relationship list [BIB001]. They create a "temporal edge from event E_i to event E_j, if E_j starts within a threshold time gap after E_i ends", setting this gap to 2 days for experiments. This relationship is effectively the set union of Allen's meet and disjoint relationships. They also create Allen's temporal overlap relationship across cluster pairs. They propose a two-step process for identifying social discussion threads that evolve topically. First, they construct the semantic AND temporal graph by taking the edge set intersection over event cluster pairs, considering direction, to form discussion sequences. Next, they construct the semantic AND temporal AND social graph by also intersecting the social edges. This retains the socially connected discussion sequences and discards the others, thereby identifying social discussion threads. They extract modularity-based communities from the discussion sequences as well as the social discussion threads, and find the normalized mutual information (NMI) of the two.
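The two-step edge-set intersection above can be sketched directly, treating each relationship type as a set of directed event-cluster edges; the representation of edges as (E_i, E_j) tuples is illustrative.

```python
def discussion_threads(semantic, temporal, social):
    """Two-step identification of social discussion threads:
    intersecting the semantic and temporal edge sets yields discussion
    sequences, and further intersecting the social edges keeps only the
    socially connected ones (the social discussion threads)."""
    sequences = set(semantic) & set(temporal)
    threads = sequences & set(social)
    return sequences, threads
```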
Over multiple datasets, they show that this NMI value is significantly higher compared to the NMI value across the communities found in the input social and semantic graphs. They claim this as evidence of topical discussions growing and evolving along social connections over time, rather than at random, even for events of large scale where randomness of user participation and discussion is likely. They also qualitatively show that discussion threads tend to localize in social communities. [BIB011] propose an approach to forecast the life cycle of trending topics as they emerge. They observe popular terms from 10 different sources, including 5 Google channels, 3 Twitter channels and 2 Wikipedia channels. Retrieving 10-20 feeds per day (total 110 topics per day), they observe thousands of topics over a period of a year. They unify the trends found across different sources using edit distance clustering. They rank each trending topic (cluster) by assigning a global trend score as the sum of daily trend scores. They define the lifetime of a trend as "the number of consecutive days with positive trend scores". Performing lifetime analysis, they investigate the survival duration of trends and its variation across different media channels. They find trends to typically last less than 14 days. They observe Twitter trends to be the shortest, and Wikipedia trends also to be short. They observe Google to cover a significant proportion of the major trends, and thus Google dominates the lifetime histogram of the trending topics. They observe that certain categories of topics go well with certain channels. For instance, sports is the most popular on Google, while holidays, celebrities and entertainment are most popular on Twitter. Using historical time series data from multiple semantically similar topics, they forecast which of the emerging topics will trend. This comprises three steps. First, they discover semantically similar topics.
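The lifetime definition above ("number of consecutive days with positive trend scores") can be sketched as a longest-positive-run computation over a trend's daily score series; the function name is illustrative.

```python
def trend_lifetime(daily_scores):
    """Lifetime of a trend: length of the longest run of consecutive
    days with a positive trend score."""
    best = run = 0
    for score in daily_scores:
        run = run + 1 if score > 0 else 0
        best = max(best, run)
    return best
```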
They use DBpedia [BIB004] named entities and category information. They create a topic set that includes all discovered similar topics. To find similar topics, they define two topic sets: one including the trending topic, and another containing various general topics (to compare with the trending topic). Second, they do nearest neighbor sequence matching on the time series of topics of interest, using "the viewing statistics of the two previous months, to all partial sequences of same length of similar topics in the set of topics". Third, they forecast the life cycle of trending topics. Their forecast draws from the best matching semantically similar topic, and uses the semantic similarity score to "scale to adjust to the nearest neighbor time series". [BIB003] propose incorporating simple semantics into topic detection for documents, by grouping the terms based upon similar meanings. They associate the groups with an external ontology, and extract terms and entities into distinct sub-vectors to represent the document. The similarity of a given pair of documents is computed using sub-vector similarity. Another work predicts topics that would draw attention in the future. It uses moving average convergence divergence (MACD), an indicator frequently used to study stock prices, to identify emerging topics, using a short-period and long-period trend momentum oscillator over the average term frequency. It predicts that a term will trend positively if a trend with a negative momentum changes to positive, and will trend negatively if a trend with a positive momentum changes to negative.
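The MACD-style prediction rule above (momentum as the difference of a short-period and a long-period moving average of term frequency, with sign changes signalling a trend) can be sketched as follows; the EMA spans of 3 and 7 are illustrative, not values from the original work.

```python
def ema(series, span):
    """Exponential moving average with smoothing factor 2 / (span + 1)."""
    k = 2.0 / (span + 1)
    out = [series[0]]
    for x in series[1:]:
        out.append(x * k + out[-1] * (1 - k))
    return out

def macd_signal(freq, short=3, long=7):
    """MACD momentum of a term-frequency series: short EMA minus long
    EMA.  A momentum crossing from negative to positive predicts an
    upward trend, a crossing from positive to negative a downward one."""
    m = [s - l for s, l in zip(ema(freq, short), ema(freq, long))]
    if len(m) >= 2 and m[-2] < 0 <= m[-1]:
        return "up"
    if len(m) >= 2 and m[-2] > 0 >= m[-1]:
        return "down"
    return "flat"
```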
|
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> The importance of a Web page is an inherently subjective matter, which depends on the reader's interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of contexts on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of "authoritative" information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of "hub pages" that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristics for link-based analysis.
<s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> Similarity breeds connection. This principle—the homophily principle—structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship. The result is that people's personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics. Homophily limits people's social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience. Homophily in race and ethnicity creates the strongest divides in our personal environments, with age, religion, education, occupation, and gender following in roughly that order. Geographic propinquity, families, organizations, and isomorphic positions in social systems all create contexts in which homophilous relations form. Ties between nonsimilar individuals also dissolve at a higher rate, which sets the stage for the formation of niches (localize... <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> Today, when searching for information on the WWW, one usually performs a query through a term-based search engine. These engines return, as the query's result, a list of Web pages whose contents matches the query. For broad-topic queries, such searches often result in a huge set of retrieved documents, many of which are irrelevant to the user. However, much information is contained in the link-structure of the WWW. Information such as which pages are linked to others can be used to augment search algorithms. In this context, Jon Kleinberg introduced the notion of two distinct types of Web pages: hubs and authorities . 
Kleinberg argued that hubs and authorities exhibit a mutually reinforcing relationship: a good hub will point to many authorities, and a good authority will be pointed at by many hubs. In light of this, he devised an algorithm aimed at finding authoritative pages. We present SALSA, a new stochastic approach for link-structure analysis, which examines random walks on graphs derived from the link-structure. We show that both SALSA and Kleinberg's Mutual Reinforcement approach employ the same meta-algorithm. We then prove that SALSA is equivalent to a weighted in-degree analysis of the link-structure of WWW subgraphs, making it computationally more efficient than the Mutual Reinforcement approach. We compare the results of applying SALSA to the results derived through Kleinberg's approach. These comparisons reveal a topological phenomenon called the TKC effect which, in certain cases, prevents the Mutual Reinforcement approach from identifying meaningful authorities. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model. 
<s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> Node characteristics and behaviors are often correlated with the structure of social networks over time. While evidence of this type of assortative mixing and temporal clustering of behaviors among linked nodes is used to support claims of peer influence and social contagion in networks, homophily may also explain such evidence. Here we develop a dynamic matched sample estimation framework to distinguish influence and homophily effects in dynamic networks, and we apply this framework to a global instant messaging network of 27.4 million users, using data on the day-by-day adoption of a mobile service application and users' longitudinal behavioral, demographic, and geographic data. We find that previous methods overestimate peer influence in product adoption decisions in this network by 300–700%, and that homophily explains >50% of the perceived behavioral contagion. These findings and methods are essential to both our understanding of the mechanisms that drive contagions in networks and our knowledge of how to propagate or combat them in domains as diverse as epidemiology, marketing, development economics, and public health. <s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> The growing demand for large-scale data mining and data analysis applications has led both industry and academia to design new types of highly scalable data-intensive computing platforms. MapReduce and Dryad are two popular platforms in which the dataflow takes the form of a directed acyclic graph of operators. These platforms lack built-in support for iterative programs, which arise naturally in many applications including data mining, web ranking, graph analysis, model fitting, and so on. 
This paper presents HaLoop, a modified version of the Hadoop MapReduce framework that is designed to serve these applications. HaLoop not only extends MapReduce with programming support for iterative applications, it also dramatically improves their efficiency by making the task scheduler loop-aware and by adding various caching mechanisms. We evaluated HaLoop on real queries and real datasets. Compared with Hadoop, on average, HaLoop reduces query runtimes by 1.85, and shuffles only 4% of the data between mappers and reducers. <s> BIB007 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> In recent years, research on measuring trajectory similarity has attracted a lot of attentions. Most of similarities are defined based on the geographic features of mobile users' trajectories. However, trajectories geographically close may not necessarily be similar because the activities implied by nearby landmarks they pass through may be different. In this paper, we argue that a better similarity measurement should have taken into account the semantics of trajectories. In this paper, we propose a novel approach for recommending potential friends based on users' semantic trajectories for location-based social networks. The core of our proposal is a novel trajectory similarity measurement, namely, Maximal Semantic Trajectory Pattern Similarity (MSTP-Similarity), which measures the semantic similarity between trajectories. Accordingly, we propose a user similarity measurement based on MSTP-Similarity of user trajectories and use it as the basis for recommending potential friends to a user. Through experimental evaluation, the proposed friend recommendation approach is shown to deliver excellent performance. 
<s> BIB008 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> Twitter enjoys enormous popularity as a micro-blogging service largely due to its simplicity. On the downside, there is little organization to the Twitterverse and making sense of the stream of messages passing through the system has become a significant challenge for everyone involved. As a solution, Twitter users have adopted the convention of adding a hash at the beginning of a word to turn it into a hashtag. Hashtags have become the means in Twitter to create threads of conversation and to build communities around particular interests. ::: ::: In this paper, we take a first look at whether hashtags behave as strong identifiers, and thus whether they could serve as identifiers for the Semantic Web. We introduce some metrics that can help identify hashtags that show the desirable characteristics of strong identifiers. We look at the various ways in which hashtags are used, and show through evaluation that our metrics can be applied to detect hashtags that represent real world entities. <s> BIB009 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> We show that information about social relationships can be used to improve user-level sentiment analysis. The main motivation behind our approach is that users that are somehow "connected" may be more likely to hold similar opinions; therefore, relationship information can complement what we can extract about a user's viewpoints from their utterances. Employing Twitter as a source for our experimental data, and working within a semi-supervised framework, we propose models that are induced either from the Twitter follower/followee network or from the network in Twitter formed by users referring to each other using "@" mentions. 
Our transductive learning results reveal that incorporating social-network information can indeed lead to statistically significant sentiment-classification improvements over the performance of an approach based on Support Vector Machines having access only to textual features. <s> BIB010 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> Social networks and the propagation of content within social networks have received an extensive attention during the past few years. Social network content propagation is believed to depend on the similarity of users as well as on the existence of friends in the social network. Our former investigation of the YouTube social network showed that strangers (non-friends and non-followers) play a more important role in content propagation than friends. In this paper, we analyze user communities within the YouTube social network and apply various similarity measures on them. We investigate the degree of similarity in communities versus the entire social network. We found that communities are formed from similar users. At the same time, we found that there are no large similarity values between friends in YouTube communities. <s> BIB011 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> Community detection in social networks is a well-studied problem. A community in social network is commonly defined as a group of people whose interactions within the group are more than outside the group. It is believed that people's behavior can be linked to the behavior of their social neighborhood. 
While shared characteristics of communities have been used to validate the communities found, to the best of authors' knowledge, it is not demonstrated in the literature that communities found using social interaction data are like-minded, i.e., they behave similarly in terms of their interest in items (e.g., movie, products). In this paper, we experimentally demonstrate, on a social networking movie rating dataset, that people who are interested in an item are socially better connected than the overall graph. Motivated by this fact, we propose a method for finding communities wherein like-mindedness is an explicit objective. We find small tight groups with many shared interests using a frequent item set mining approach and use these as building blocks for the core of these like-minded communities. We show that these communities have higher similarity in their interests compared to communities found using only the interaction information. We also compare our method against a baseline where the weight of edges are defined based on similarity in interests between nodes and show that our approach achieves far higher level of like-mindedness amongst the communities compared to this baseline as well. <s> BIB012
|
Bringing the aspects of familiarity and similarity together, finding the impact of one on the other, and correlating the two for information modeling have drawn research interest. Research questions that require studying the familiarity and similarity of users of online social networks have been asked, such as whether topics of interest are more similar among users with following relations than without, and whether recommending that a user make a social connection with another user based upon similarity is effective. In Twitter, homophily BIB003 implies that a "user follows a friend if she is interested in one or more topics posted by the friend, and the friend follows her back because she finds that they share similar topical interest(s)". Researchers have investigated homophily for information diffusion and community analysis. One line of work investigates the presence and causes of reciprocity in the Twitter followership network, and the impact of this reciprocity. It shows that Twitter users with reciprocal followerships are topic-wise more similar compared to those without, and that Twitter followerships are more interest-based than casual. Another line of work proposes SALSA, a stochastic user-recommendation algorithm that recommends users to follow based upon user-expressed interest and the set of people followed, observing that users who are similar often follow one another, and that users often follow other users who in turn follow similar other users. In one of the earliest works, researchers attempt to bring social familiarity and similarity together in social network and microblog settings. They collect data for the 996 top Twitter users from Singapore in terms of number of followers, as per twitterholic.com. They crawl the followers and friends (those being followed) of each of these users s ∈ S. They finalize their set of target users for the experiment as S′, the union of S and the crawled followers and friends.
Thereby, they obtain S* = {s | s ∈ S′, and s is from Singapore}. In their data, |S*| = 6748. They represent the set of all tweets by all members of S* by T, where |T| = 1,021,039 for their dataset. They observe that, except for a few outliers, the number of tweets made by the users, the number of followers and the number of friends (those being followed) follow a power-law distribution. They observe that the Twitter platform is rich in the reciprocity property: in spite of an edge (followership) being a one-way relationship, "72.4% of Twitter users follow back more than 80% of their followers, and 80.5% of the users have 80% of users they follow, following them back". To determine the presence of homophily on Twitter, they ask whether topics of interest are more similar among users with following (and reciprocal following) relationships compared to those without. To answer, they attempt to find the topical interests of Twitter users, since topics are not explicitly specified on Twitter, and hashtags are not present in all messages. They collect all tweets made by a user and create a user-level document, repeating this for each user. They run LDA BIB005 for topic detection. In the LDA process, they create DT, a D × T matrix, where D and T respectively denote the count of users and topics. DT_ij represents the "number of times a word in user s_i's tweets is assigned to topic t_j". They measure the topical difference between a pair of users s_i and s_j as the JS divergence D_JS(i, j) between the row-normalized probability distributions DT′_i and DT′_j, calculated as D_JS(i, j) = (1/2) D_KL(DT′_i || M) + (1/2) D_KL(DT′_j || M). Here "M is the average of the two probability distributions, and D_KL is the KL divergence of the two". Using the notion of topical difference, they perform statistical hypothesis testing and find, in answer to their question, that users with following (and reciprocal following) relationships are more similar in terms of topics of interest than those without.
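The JS-divergence computation used for topical difference can be sketched as follows. This is a minimal illustration over toy topic distributions; the variable names are ours, not the paper's:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D_KL(p || q); assumes q > 0 where p > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    """Jensen-Shannon divergence between two topic distributions:
    the average of each distribution's KL divergence from their mean."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# row-normalized topic distributions of two hypothetical users
dt_i = [0.7, 0.2, 0.1]
dt_j = [0.1, 0.2, 0.7]
print(js(dt_i, dt_j))  # larger value => more topically different users
```

Unlike raw KL divergence, JS divergence is symmetric and finite even when one user's distribution assigns zero probability to a topic, which is why it is a common choice for comparing user-level topic mixtures.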
They attempt to measure the topic-sensitive influence of Twitter users by proposing a PageRank-like BIB001 framework, which they call topic-specific TwitterRank. They consider the directed graph where edges are directed from followers to friends (persons followed). They perform a topic-specific random walk, and construct a topic-specific relationship network among Twitter users. For a topic t, the random-surfing transition probability from follower s_i to friend s_j is defined as P_t(i, j) = (|T_j| / Σ_{a: s_i follows s_a} |T_a|) · sim_t(i, j). Here s_j has published |T_j| tweets, and Σ_{a: s_i follows s_a} |T_a| is the total number of tweets published by all the friends of s_i. The similarity between s_i and s_j in topic t is found as sim_t(i, j) = 1 − |DT′_it − DT′_jt|. This definition captures two notions: (a) it assigns a higher transition probability to friends who publish content more frequently, and (b) the influence is also based upon the topical similarity of s_i and s_j, capturing the homophily phenomenon. They introduce measures to account for pairs of users who follow only each other and nobody else. For this, they use a teleportation vector E_t, which captures the probability of the random walk jumping to some users rather than following the graph edges all the time. They calculate the topic-specific TwitterRank TR_t of users in topic t iteratively as TR_t = γ P_t × TR_t + (1 − γ) E_t, where P_t is the transition probability matrix and γ (0 ≤ γ ≤ 1) controls the teleportation probability. The TwitterRank vectors thus constructed are topic-specific. They capture the influence of users for each topic, and aggregate to compute the overall influence of users as TR = Σ_t r_t · TR_t, where topic t is given weight r_t and the corresponding topic-specific vector TR_t. Weight assignments differ across different settings, to compute user influence under such settings. Their study reveals that the high reciprocity in Twitter can be explained by homophily. This empirically shows that Twitter followerships are more interest-based than casual.
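The topic-specific TwitterRank update can be sketched as a small power iteration. The transition matrix and teleportation vector below are toy values of our own, assuming a column-stochastic matrix, not data from the cited study:

```python
def twitter_rank(P, E, gamma=0.85, iters=100):
    """Power iteration for TR_t = gamma * P_t x TR_t + (1 - gamma) * E_t.
    P[j][i] is the transition probability from follower i to friend j
    (each column sums to 1); E is the topic-specific teleportation vector."""
    n = len(E)
    tr = [1.0 / n] * n                      # start from a uniform rank vector
    for _ in range(iters):
        tr = [gamma * sum(P[j][i] * tr[i] for i in range(n))
              + (1 - gamma) * E[j]
              for j in range(n)]
    return tr

# toy 3-user network with symmetric attention and uniform teleportation
P = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
E = [1 / 3, 1 / 3, 1 / 3]
print(twitter_rank(P, E))  # symmetric toy graph -> equal ranks
```

Because P is column-stochastic and E sums to 1, the rank vector keeps summing to 1 across iterations; in the full method, the entries of P additionally encode tweet volume and topical similarity as described above.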
One work observes that on Twitter, a user tends to follow those who are followed by other similar users; thus, the followers of a user tend to be similar to each other. The authors claim that user similarity is likely to lead to followership (familiarity). Motivated by this, they deploy a few user-recommendation algorithms (a user recommended to another user for followership) in Twitter's live production system. One algorithm is based upon a user's circle of trust, derived from an egocentric random walk similar to personalized PageRank BIB007. The random walk parameters include the count of steps, the reset probability (optionally discarding low-probability vertices), control parameters used to sample outgoing edges for high-outdegree vertices, etc. They dynamically adjust the random walk and personalization parameters for specific applications. They deploy another algorithm based upon SALSA (Stochastic Approach for Link-Structure Analysis) BIB004, a random walk algorithm like PageRank BIB001 and HITS BIB002. SALSA is applied on a hub-authority bipartite graph such that it traverses a pair of links at each step, one forward and one backward. This ensures that the random walk ends up on the same side of the bipartite graph every time. For each user, the hub side comprises the set of users that the given user trusts, and the authority side comprises the set of users that the hubs follow. They run SALSA for multiple iterations and assign scores to both sides of the bipartite graph. On one side of the bipartite graph, they obtain an "interested in" style ranking of the vertices. On the other side, they obtain user-similarity measures. This lets their system recommend other users to a given user, using a similarity-based ranking of the users reached in the random walk process, where the ranks are computed based upon expressed interest and the set of people followed.
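The SALSA-style iteration on a hub-authority bipartite graph can be sketched as the following score propagation on a toy graph. This is a simplified illustration; Twitter's production system adds graph construction, sampling, and ranking layers not shown here:

```python
def salsa(edges, iters=50):
    """SALSA-style score propagation on a bipartite hub -> authority graph.
    edges: list of (hub, authority) pairs. Each step distributes hub scores
    forward over out-links, then authority scores backward over in-links."""
    hubs = {h for h, _ in edges}
    auths = {a for _, a in edges}
    outdeg = {h: sum(1 for x, _ in edges if x == h) for h in hubs}
    indeg = {a: sum(1 for _, x in edges if x == a) for a in auths}
    hub = {h: 1.0 / len(hubs) for h in hubs}
    auth = {}
    for _ in range(iters):
        auth = {a: sum(hub[h] / outdeg[h] for h, x in edges if x == a)
                for a in auths}
        hub = {h: sum(auth[a] / indeg[a] for x, a in edges if x == h)
               for h in hubs}
    return hub, auth

# toy graph: three hub users, two candidate accounts to recommend
hub, auth = salsa([("u1", "a"), ("u1", "b"), ("u2", "b"), ("u3", "b")])
print(auth)  # "b", followed by all three hubs, outranks "a"
```

Within a connected component the authority scores converge to a weighted in-degree distribution, which is the property the SALSA analysis in BIB004 proves and which makes the approach cheap to compute at scale.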
They evaluate on Twitter using offline experiments on retrospective data, as well as A/B split testing on live data, and find SALSA the most effective among the different follower-recommendation algorithms for Twitter. Among other studies that involve social familiarity and similarity, BIB008 model social network user similarity using trajectory mining. BIB011 analyze YouTube social network user communities and apply several measures of similarity to the communities, including the Jaccard and Dice similarity coefficients, the Sokal-Sneath, Russel-Rao, and Rogers-Tanimoto similarity measures, and the L1 and L2 norms. They observe that communities are formed from similar users on YouTube; however, they do not find the friends in YouTube communities to be largely similar. BIB012 attempt to find like-minded communities on a movie review platform that also has a social network friendship platform built in. They define like-mindedness as a measure capturing the compatible interest levels among community members: the cosine similarity of the ratings the members assign to different movies. They find communities with like-mindedness as an explicit objective. Using frequent itemset mining, they find tight small groups with multiple shared interests, which act as the core building blocks of like-minded communities. Comparing with communities discovered using only interaction information, they show these communities to have higher similarity of interests. BIB006 attempt to distinguish between influence-based diffusion and homophily-driven contagion in product-adoption decisions on dynamic networks. They investigate the diffusion of a mobile service product for 5 months after launch, on the Yahoo instant messenger (IM) network, a social network that comprised 27.4 million users at the time of experimentation.
They use a dynamic matched sample estimation framework that they develop to differentiate influence and homophily effects in a dynamic network setting. Their findings indicate that "homophily explains more than 50% of perceived behavioral contagion". While this study is not a direct investigation of the impact of familiarity on similarity or vice-versa, it is one of the early works on social networks to show the significance of similarity (homophily) on a social network, and to contrast this with the impact of peer influence. Other work considers similarity and social familiarity together, investigating the impact of homophily on information diffusion, as outlined in Section 3. Many research works address similarity and familiarity independently. Different kinds of similarities between users have been studied on social networks and microblogs, like Facebook and Twitter. Early studies attempted to measure tag-based similarity of users: for instance, BIB009 measure user similarity based upon Twitter hashtags. Topic-based similarity of users refines the notion of tag-based similarity of microblog users. One approach proposes to train topic models using two different methodologies: LDA BIB005 and the author-topic model. They subsequently infer the topic mixture θ both for the corpus and for messages. They "classify users and associated messages into topical categories", to empirically demonstrate their system on Twitter. They use JS divergence to measure similarity between topics. Based upon this, they classify users into topical categories, which in turn can act as a foundation for measuring similarities of user pairs. In a study focusing on Twitter user sentiments (opinions), BIB010 empirically show that, under the hypothesis that connected (familiar) persons will have similar opinions, relationship information can complement what one can extract about a person's viewpoints from their explicit utterances. This in turn can be used to improve user-level sentiment analysis.
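Several of the similarity measures recurring in these studies, namely the Jaccard and Dice coefficients over interest sets and cosine similarity over rating vectors (like-mindedness), can be sketched with their standard definitions; the example interest sets and rating vectors are our own:

```python
import math

def jaccard(a, b):
    """Jaccard similarity of two sets of interests: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def dice(a, b):
    """Dice coefficient of two sets of interests: 2|A ∩ B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

def cosine(u, v):
    """Cosine similarity of two rating vectors (a like-mindedness measure)."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv) if nu and nv else 0.0

print(jaccard({"sports", "music"}, {"music", "movies"}))  # shared-tag overlap
print(cosine([5, 3, 0], [4, 2, 1]))                        # movie-rating agreement
```

The set-based coefficients suit binary interest data such as tags or community memberships, while cosine similarity handles graded signals such as movie ratings, which is why BIB012's like-mindedness objective uses the latter.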
|
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Geo-Spatial Topical Communities and Their Evolution <s> Large volumes of spatio-temporal-thematic data being created using sites like Twitter and Jaiku, can potentially be combined to detect events, and understand various 'situations' as they are evolving at different spatio-temporal granularity across the world. Taking inspiration from traditional image pixels which represent aggregation of photon energies at a location, we consider aggregation of user interest levels at different geo-locations as social pixels. Combining such pixels spatio-temporally allows for creation of social images and video. Here, we describe how the use of relevant (media processing inspired) situation detection operators upon such 'images', and domain based rules can be used to decide relevant control actions. The ideas are showcased using a Swine flu monitoring application which uses Twitter data. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Geo-Spatial Topical Communities and Their Evolution <s> Recently, microblogging sites such as Twitter have garnered a great deal of attention as an advanced form of location-aware social network services, whereby individuals can easily and instantly share their most recent updates from any place. In this study, we aim to develop a geo-social event detection system by monitoring crowd behaviors indirectly via Twitter. In particular, we attempt to find out the occurrence of local events such as local festivals; a considerable number of Twitter users probably write many posts about these events. To detect such unusual geo-social events, we depend on geographical regularities deduced from the usual behavior patterns of crowds with geo-tagged microblogs. 
By comparing these regularities with the estimated ones, we decide whether there are any unusual events happening in the monitored geographical area. Finally, we describe the experimental results to evaluate the proposed unusuality detection method on the basis of geographical regularities obtained from a large number of geo-tagged tweets around Japan via Twitter. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Geo-Spatial Topical Communities and Their Evolution <s> We present the first comprehensive characterization of the diffusion of ideas on Twitter, studying more than 4000 topics that include both popular and less popular topics. On a data set containing approximately 10 million users and a comprehensive scraping of all the tweets posted by these users between June 2009 and August 2009 (approximately 200 million tweets), we perform a rigorous temporal and spatial analysis, investigating the time-evolving properties of the subgraphs formed by the users discussing each topic. We focus on two different notions of the spatial: the network topology formed by follower-following links on Twitter, and the geospatial location of the users. We investigate the effect of initiators on the popularity of topics and find that users with a high number of followers have a strong impact on popularity. We deduce that topics become popular when disjoint clusters of users discussing them begin to merge and form one giant component that grows to cover a significant fraction of the network. Our geospatial analysis shows that highly popular topics are those that cross regional boundaries aggressively. 
<s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Geo-Spatial Topical Communities and Their Evolution <s> Reducing the impact of seasonal influenza epidemics and other pandemics such as the H1N1 is of paramount importance for public health authorities. Studies have shown that effective interventions can be taken to contain the epidemics if early detection can be made. Traditional approach employed by the Centers for Disease Control and Prevention (CDC) includes collecting influenza-like illness (ILI) activity data from “sentinel” medical practices. Typically there is a 1–2 week delay between the time a patient is diagnosed and the moment that data point becomes available in aggregate ILI reports. In this paper we present the Social Network Enabled Flu Trends (SNEFT) framework, which monitors messages posted on Twitter with a mention of flu indicators to track and predict the emergence and spread of an influenza epidemic in a population. Based on the data collected during 2009 and 2010, we find that the volume of flu related tweets is highly correlated with the number of ILI cases reported by CDC. We further devise auto-regression models to predict the ILI activity level in a population. The models predict data collected and published by CDC, as the percentage of visits to “sentinel” physicians attributable to ILI in successively weeks. We test models with previous CDC data, with and without measures of Twitter data, showing that Twitter data can substantially improve the models prediction accuracy. Therefore, Twitter data provides real-time assessment of ILI activity. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Geo-Spatial Topical Communities and Their Evolution <s> We present a large-scale study of user behavior in Foursquare, conducted on a dataset of about 700 thousand users that spans a period of more than 100 days. 
We analyze user checkin dynamics, demonstrating how it reveals meaningful spatio-temporal patterns and offers the opportunity to study both user mobility and urban spaces. Our aim is to inform on how scientific researchers could utilise data generated in Location-based Social Networks to attain a deeper understanding of human mobility and how developers may take advantage of such systems to enhance applications such as recommender systems. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Geo-Spatial Topical Communities and Their Evolution <s> Studying relationships between keyword tags on social sharing websites has become a popular topic of research, both to improve tag suggestion systems and to discover connections between the concepts that the tags represent. Existing approaches have largely relied on tag co-occurrences. In this paper, we show how to find connections between tags by comparing their distributions over time and space, discovering tags with similar geographic and temporal patterns of use. Geo-spatial, temporal and geo-temporal distributions of tags are extracted and represented as vectors which can then be compared and clustered. Using a dataset of tens of millions of geo-tagged Flickr photos, we show that we can cluster Flickr photo tags based on their geographic and temporal patterns, and we evaluate the results both qualitatively and quantitatively using a panel of human judges. We also develop visualizations of temporal and geographic tag distributions, and show that they help humans recognize semantic relationships between tags. This approach to finding and visualizing similar tags is potentially useful for exploring any data having geographic and temporal annotations. 
<s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Geo-Spatial Topical Communities and Their Evolution <s> We present the first comprehensive characterization of the diffusion of ideas on Twitter, studying more than 5.96 million topics that include both popular and less popular topics. On a data set containing approximately 10 million users and a comprehensive scraping of 196 million tweets, we perform a rigorous temporal and spatial analysis, investigating the time-evolving properties of the subgraphs formed by the users discussing each topic. We focus on two different notions of the spatial: the network topology formed by follower-following links on Twitter, and the geospatial location of the users. We investigate the effect of initiators on the popularity of topics and find that users with a high number of followers have a strong impact on topic popularity. We deduce that topics become popular when disjoint clusters of users discussing them begin to merge and form one giant component that grows to cover a significant fraction of the network. Our geospatial analysis shows that highly popular topics are those that cross regional boundaries aggressively. <s> BIB007
|
Different topics on social networks receive different levels of visibility and traction at different geolocations. Further, the span of these topics, from inception of a topic to the topic passing through its lifecycle, varies across geographies, depending upon the nature and the locality of the events. Spatio-temporal analysis of microblog topics and modeling topical information diffusion in spatio-temporal settings are active research areas. Several works have attempted to analyze spatio-temporal aspects of social media and microblogs, mostly Twitter, with different angles of application. [ BIB003 BIB007 ] conduct some of the pioneering studies to characterize the spatio-temporal characteristics of diffusion of ideas on Twitter. On the subgraphs that form out of users discussing each given topic, they study two time-evolving properties: the network topology of followership and the geo-spatial location of users. They use Twitter data collected between June and August 2009, spanning over 10 million users and 196 million tweets. They infer geo-locations from GPS data and user-specified data on Twitter in the form of latitude-longitude pairs, using the Yahoo! PlaceFinder service API to resolve them in terms of city, state and country. They take the hashtags as topics. Since only 10% of the tweets have a hashtag in their dataset, they also augment the set of topics by tagging tweets with entities, topics, places and other such tags, extracted using a text analytics engine (OpenCalais), and allowing a tweet to have multiple tags. They use the term event for major or minor happenings causing a surge in tweeting activity of a given topic.
In their model, they partition events into five phases: a pre-event phase, when a topic gets initiated in the social network; a growth phase, when the topic is discussed by early adopters; a peak phase, when the topic is discussed by an early majority of individuals; a decaying phase, when the topic is discussed by a late majority of individuals; and a post-event phase, when the topic is discussed by laggards. They experiment with three event categories they created to perform the characterization: "popular events having 10,000+ tweets, medium-popular events having between 500 and 10,000 tweets and non-popular events having between 100 and 500 tweets". For each topic, they construct a subgraph (lifetime graph) of individuals who, at any time in the window, have tweeted at least once on the topic.
In summary, the works surveyed in this subsection: (1) characterize the spatio-temporal diffusion of ideas on Twitter, investigating the followership topology and the geo-spatial location of users on the graphs of users discussing a given topic, showing that topics become popular when the follower count of the topic initiator is high and the topic is received by users having just a few followers, that popular topics cross geographical boundaries, and that disjoint clusters of users discussing popular topics merge to form a giant component; (2) identify and characterize topical discussions at various geographical granularities, assigning users and tweets to locations and creating temporal and geographical relationships across event message clusters, thereby identifying discussions, and observe geographical localization of the temporal evolution of topical discussions on Twitter, finding discussions to "evolve more at city levels compared to country levels, and more at country levels compared to globally"; and (3) analyze the spatio-temporal dynamics of user activity on Twitter via a two-pass process: a content and temporal analysis module that handles micro-blog message streams and categorizes them into topics, and a spatial analysis module that assigns locations to the messages on the world map, observing that the distribution of users who discussed a given event becomes global once a news media outlet broadcasts the news, and recognizing an event as local if its location distribution has a high density, and as global otherwise.
They investigate a cumulative evolving graph for a topic, which captures the cumulative action of a user tweeting on the topic on at least one given day. They also study an evolving graph for a topic, which captures the action of a user tweeting on the topic on a given day. Analyzing the above graphs, they observe that popular topics aggressively cross regional boundaries, while unpopular topics do not. They hypothesize that popularity and geographical spread of topics are correlated. They count the number of regions with at least one individual mentioning a topic and plot it against the topic's popularity. The plot indicates that popular topics typically touch a higher number of regions than less popular ones. To test their hypothesis, they compute, in the cumulative evolving graphs, the proportion of edges (u → v) for each topic such that u and v belong to two different geographical regions. They observe that the fraction of edges that cross geographical boundaries throughout their evolution is high for popular events, ranging from 0.74 to 0.81 in their experiments. This fraction is low for medium-popular events, and very low for non-popular events. In summary, this part of their analysis shows that the more popular a topic is on Twitter, the higher the fraction of edges crossing geographical boundaries, across all temporal phases of the event in its lifecycle.
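The cross-boundary edge fraction described above can be sketched as follows (a minimal illustration with hypothetical data, not the authors' code):

```python
# Sketch: fraction of edges in a topic's cumulative evolving graph whose
# endpoints lie in different geographical regions. Data is made up.

def cross_region_fraction(edges, region_of):
    """edges: iterable of (u, v) pairs; region_of: dict user -> region."""
    total = cross = 0
    for u, v in edges:
        if u in region_of and v in region_of:
            total += 1
            if region_of[u] != region_of[v]:
                cross += 1
    return cross / total if total else 0.0

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
regions = {"a": "US", "b": "US", "c": "IN", "d": "UK"}
print(cross_region_fraction(edges, regions))  # 3 of 4 edges cross regions
```

For a popular event in the study above, this fraction would fall in the reported 0.74-0.81 range.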
Analyzing 4,000 popular and less-popular topics, they show that most users discussing a popular topic on a given day tend to form a large connected subgraph, whereas discussions on less popular topics tend to be restricted to disconnected clusters. They infer that "topics become popular when disjoint clusters of users discussing them begin to merge and form one giant component that grows to cover a significant fraction of the network". They also find the popularity of a given topic to be high when the number of followers of the topic initiator is high. ] conduct a geo-spatial analysis of topical discussions on unstructured microblogs, demonstrating it empirically on Twitter. They identify and characterize topical discussion threads on Twitter at different geographical granularities, specifically countries and cities. They cluster the tweets by topic, and draw the notions of extended (contextual) semantic and temporal relationships from . They create geographical relationships across pairs of clusters based upon the geo-locations that the constituent tweets and users belong to. In order to compute geographical relationships, they assign users and tweets to locations with certain probabilities, based upon the user profiles and tweet origins. They propose two definitions of belongingness of a cluster to a geographical region: one based upon the geographical distribution of users whose messages are included in the cluster, and the other based upon the geographical distribution of origination of the tweets that constitute the cluster. They extract geographical relationships at two granularities: cities and countries. Given a location L_i and an event cluster E_i, L_i ∈ E_i iff at least one microblog post M_i ∈ E_i is made from a location in L_i. This allows a location to be a part of multiple clusters at the same time.
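The location-membership rule just described can be sketched as follows (toy cluster data; the field layout is hypothetical):

```python
# Sketch: a location L_i belongs to an event cluster E_i iff at least one
# message in E_i originates from a location in L_i; a single location may
# therefore belong to several clusters at once.

def cluster_locations(clusters):
    """clusters: dict cluster_id -> list of (message_id, location) pairs."""
    return {cid: sorted({loc for _, loc in msgs}) for cid, msgs in clusters.items()}

clusters = {
    "E1": [("m1", "Delhi"), ("m2", "London"), ("m3", "Delhi")],
    "E2": [("m4", "London")],
}
print(cluster_locations(clusters))  # London belongs to both E1 and E2
```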
Each event cluster thus gets a vector of locations. For each location associated with a cluster, they compute a belongingness value of the cluster to the location, giving a belongingness value vector. They quantify the geographical relationship strength for each cluster pair by associating geographies with each of the belongingness value vectors. To compute belongingness, they augment the L_i vector to an augmented vector L̂_i. In the first pass, the content and temporal analysis, they establish relationships across messages using a neighborhood generation algorithm, and use DBScan for text stream clustering. They thus continuously group messages into topics; the cluster shapes keep changing over time. They analyze the clusters to determine the hot topics from the posts. In the second pass, the spatial analysis, they assign locations to the messages on the world map, using the spatial locality characteristics of messages. Spatial locality of messages describes the high concentration of a set of messages in a specific geo-location. They record the distribution of locations of topics at a given point of time using a location feature vector. They observe that the distribution of the population that discussed a given event becomes global once a news media outlet broadcasts the news, expanding the geographical span of the location feature vector associated with the detected event. They formulate the probability of topic topic_i belonging to location loc_j as
p(L = loc_j | topic_i) = occur_{i,j} / N_t
In other words, they derive the probability of topic_i belonging to location loc_j as the ratio of the count of messages containing loc_j in topic_i (occur_{i,j}) to the total message count N_t. Topics discussed widely across many locations are penalized with a penalty factor 1/(|loc_j ∈ topic_i|). They determine a candidate location by the maximum probability for topic_i as: candiLoc(topic_i) = argmax_{loc_j} { p(L = loc_j | topic_i) }.
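A minimal sketch of this location-probability computation, including the penalty factor for topics spread over many locations (toy data; the original operates on streaming Twitter messages):

```python
# Sketch: per-location probability of a topic, penalized by the number of
# locations the topic is spread over, and the argmax candidate location.

def location_probs(messages):
    """messages: list of location labels for the messages in one topic."""
    n_t = len(messages)
    locs = set(messages)
    # Penalty is constant within one topic, so it does not change the argmax,
    # but it lowers the scores of widely-spread topics.
    penalty = 1.0 / len(locs)
    return {loc: (messages.count(loc) / n_t) * penalty for loc in locs}

def candidate_location(messages):
    probs = location_probs(messages)
    return max(probs, key=probs.get)

msgs = ["Delhi", "Delhi", "Delhi", "Mumbai"]
print(candidate_location(msgs))  # Delhi
```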
They compute whether a topic is recognized as local or global by trading off the sparsity level and the concentricity of the topic using a cut-off point θ: a topic remains local if the likelihood of its candidate location crosses the threshold. Thus, they recognize an event with a high-density location distribution as local, and otherwise as global. They experimentally demonstrate the effectiveness of their method over 52,195,773 Twitter messages collected between January 6th, 2011 and March 11th, 2011. In a study demonstrating real-life effectiveness on the pandemic disease data that authorities use for disease control, BIB004 use Twitter to collect hashtag-based data pertaining to influenza-like illnesses. Using users' known positions (such as from 3G phones), profile locations and periodically collected data, they form a spatio-temporal influenza database. Their experiments show a high (0.9846) correlation coefficient with ground-truth illness data reported to the authorities. They use this platform to develop a regression model that effectively improves the prediction of influenza cases. BIB001 analyzes a combination of geo-spatial and temporal interest patterns on Twitter, for situation detection and control applications, from text, image and video data, and demonstrates the effectiveness of the system on a Swine Flu monitoring application. BIB006 observe the presence of meaningful temporal, geo-spatial and geo-temporal tag clusters on a Flickr dataset. To enable easy recognition of semantic relationships across tags by humans, they provide a visualization system for geographical and temporal tag distributions. BIB005 analyze check-in behavior and inter-check-in distances of users to several geo-locations, using spatio-temporal patterns in user mobility. They also analyze activity transitions: finding the likely next activity given a current activity at a location.
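The activity-transition analysis mentioned last can be sketched as an empirical next-activity estimate from check-in sequences (the categories and sequences below are made up; BIB005 works on Foursquare check-in data):

```python
# Sketch: empirical next-activity transition counts from check-in sequences,
# and the most likely next activity given the current one.
from collections import Counter, defaultdict

def transition_model(sequences):
    counts = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return counts

def likely_next(counts, current):
    return counts[current].most_common(1)[0][0]

seqs = [
    ["food", "nightlife", "home"],
    ["food", "nightlife", "nightlife"],
    ["work", "food", "home"],
]
model = transition_model(seqs)
print(likely_next(model, "food"))  # nightlife (2 of 3 observed transitions)
```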
BIB002 detect unusual geo-social events from Twitter, using geo-tagged tweets and geographical regularities derived from usual crowd behavior patterns, and finding deviations from these patterns at the time under consideration.
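A minimal sketch of such deviation detection (synthetic counts and a simple z-score rule; the actual work models geographical regularities of crowd behavior far more richly):

```python
# Sketch: flag a region/time as unusual when current activity deviates from
# the historical mean by more than k standard deviations. Numbers are made up.
from statistics import mean, stdev

def is_unusual(history, current, k=3.0):
    mu, sigma = mean(history), stdev(history)
    return abs(current - mu) > k * sigma

history = [95, 102, 99, 101, 98, 104, 100]  # usual hourly tweet counts in a region
print(is_unusual(history, 103))  # normal fluctuation
print(is_unusual(history, 450))  # unusual burst -> candidate geo-social event
```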
|
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Detection and Spread of Topics <s> Twitter is a user-generated content system that allows its users to share short text messages, called tweets, for a variety of purposes, including daily conversations, URL sharing and news. Considering its world-wide distributed network of users of any age and social condition, it represents a low-level news-flash portal whose principal advantage lies in its impressively short response time. In this paper we recognize this primary role of Twitter and we propose a novel topic detection technique that retrieves in real time the most emergent topics expressed by the community. First, we extract the contents (set of terms) of the tweets and model the term life cycle according to a novel aging theory intended to mine the emerging ones. A term can be defined as emerging if it frequently occurs in the specified time interval and it was relatively rare in the past. Moreover, considering that the importance of content also depends on its source, we analyze the social relationships in the network with the well-known Page Rank algorithm in order to determine the authority of the users. Finally, we leverage a navigable topic graph which connects the emerging terms with other semantically related keywords, allowing the detection of the emerging topics, under user-specified time constraints. We provide different case studies which show the validity of the proposed approach. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Detection and Spread of Topics <s> We present TwitterMonitor, a system that performs trend detection over the Twitter stream. The system identifies emerging topics (i.e. 'trends') on Twitter in real time and provides meaningful analytics that synthesize an accurate description of each topic.
Users interact with the system by ordering the identified trends using different criteria and submitting their own description for each trend. We discuss the motivation for trend detection over social media streams and the challenges that lie therein. We then describe our approach to trend detection, as well as the architecture of TwitterMonitor. Finally, we lay out our demonstration scenario. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Detection and Spread of Topics <s> Hashtags are used in Twitter to classify messages, propagate ideas and also to promote specific topics and people. In this paper, we present a linguistic-inspired study of how these tags are created, used and disseminated by the members of information networks. We study the propagation of hashtags in Twitter grounded on models for the analysis of the spread of linguistic innovations in speech communities, that is, in groups of people whose members linguistically influence each other. Differently from traditional linguistic studies, though, we consider the evolution of terms in a live and rapidly evolving stream of content, which can be analyzed in its entirety. In our experimental results, using a large collection crawled from Twitter, we were able to identify some interesting aspects -- similar to those found in studies of (offline) speech -- that led us to believe that hashtags may effectively serve as models for characterizing the propagation of linguistic forms, including: (1) the existence of a "preferential attachment process", that makes the few most common terms ever more popular, and (2) the relationship between the length of a tag and its frequency of use. The understanding of formation patterns of successful hashtags in Twitter can be useful to increase the effectiveness of real-time streaming search algorithms. 
<s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Detection and Spread of Topics <s> We present a novel topic modelling-based methodology to track emerging events in microblogs such as Twitter. Our topic model has an in-built update mechanism based on time slices and implements a dynamic vocabulary. We first show that the method is robust in detecting events using a range of datasets with injected novel events, and then demonstrate its application in identifying trending topics in Twitter. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Detection and Spread of Topics <s> Social networks play a fundamental role in the diffusion of information. However, there are two different ways of how information reaches a person in a network. Information reaches us through connections in our social networks, as well as through the influence external out-of-network sources, like the mainstream media. While most present models of information adoption in networks assume information only passes from a node to node via the edges of the underlying network, the recent availability of massive online social media data allows us to study this process in more detail. We present a model in which information can reach a node via the links of the social network or through the influence of external sources. We then develop an efficient model parameter fitting technique and apply the model to the emergence of URL mentions in the Twitter network. Using a complete one month trace of Twitter we study how information reaches the nodes of the network. We quantify the external influences over time and describe how these influences affect the information adoption. We discover that the information tends to "jump" across the network, which can only be explained as an effect of an unobservable external influence on the network. 
We find that only about 71% of the information volume in Twitter can be attributed to network diffusion, and the remaining 29% is due to external events and factors outside the network. <s> BIB005
|
Topics are identified using: (a) hashtags of microblogs like Twitter (ex: BIB003 ), (b) bursty keyword identification (ex: BIB001 and BIB002 ), and (c) probability distributions of latent concepts over keywords in user-generated content (ex: BIB004 ). Bursty topics are often treated as trending topics for modeling and analysis. The shortcomings in the topic-detection literature appear to be the following. Consideration of social influence: Literature exploring the impact of influence on the emergence of topics leaves many questions open. A better understanding is needed of whether users having general and topic-specific influence create long-lasting topics and high information outreach. How do structures such as communities emerge from social connections? What is the role of influence around topics there? Do topics created by different influencers tend to spread together or compete with each other? What is the social relationship of influencers in such settings? Managing topic complexity along with scale of detection: Hashtags and bursty keywords, two of the popular methods to identify topics/trends, often represent simple single-word concepts. These are often not disambiguated, leading to information loss. For instance, #IITDelhi and #IITDelhiIndia are conceptually the same "topics" (or trends), and yet are mostly treated as different topics in the literature. No work unifies such concepts automatically (BIB003 unifies them manually). Algorithms that detect topics as probability distributions over n-gram concept sets do not scale well enough to quickly cover a large fraction of social network messages. Identifying complex topics fast and at scale, while representing them without information loss, needs research focus. Information-rich multimedia data analysis: There is room to improve the state of the art of topic detection by considering not just text but also other kinds of inputs, such as images and videos, for detecting topics of interest and thereby conducting analyses.
One could also consider the commonalities of the types of resources shared, such as the objects that the URLs shared by the users point to, for topic detection. The existing literature has not explored this. Consideration of the state of the social network: Topics may not necessarily emerge from external events. Topics might get created because of the state that a given social network is already in. This is not yet explored in the literature. In such settings, the state of the social network can be determined by the prior set of topics, ongoing discussions, the set of participants, their social relationships and other relevant attributes, and be filtered via aspects such as geographies and communities. Defining Discussions: The literature mostly assumes that a microblog discussion is nothing but a topic (such as a Twitter hashtag) being mentioned by members of a social network, without attempting to define discussions and validate any such definition. Some research works, such as ], attempt to define discussions using message clustering and temporal filters. However, attention is clearly required to better define discussions, and to justify such definitions. The closed-world assumption: The literature usually treats topic lifecycle and information diffusion as incidents internal to given social networks, as a closed world. However, a preliminary study by BIB005 shows a significant impact of external information sources on information diffusion. This necessitates a deeper study of external impact on information diffusion, and exploring the validity of the closed-world assumption that most of the literature makes.
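Returning to the topic-complexity gap noted above (hashtags like #IITDelhi vs. #IITDelhiIndia being treated as distinct topics), one hypothetical way to unify near-duplicate hashtags, purely illustrative and not an algorithm from the surveyed works, is normalization plus prefix matching:

```python
# Illustrative sketch only: group hashtags that likely denote the same
# concept by case-normalizing them and merging prefix-related tags.

def normalize(tag):
    return tag.lstrip("#").lower()

def unify(tags):
    groups = {}
    # Process shorter tags first so they become group keys for longer variants.
    for tag in sorted(tags, key=lambda t: len(normalize(t))):
        norm = normalize(tag)
        for key in groups:
            if norm.startswith(key) or key.startswith(norm):
                groups[key].append(tag)
                break
        else:
            groups[norm] = [tag]
    return list(groups.values())

print(unify(["#IITDelhi", "#iitdelhiindia", "#Flu", "#FluSeason"]))
```

A real system would of course need stronger signals (co-occurrence, semantics) than prefixes alone; this merely illustrates the kind of unification the literature lacks.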
|
SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> It is shown how globally stable model reference adaptive control systems may be designed when one has access to only the plant's input and output signals. Controllers for single input-single output, nonlinear, nonautonomous plants are developed based on Lyapunov's direct method and the Meyer-Kalman-Yacubovich lemma. Derivatives of the plant output are not required, but are replaced by filtered derivative signals. An augmented error signal replaces the error normally used, which is defined as the difference between the model and plant outputs. However, global stability is assured in the sense that the normally used error signal approaches zero asymptotically. <s> BIB001 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> The paper considers the control of an unknown linear time-invariant plant using Direct and Indirect Model Reference Adaptive Control. Employing a specific controller structure and the concept of positive realness, adaptive laws are derived using Indirect Control which are identical to those obtained in the case of Direct Control. The stability questions that arise are also shown to be the same. Simulation results using the new scheme are presented for the control of both stable and unstable plants. <s> BIB002 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> This paper establishes global convergence for a class of adaptive control algorithms applied to discrete time multi-input multi-output deterministic linear systems. It is shown that the algorithms will ensure that the system inputs and outputs remain bounded for all time and that the output tracking error converges to zero. 
<s> BIB003 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> Progress in theory and applications of adaptive control is reviewed. Different approaches are discussed with particular emphasis on model reference adaptive systems and self-tuning regulators. Techniques for analysing adaptive systems are discussed. This includes stability and convergence analysis. It is shown that adaptive control laws can also be obtained from stochastic control theory. Issues of importance for applications are covered. This includes parameterization, tuning, and tracking, as well as different ways of using adaptive control. An overview of applications is given. This includes feasibility studies as well as products based on adaptive techniques. <s> BIB004 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> 1. Introduction.- 2. Continuous-time identifiers and adaptive observers.- 3. Discrete-time identifiers.- 4. Robustness improvement of identifiers and adaptive observers.- 5. Adaptive control in the presence of disturbances.- 6. Reduced-order adaptive control.- 7. Decentralized adaptive control.- 8. Reduced order-decentralized adaptive control.- Corrections. <s> BIB005 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> This unified survey focuses on linear discrete-time systems and explores the natural extensions to nonlinear systems. In keeping with the importance of computers to practical applications, the authors emphasize discrete-time systems. Their approach summarizes the theoretical and practical aspects of a large class of adaptive algorithms.1984 edition. 
<s> BIB006 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> An algorithm is proposed for self-tuning optimal fixed-lag smoothing or filtering for linear discrete-time multivariable processes. A z -transfer function solution to the discrete multivariable estimation problem is first presented. This solution involves spectral factorization of polynomial matrices and assumes knowledge of the process parameters and the noise statistics. The assumption is then made that the signal-generating process and noise statistics are unknown. The problem is reformulated so that the model is in an innovations signal form, and implicit self-tuning estimation algorithms are proposed. The parameters of the innovation model of the process can be estimated using an extended Kalman filter or, alternatively, extended recursive least squares. These estimated parameters are used directly in the calculation of the predicted, smoothed, or filtered estimates. The approach is an attempt to generalize the work of Hagander and Wittenmark. <s> BIB007 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> We propose a new model reference adaptive control algorithm and show that it provides the robust stability of the resulting closed-loop adaptive control system with respect to unmodeled plant uncertainties. The robustness is achieved by using a relative error signal in combination with a dead zone and a projection in the adaptive law. The extra a priori information needed to design the adaptive law, are bounds on the plant parameters and an exponential bound on the impulse response of the inverse plant transfer function. 
<s> BIB008 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> Stability theory simple adaptive systems adaptive observers the control problem persistent excitation error models robust adaptive control the control problem - relaxation of assumptions multivariable adaptive systems applications of adaptive control. <s> BIB009 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> 1. Introduction. Control System Design Steps. Adaptive Control. A Brief History. 2. Models for Dynamic Systems. Introduction. State-Space Models. Input/Output Models. Plant Parametric Models. Problems. 3. Stability. Introduction. Preliminaries. Input/Output Stability. Lyapunov Stability. Positive Real Functions and Stability. Stability of LTI Feedback System. Problems. 4. On-Line Parameter Estimation. Introduction. Simple Examples. Adaptive Laws with Normalization. Adaptive Laws with Projection. Bilinear Parametric Model. Hybrid Adaptive Laws. Summary of Adaptive Laws. Parameter Convergence Proofs. Problems. 5. Parameter Identifiers and Adaptive Observers. Introduction. Parameter Identifiers. Adaptive Observers. Adaptive Observer with Auxiliary Input. Adaptive Observers for Nonminimal Plant Models. Parameter Convergence Proofs. Problems. 6. Model Reference Adaptive Control. Introduction. Simple Direct MRAC Schemes. MRC for SISO Plants. Direct MRAC with Unnormalized Adaptive Laws. Direct MRAC with Normalized Adaptive Laws. Indirect MRAC. Relaxation of Assumptions in MRAC. Stability Proofs in MRAC Schemes. Problems. 7. Adaptive Pole Placement Control. Introduction. Simple APPC Schemes. PPC: Known Plant Parameters. Indirect APPC Schemes. Hybrid APPC Schemes. Stabilizability Issues and Modified APPC. Stability Proofs. Problems. 8. Robust Adaptive Laws. Introduction. Plant Uncertainties and Robust Control. 
Instability Phenomena in Adaptive Systems. Modifications for Robustness: Simple Examples. Robust Adaptive Laws. Summary of Robust Adaptive Laws. Problems. 9. Robust Adaptive Control Schemes. Introduction. Robust Identifiers and Adaptive Observers. Robust MRAC. Performance Improvement of MRAC. Robust APPC Schemes. Adaptive Control of LTV Plants. Adaptive Control for Multivariable Plants. Stability Proofs of Robust MRAC Schemes. Stability Proofs of Robust APPC Schemes. Problems. Appendices. Swapping Lemmas. Optimization Techniques. Bibliography. Index. License Agreement and Limited Warranty. <s> BIB010
|
First attempts at using adaptive control techniques were developed during the sixties and were based on intuitive and even ingenious ideas, yet they ended in failure, mainly because at the time there was not much knowledge of stability analysis with nonstationary parameters. Modern methods of stability analysis that had been developed by Lyapunov at the end of the 19th century were not broadly known, much less used, in the West. After the initial problems with adaptive control techniques of the sixties, stability analysis became a center point in new developments related to adaptive control. The participation of some of the leading researchers in the control community at the time, such as Narendra, Landau, Åström, Kokotovic, Goodwin, Morse, Grimble and many others, added a remarkable contribution to the better modeling and understanding of adaptive control methodologies BIB001 , (vanAmerongen and TenCate, 1975) , , , , , , BIB002 , , BIB009 , BIB003 , BIB006 , BIB004 , (Astrom and Wittenmark, 1989) , BIB005 , BIB007 , (Mareels, 1984) , BIB008 , , , BIB010 , (Bitmead, Gevers and Wertz, 1990) , , . New tools and techniques were developed and used, and they finally led to successful proofs of stability, mainly based on the Lyapunov stability approach. The standard methodology was the Model Reference Adaptive Control (MRAC) approach which, as its name states, basically requires the possibly "bad" plant to follow the behavior of a "good" Model Reference
ẋ_m(t) = A_m x_m(t) + B_m u_m(t)
y_m(t) = C_m x_m(t)
The control signal that feeds the plant is a linear combination of the Model state variables and the Model input
u_p(t) = K_x x_m(t) + K_u u_m(t)
If the plant parameters were fully known, one could compute the corresponding controller gains that would force the plant to asymptotically follow the Model, or
x_p(t) → x_m(t)
and correspondingly
y_p(t) → y_m(t)
Because the entire plant state ultimately behaves exactly as the model state, MRAC is sometimes interpreted as Pole-Zero placing.
However, in this report we relate to MRAC only with respect to its main aim, namely, that the plant output should follow the desired behavior represented by the model output. When the plant parameters are not (entirely) known, one is naturally led to use adaptive control gains. The basic idea is that the plant is fed a control signal that is a linear combination of the model state through some gains. If all gains were correct, the entire plant state vector would reproduce the model state vector. The resulting "tracking error"

e_y(t) = y_m(t) − y_p(t)

can be monitored and used to generate adaptive gains. The basic idea of the adaptation is as follows: assume that one component of the control signal fed to the plant comes from the variable x_mi through the gain k_xi. If the gain is not perfectly correct, this component contributes to the tracking error, and therefore the tracking error and the component x_mi are correlated. This correlation is used to generate the adaptive gain

k̇_xi(t) = γ_i e_y(t) x_mi(t)

where γ_i is a parameter that affects the rate of adaptation. The adaptation continues until the correlation diminishes and ultimately vanishes; the gain derivative then tends to zero and the gain itself is (hopefully) supposed to ultimately reach a constant value. In vectorial form,

K̇_x(t) = Γ_x e_y(t) x_m^T(t)

As Figure 1 below shows, there are various other components that can be added to improve the performance of the MRAC system, such as

K̇_u(t) = Γ_u e_y(t) u_m^T(t)

so the total control signal is

u_p(t) = K_x(t) x_m(t) + K_u(t) u_m(t)

Many other elements, such as adaptive observers, etc., can be added to this basic MRAC scheme and can be found in the references cited above, yet here we want to pursue just the basic Model Reference idea. This approach was able to generate rigorous proofs of stability showing that not only the tracking error but even the entire state error asymptotically vanishes. This result implied that the plant behavior would asymptotically reproduce the stable model behavior and ultimately achieve the desired performance represented by the ideal Model Reference.
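The correlation-based adaptation can be sketched in a few lines; the scalar plant, model, and adaptation rate below are illustrative assumptions, with the plant gains unknown to the controller:

```python
import numpy as np

# Scalar plant and model (illustrative values); k_x and k_u start wrong (zero)
# and are adapted from the correlation between tracking error and model signals.
a_p, b_p = -2.0, 1.0
a_m, b_m = -1.0, 1.0
gamma = 5.0                            # adaptation rate

dt, T = 1e-3, 50.0
x_p, x_m = 1.0, 0.0
k_x = k_u = 0.0
e = x_m - x_p
for i in range(int(T / dt)):
    u_m = np.sin(i * dt)               # persistently exciting command
    e = x_m - x_p                      # tracking error (output = state here)
    u_p = k_x * x_m + k_u * u_m
    x_p += dt * (a_p * x_p + b_p * u_p)
    x_m += dt * (a_m * x_m + b_m * u_m)
    k_x += dt * gamma * e * x_m        # k_x' = gamma * e * x_m
    k_u += dt * gamma * e * u_m        # k_u' = gamma * e * u_m

print(abs(e))   # the tracking error becomes small as the gains adapt
```

With the sinusoidal command providing excitation, the gains drift toward the ideal values of the previous sketch and the correlation, and hence the gain derivatives, die out.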
In particular, the Lyapunov stability technique revealed the prior conditions that had to be satisfied in order to guarantee stability and allowed getting rigorous proofs of stability of the adaptive control system. Because, along with the dynamics of the state or the state error, adaptive control systems also introduce the adaptive gain dynamics, the positive definite quadratic Lyapunov function had to contain both the errors and the adaptive gains and usually had the form

V(t) = e_x^T(t) P e_x(t) + tr{[K(t) − K̃] Γ^{−1} [K(t) − K̃]^T}

Here, K̃ is the set of ideal gains that could perform perfect model following if the parameters were known, and that the adaptive control gains are supposed to asymptotically reach. Yet, in spite of successful proofs of stability, very little use has been made of adaptive control techniques in practice. Therefore, we will first discuss some of the problems that are inherent to the classical MRAC approach and that are emphasized when one intends to use adaptive methods with such applications as large flexible space structures and similar large-scale systems. First, the fact that the entire plant state vector is supposed to follow the behavior of the model state vector immediately implies that the model is basically supposed to be of the same order as the plant. If this is not the case, various problems have been shown to appear, including total instability. As real-world plants are usually of very high order when compared with the nominal plant model, so-called "unmodeled dynamics" must inherently be considered in the context of this approach. The developers of adaptive control techniques were able to show that the adaptive system still demonstrates stability robustness in spite of the "unmodeled dynamics," yet to this end they required that the "unmodeled dynamics" be "sufficiently small." Furthermore, if any state variable of the Model Reference is zero, the corresponding adaptive gain is also zero.
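Returning to the Lyapunov function above, the mechanics of such a stability proof can be sketched as follows (a standard outline with simplified sign conventions; e_x denotes the state error, K̃ the ideal gains, and the SPR relations of the next section are assumed):

```latex
\begin{align*}
\dot e_x &= A\, e_x - B\,[K(t)-\tilde K]\, r(t), \qquad e_y = C\, e_x,\\
V &= e_x^T P\, e_x
   + \operatorname{tr}\!\left\{[K(t)-\tilde K]\,\Gamma^{-1}[K(t)-\tilde K]^T\right\},\\
\dot V &= e_x^T (P A + A^T P)\, e_x
   - 2\, e_y^T [K(t)-\tilde K]\, r
   + 2\operatorname{tr}\!\left\{[K(t)-\tilde K]\,\Gamma^{-1}\dot K^T(t)\right\}\\
&= -\, e_x^T Q\, e_x \;\le\; 0,
\end{align*}
```

where the last equality uses PB = C^T and PA + A^T P = −Q together with the adaptation law K̇ = Γ e_y r^T, whose trace term exactly cancels the cross term. Boundedness of the errors and gains, and the vanishing of e_x, then follow from standard Lyapunov arguments.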
Also, if the model reaches a steady state, some of the various adaptive gains lose their independence, and this point raises the need for some "persistent excitation" or "sufficient excitation." It should be emphasized that the need for sufficiently large Models, sufficiently small "unmodeled dynamics," and "sufficient excitation" appears even if one only intends to guarantee the mere stability of the plant, before even mentioning performance. Finally, when all these basic conditions are satisfied, the stability of the adaptive control could initially be proved only if the original plant was Strictly Passive (SP), which in LTI systems implies that its input-output transfer function is Strictly Positive Real (SPR). Passivity-like conditions appear in various forms in different presentations, so they deserve a special section.
SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY

Definition 1.
A linear time-invariant system with a state-space realization {A, B, C}, where A ∈ R^(n×n), B ∈ R^(n×m), C ∈ R^(m×n), with the m×m transfer function T(s) = C(sI − A)^(−1)B, is called "strictly passive (SP)" and its transfer function "strictly positive real (SPR)" if there exist two positive definite symmetric (PDS) matrices, P and Q, such that the following two relations are simultaneously satisfied:

P A + A^T P = −Q    (16)
P B = C^T    (17)

The relation between the strict passivity conditions (16)-(17) and the strict positive realness of the corresponding transfer function has been treated elsewhere BIB001 . Relation (16) is the common algebraic Lyapunov equation and shows that an SPR system is asymptotically stable. One can also show that conditions (16)-(17) imply that the system is strictly minimum-phase, yet simultaneous satisfaction of both conditions is far from being guaranteed even in stable and minimum-phase systems, and therefore the SPR condition seemed much too demanding. (Indeed, some colleagues in the general control community used to ask: if the system is already asymptotically stable and minimum-phase, why would one need adaptive controllers?) For a long time, the passivity condition was considered very restrictive (and rather obscure), and at some point the adaptive control community tried to drop it and do without it. The passivity condition was somewhat mitigated when it was shown that stability with adaptive controllers could be guaranteed even for the non-SPR system (1)-(2) if there exists a constant output feedback gain K_e (unknown and not needed for implementation) such that the fictitious closed-loop system with the system matrix

A_K = A − B K_e C

is SPR, namely, satisfies the passivity conditions (16)-(17). Because in this case the original system (1)-(2) is only separated by a simple constant output feedback from strict passivity, it was called "Almost Strictly Positive Real (ASPR)" or "Almost Strictly Passive (ASP)" BIB002 .
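The two SPR relations can be verified numerically. A small sketch for an assumed SISO example with transfer function (s+2)/((s+1)(s+2)); the realization below is an illustrative assumption:

```python
import numpy as np

# Controller-canonical realization of T(s) = (s+2)/(s^2+3s+2)  (assumed example)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[2.0, 1.0]])

# Condition (17), PB = C^T, fixes the last column of the symmetric P;
# the remaining free entry p11 is then chosen so that both P and Q are PD.
p11 = 8.0
P = np.array([[p11, 2.0],
              [2.0, 1.0]])
assert np.allclose(P @ B, C.T)        # (17) holds by construction

Q = -(P @ A + A.T @ P)                # condition (16)
p_eigs = np.linalg.eigvalsh(P)
q_eigs = np.linalg.eigvalsh(Q)
print(p_eigs.min() > 0 and q_eigs.min() > 0)   # True: this system is SPR
```

Here p11 was picked inside the interval (4, 12) that keeps both P and Q positive definite; sliding p11 outside that interval makes the check fail, illustrating how restrictive the simultaneous conditions can be.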
Note that such ASP systems are sometimes called "feedback passive" or "passifiable" BIB003 , (Fradkov and Hill, 1998) . However, as we will show that any stabilizable system is also passifiable via parallel feedforward, those systems that are only at the distance of a constant feedback gain from Strict Passivity deserve a special name. At the time, this "mitigation" of the passivity conditions did not make a great impression, because it was still not clear what systems would satisfy the new conditions. (Some even claimed that if SPR seemed to be another name for the void class of systems, the "new" class of ASPR was only adding the boundary.) Nonetheless, some ideas were available. Because a constant output feedback gain was supposed to stabilize the system, it was apparent that the original plant was not required to be stable. Also, because it was known that SPR systems are minimum-phase and that their product CB is Positive Definite Symmetric (PDS), it was intuitive to assume that minimum-phase systems with a Positive Definite Symmetric CB were natural ASPR candidates. Indeed, simple root-locus techniques were sufficient to prove this result in SISO systems, and many examples of minimum-phase MIMO systems with PDS CB product were shown to be ASPR BIB002 . However, it was not clear how many such MIMO systems actually were ASPR. Because the ASPR property can be stated as a simple condition and because it is the main condition needed to guarantee stability with adaptive controllers, it is useful to present here the ASPR theorem for general multi-input-multi-output systems:

Theorem 1. Any linear time-invariant system with the state-space realization {A, B, C}, where A ∈ R^(n×n), B ∈ R^(n×m), C ∈ R^(m×n), with the m×m transfer function T(s) = C(sI − A)^(−1)B, that is minimum-phase and where the matrix product CB is PDS, is "almost strictly passive (ASP)" and its transfer function "almost strictly positive real (ASPR)."
Although the original plant is not SPR, a (fictitious) closed-loop system satisfies the SPR conditions; in other words, there exist two positive definite symmetric (PDS) matrices, P and Q, and a positive definite gain K_e such that the following two relations are simultaneously satisfied:

P(A − B K_e C) + (A − B K_e C)^T P = −Q
P B = C^T

As a matter of fact, a proof of Theorem 1 had been available in the Russian literature since 1976, yet it was not known in the West, where many other works have later independently rediscovered, reformulated, and further developed the idea (see BIB004 and references therein for a brief history and for a simple, direct, algebraic proof of this important statement). Even as late as 1999, this simple ASPR condition was still presented as some algebraic condition that might look obscure to the control practitioner. On the other hand, a later work managed to add an important contribution and emphasize the special property of ASPR systems by proving that if a system cannot be made SPR via constant output feedback, no dynamic feedback can render it SPR. Theorem 1 has thus managed to explain the rather obscure passivity conditions with the help of new conditions that can be understood by control practitioners. It is useful to notice an important property that may make an ASPR system a good candidate for stable adaptive control: if a plant is minimum-phase and its input-output matrix product CB is Positive Definite Symmetric (PDS), it is stabilizable via some static Positive Definite (PD) output feedback. Furthermore, if the output feedback gain is increased beyond some minimal value, the system remains stable even if the gain increase is nonstationary. The required positivity of the product CB could be expected, as it seems to be a generalization of the sign of the transfer function that allows using negative feedback in SISO systems.
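The high-gain stability property behind Theorem 1 is easy to probe numerically. The open-loop-unstable, minimum-phase SISO example below (zero at −2, CB = 1 > 0) is an illustrative assumption:

```python
import numpy as np

# Unstable minimum-phase plant: T(s) = (s+2)/(s^2+s-2), poles {1, -2}, zero at -2
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[2.0, 1.0]])
assert (C @ B).item() > 0             # CB positive (scalar case of PDS)

def stable(k):
    """True if A - k*B*C has all eigenvalues in the open left half-plane."""
    return np.linalg.eigvals(A - k * (B @ C)).real.max() < 0

print(stable(0.0))                    # False: the open loop is unstable
# Beyond some minimal gain the loop stays stable for ALL larger gains
print(all(stable(k) for k in np.geomspace(1.5, 1e6, 60)))   # True
```

For this plant the characteristic polynomial of the closed loop is s^2 + (1+k)s + (2k−2), so any k > 1 stabilizes, and increasing the gain further only pushes the poles toward the zero at −2 and toward −∞, which is exactly the high-gain stability that ASPR theory exploits.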
However, although at the time it seemed absolutely necessary for the ASPR conditions, the required symmetry of CB proved rather difficult to fulfill in practice, in particular in adaptive control systems where the plant parameters are not known. After many attempts that ended in failure, a recent publication has managed to eliminate the need for a symmetric CB. First, it was easy to observe that the Lyapunov function remains positive definite if the gain term is rewritten as follows:

tr{S [K(t) − K̃] Γ^{−1} [K(t) − K̃]^T S^T}

where S is some constant nonsingular matrix.
COUNTEREXAMPLES TO MODEL REFERENCE ADAPTIVE CONTROL
In the examples of BIB001 , a 2×2 stable plant with CB positive definite is required to follow the behavior of a stable model of the same order. In fact, both the plant and the model have the same diagonal system matrices with negative eigenvalues, and only the input-output matrix differentiates between the two. The plant, which appears in a 2D adaptive robotic visual servoing problem with an uncalibrated camera, is defined by the system matrices given in BIB001 . It is shown there that standard MRAC systems become unstable even though the MRAC system was supposed to be stable: there was no "unmodeled dynamics," there was "sufficient excitation," and the presumably "sufficient" passivity conditions were also satisfied. We note that BIB001 shows ways to avoid the problem and, using various kinds of prior knowledge, other solutions have also been proposed.
SIMPLE ADAPTIVE CONTROL (SAC), OR THE SIMPLIFIED APPROACH TO MODEL REFERENCE ADAPTIVE CONTROL
Various kinds of additional prior knowledge have been used, and many solutions and additions have been proposed to overcome some of the various drawbacks of the basic MRAC algorithm. However, this paper sticks to the very basic idea of Model Following. The next sections will show that those basically ingenious adaptive control ideas and the systematic stability analysis they introduced finally led to adaptive control systems that can guarantee stability robustness along with superior performance when compared with alternative, non-adaptive, methodologies. In this section we will first assume that at least one of the passivity conditions presented above holds and will deal with a particular methodology that managed to eliminate the need for knowing the plant order and therefore can mitigate the problems related to "unmodeled dynamics" and "persistent excitation." Subsequent sections will then extend the feasibility of the methodology to those real-world systems that do not inherently satisfy the passivity conditions. The beginning of the alternative adaptive control approach can be found in the intense activity at Rensselaer (RPI) during 1978-1983. At that time, such researchers as Kaufman, Sobel, Barkana, Balas, Wen, and others (Sobel, Kaufman and Mabius, 1982) , BIB001 , BIB002 , were trying to use customary adaptive control techniques with large-order MIMO systems, such as planes, large flexible structures, etc. It did not take long to realize that it was impossible to even think of controllers of the same order as the plant, or even of the order of a "nominal" plant. Besides, those were inherently MIMO systems, while customary MRAC techniques at the time were only dealing with SISO systems. Because the very reduced-order model could no longer be considered even close to the plant, one could not consider full model state following, so this aim was naturally replaced by output model following.
Furthermore, as the (possibly unstable) large-order plant state could not be compared with the reduced-order model state, the model could no longer be thought to guarantee asymptotic stability of the plant. In order to allow stability of the reduced-order adaptive control system, new adaptive control components that were not deemed necessary by the customary MRAC had to be considered. We will show that this "small" addition had an astonishing effect on the successful application of the modified MRAC. In brief, as it was known that stability of adaptive control systems required that the plant be stabilizable via constant gain feedback, the natural question was why not use this direct output feedback. Following this idea, an additional adaptive output feedback term was added to the adaptive algorithm, which otherwise is very similar to the usual MRAC algorithms, namely,

u_p(t) = K_e(t) e_y(t) + K_x(t) x_m(t) + K_u(t) u_m(t) = K(t) r(t)    (27)

where we denote the reference vector

r^T(t) = [e_y^T(t)  x_m^T(t)  u_m^T(t)]    (28)

Subsequently in this paper, it will be shown that the new approach uses the model as a Command Generator, and therefore it is sometimes called Adaptive Command Generator Tracker. Because it also uses low-order models and controllers, it was ultimately called Simple Adaptive Control (SAC). Before we discuss the differences between the new SAC approach and the classical MRAC, it is useful to first dwell on the special role of the direct output feedback term. If the plant parameters were known, one could choose an appropriate gain K_e and stabilize the plant via the constant output feedback control

u_p(t) = −K_e y_p(t)

As we already mentioned above, it was known that an ASPR system (or, as we now know, a minimum-phase plant with appropriate CB product) can be stabilized by a positive definite output feedback gain. Furthermore, it was known that ASPR systems are high-gain stable, so stability of the plant is maintained if the gain value happens to go arbitrarily high beyond some minimal value.
Whenever one may have sufficient prior knowledge to assume that the plant is ASPR, yet does not have sufficient knowledge to choose a good control gain, one can use the output itself to generate the adaptive gain by the rule

K̇_e(t) = Γ y_p(t) y_p^T(t)

and the control

u_p(t) = −K_e(t) y_p(t)

In the more general case, when the plant is required to follow the output of the model, one would use the tracking error e_y(t) = y_m(t) − y_p(t) to generate the adaptive gain

K̇_e(t) = Γ e_y(t) e_y^T(t)

and the control

u_p(t) = K_e(t) e_y(t)

We will show how this adaptive gain addition is able to avoid some of the most difficult inherent problems related to the standard MRAC and to add robustness to its stability. Although it was developed as a natural compensation for the low-order models and was successfully applied at Rensselaer as just one element of the Simple (Model Reference) Adaptive Control methodology, it is worth mentioning that, similarly to the first proof of the ASPR property, the origins of this specific adaptive gain can again be found in early work of Fradkov in the Russian literature. Besides, later on this gain received a second birth and became very popular after 1983 in the context of adaptive control "when the sign of the high-frequency gain is unknown." In this context, and after a very rigorous mathematical treatment, it also received a new name and is sometimes called the Byrnes-Willems gain. Its useful properties have been thoroughly researched, and some may even call this one adaptive gain Simple Adaptive Control, as they were apparently able to show that it can do "almost" everything (Ilchmann, Owens and Pratzel-Wolters, 1987) , BIB003 . Indeed, if an ASPR system is high-gain stable, it seems very attractive to let the adaptive gain increase to even very high values in order to achieve good performance, represented by small tracking errors.
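A sketch of this single adaptive output feedback gain stabilizing an unstable ASPR plant, with no plant knowledge used by the controller (the plant and parameter values are illustrative assumptions):

```python
import numpy as np

# Unstable but ASPR plant (minimum-phase, CB > 0); stabilized by u = -K_e*y
# with the adaptive rule K_e' = gamma * y^2.
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[2.0, 1.0]])

dt, T, gamma = 1e-3, 20.0, 1.0
x = np.array([[1.0], [0.0]])
k_e = 0.0
for _ in range(int(T / dt)):
    y = (C @ x).item()
    u = -k_e * y
    x = x + dt * (A @ x + B * u)
    k_e += dt * gamma * y * y      # the gain only grows while y != 0

print(k_e, abs((C @ x).item()))    # k_e settles above the minimal stabilizing gain
```

The gain climbs until it exceeds the (unknown) minimal stabilizing value, the output then decays, and the adaptation consequently stops with k_e at a finite constant value, exactly the behavior the high-gain stability of ASPR systems permits.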
However, although at first thought one may find high gains very attractive, a second thought and some more engineering experience with real-world applications make it clear that high gains may lead to saturation and may excite vibrations and other disturbances. These disturbances may not have appeared in the nominal plant model used for design and may not be felt in the real-world plant unless one uses those very high gains. Furthermore, as the motor or plant dynamics always require an input signal in order to keep moving and tracking the desired trajectory, it is quite clear that the tracking error cannot be zero or very small unless one uses very high gains indeed. Designers of tracking systems know that feedforward signals that come from the desired trajectory can help achieve low-error or even perfect tracking without requiring dangerously high gains (and, correspondingly, exceedingly high bandwidth) in the closed loop. In the non-adaptive world, feedforward can be problematic because, unlike the feedback loop, any errors in the feedforward parameters are directly and entirely transmitted to the output tracking error. Here, the adaptive control methodology can demonstrate an important advantage over non-adaptive techniques, because the feedforward parameters are finely tuned by the very tracking error they intend to minimize. The issues discussed here and the need for feedforward again show the intrinsic importance of the basic Model Following idea, and again point to the need for a model. However, the difference between the model used by SAC and the Model Reference used by the standard MRAC is that this time the so-called "Model" does not have to reproduce the plant beyond incorporating the desired input-output behavior of the plant.
At the extreme, it could be just a first-order pole that performs a reasonable step response, or otherwise a higher-order system, just sufficiently high to generate the desired trajectory. As it generates the command, this "model" can also be called a "Command Generator" (Broussard and Berry, 1978) and the corresponding technique "Command Generator Tracker (CGT)." In summary, the adaptive control system monitors all available data, namely, the tracking error, the model states, and the model input command, and uses them to generate the adaptive control signal (Figure 2), which using the concise notations (27)-(28) gives

K̇(t) = Γ e_y(t) r^T(t)

and the control

u_p(t) = K(t) r(t)

It is worth noting that, initially, SAC seemed to be a very modest alternative to MRAC, with apparently very modest aims, that also seemed to be very restricted by new conditions. Although at the time it probably was the only adaptive technique that could have been used with MIMO systems and with such large systems as large flexible structures, and therefore was quite immediately adopted by many researchers and practitioners, the SAC approach got a cold reception and for a long time has been largely ignored by the mainstream adaptive control community. In retrospect (besides some lack of good marketing), at the time this cold reception had some good reasons. Although it was called "simple" because it is quite simple to implement, the theory around SAC was not simple, and many tools that were needed to support its qualities, and that slowly and certainly revealed themselves over the years, were still missing. It subsequently required not only developing new analysis tools but also, probably more importantly, better expertise at understanding their implications before they could be properly used, so that they ultimately managed to highlight the very useful properties of SAC.
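Putting the pieces together, a minimal end-to-end SAC sketch combining the adaptive gain K(t) and the reference vector r(t), on an assumed stable, minimum-phase plant tracking a first-order command generator (all numerical values are illustrative assumptions):

```python
import numpy as np

# Minimum-phase plant with CB > 0: T(s) = (s+2)/((s+1)(s+3))  (assumed example)
A = np.array([[0.0, 1.0],
              [-3.0, -4.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[2.0, 1.0]])

# First-order "command generator" model: x_m' = -x_m + u_m, y_m = x_m
a_m, b_m, c_m = -1.0, 1.0, 1.0

dt, T, gamma = 1e-3, 200.0, 2.0
x = np.zeros((2, 1))
x_m = 0.0
K = np.zeros(3)                    # adaptive gains [K_e, K_x, K_u]
e_y = 0.0
for _ in range(int(T / dt)):
    u_m = 1.0                      # step command
    y_p = (C @ x).item()
    y_m = c_m * x_m
    e_y = y_m - y_p
    r = np.array([e_y, x_m, u_m])  # reference vector of (28)
    u = K @ r                      # control of (27)
    K = K + dt * gamma * e_y * r   # K' = gamma * e_y * r^T
    x = x + dt * (A @ x + B * u)
    x_m += dt * (a_m * x_m + b_m * u_m)

print(abs(e_y))                    # tracking error is driven toward zero
```

Note that the first-order model carries no information about the second-order plant; the adaptive feedforward gains K_x and K_u settle at whatever values make the plant output follow the command generator, while K_e supplies the stabilizing error feedback.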
Finally, based on developments that have spanned more than 25 years, we will attempt to show that SAC is in fact the stable MRAC, because right from the beginning it avoids some difficulties that are inherent in the standard MRAC. First, it is useful to notice that because there is no attempt at comparison between the order or the states of the plant and the model, there is no "unmodeled dynamics." Also, because the stability of the system basically rests on the direct output feedback adaptive gain, the model is immaterial in this context, and of course there is no need to mention "sufficient excitation." Besides, as we will later show and as has been observed by almost all practitioners who have tried to use it, SAC proved to be good control. While the standard MRAC may have to explain why it does not work when it is supposed to work, SAC may have to explain why it does work even in cases when the (sufficient) conditions are not fully satisfied. Although, similarly to any nonstationary control, in adaptive control it is very difficult to find the very minimal conditions that would keep the system stable, it can be shown why SAC may demonstrate some robustness even when the basic sufficient conditions are not satisfied. We note that this last point is just an observation based on experience, yet we must also note that in those cases when the basic conditions are fulfilled, they are always sufficient to guarantee the stability of the adaptive control system, with no exceptions and no counterexamples. In this respect, one can show that the MRAC "counterexamples" become just trivial, stable, and well-behaved examples for SAC.
SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> PROOF OF STABILITY OF SIMPLE ADAPTIVE CONTROL <s> Model reference adaptive control procedures that do not require explicit parameter identification are considered for large structural systems. Although such applications have been shown to be feasible for mu Hi variable systems, provided there exists a feedback gain matrix which makes the resulting input/output transfer function strictly positive real, it is now shown that this constraint is overly restrictive and that only positive realness is required. Subsequent consideration of a simply supported beam shows that if actuators and sensors are collocated, then the positive realness constraint will be satisfied and the model reference adaptive control will then indeed be suitable for velocity following when only velocity sensors are available and for both position and velocity following when velocity plus scaled position outputs are measured. In both cases, all states, regardless of system dimension, are guaranteed to be stable. HE need for parameter estimation and/or adaptive control of any system arises because of ignorance of the system's internal structure and critical parameter values, as well as changing control regimes. A large structural system (LSS) is substantially more susceptible to these problems. The most crucial problem of adaptive control of large structures is that the plant is very large or infinite-dime nsional and, consequently, the adaptive controller must be based on a loworder model of the system in order to be implemented with an on-line/onboard computer. However, any controller based on a reduced-order model (ROM) must operate in closed loop with the actual system; thus it interacts not only with the ROM but also with the residual subsystem (through the spillover and model error terms). 
One particular adaptive algorithm that seems applicable to LSS is the direct (or implicit) model reference-based approach taken by Sobel et al. In particular, using command generator tracker (CGT) theory, with Lyapunov stability-based design procedures, they were able to develop for step commands a model reference adaptive control (MRAC) algorithm that, without the need for parameter identification, forced the error between plant and model (which need not be of the same order as the plant) to approach zero, provided that certain plant/model structural conditions are satisfied. Such an adaptation algorithm is very attractive for the control of large structural systems since it eliminates the need for explicitly identifying the large number of modes that must be modeled, and, furthermore, eliminates the spillover effects. Relative to the conditions that must be satisfied, it was shown that asymptotic stability results provided that the plant input/output transfer matrix is strictly positive real for some feedback gain matrix and provided that there exists a bounded solution to the corresponding deterministic CGT problem. Such a solution, however, does not always exist for structural problems with velocity sensors and, furthermore, the transfer matrix for structural systems is positive real (not strictly positive real) for collocated actuators and rate sensors. <s> BIB001 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> PROOF OF STABILITY OF SIMPLE ADAPTIVE CONTROL <s> Recent publications have presented successful implementations of simple direct adaptive control techniques in various applications. However, they also expose the fact that the convergence of the adaptive gains has remained uncertain. The gains may not converge to the ideal constant control gains predicted by the underlying linear time-invariant system considerations.
As those prior conditions that were also needed for stability may not hold, this conclusion may raise doubts about the robustness of the adaptive system. This paper intends to show that the adaptive control performs perfect tracking even when the linear time-invariant solution does not exist. It is shown that the adaptation performs a ‘steepest descent’ minimization of the errors, ultimately ending with the appropriate set of control gains that fit the particular input command and initial conditions. The adaptive gains do asymptotically reach an appropriate set of bounded constant ideal gain values that solve the problem at hand. Copyright © 2004 John Wiley & Sons, Ltd. <s> BIB002
|
One can easily prove that the WASP conditions are sufficient for stability using just the simple adaptive output feedback gain (32). However, in order to avoid any misunderstandings related to the role of the unknown matrix W, here we chose to present a rigorous proof of stability for the general output model tracking case. As usual in adaptive control, one first assumes that the underlying fully deterministic output model tracking problem is solvable. A recent publication BIB002 shows that if the Model Reference uses a step input to generate the desired trajectory, the underlying tracking problem is always solvable. If, instead, the model input command is itself generated by an unknown system of order n_u, the model is required to be sufficiently large to accommodate this command BIB001. We assume that the plant to be controlled is minimum-phase and that the product CB is positive definite and diagonalizable, though not necessarily symmetric. As we showed, the plant is WASP according to Definition 2, so it satisfies conditions (22)-(23). Under these assumptions one can use the Lyapunov function (24). Differentiating (24) and using the W-passivity relations finally leads to the derivative of the Lyapunov function given in (40) (Appendix A). One can see that V̇(t) in (40) is negative definite with respect to e_x(t), yet only negative semidefinite with respect to the entire state space {e_x(t), K(t)}. A direct result of Lyapunov stability theory is that all dynamic values are bounded. According to LaSalle's Invariance Principle, all state variables and adaptive gains are bounded and the system ultimately ends within the domain defined by V̇(t) ≡ 0. Because V̇(t) is negative definite in e_x(t), the system thus ends with e_x(t) ≡ 0, which in turn implies e_y(t) ≡ 0. In other words, the adaptive control system demonstrates asymptotic convergence of the state and output errors and boundedness of the adaptive gains.
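As a qualitative companion to this argument, the following minimal sketch simulates a scalar SAC loop. All numbers here (plant, reference model, adaptation gain gamma) are illustrative assumptions of ours, not values from the survey; the point is only to watch the behavior the Lyapunov/LaSalle argument predicts: the output error decays while the adaptive gains stay bounded.

```python
# A minimal sketch of a scalar simple-adaptive-control (SAC) loop; the plant,
# model, and adaptation gain below are illustrative assumptions, not values
# taken from the survey.
def simulate_sac(T=40.0, dt=1e-3, gamma=50.0):
    a, b, c = -1.0, 2.0, 1.0      # stable, minimum-phase plant with cb > 0
    am, bm = -2.0, 2.0            # reference model: x_m' = am*x_m + bm*u_m
    um = 1.0                      # step command fed to the model
    x = xm = 0.0
    Ke = Kx = Ku = 0.0            # adaptive gains, started at zero
    for _ in range(int(T / dt)):
        ey = xm - c * x           # output tracking error e_y = y_m - y
        u = Ke * ey + Kx * xm + Ku * um
        # gain adaptation driven by the output error (integral adaptive law)
        Ke += gamma * ey * ey * dt
        Kx += gamma * ey * xm * dt
        Ku += gamma * ey * um * dt
        x += (a * x + b * u) * dt       # Euler step of the plant
        xm += (am * xm + bm * um) * dt  # Euler step of the model
    return ey, (Ke, Kx, Ku)

ey_final, gains = simulate_sac()
print(abs(ey_final), gains)   # error is small; gains are finite
```

Note that nothing in the loop requires knowing the plant parameters; only the output error drives the adaptation, in the spirit of the direct output feedback gain discussed above.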
|
SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> On gain convergence, basic conditions for stability, optimality, robustness, etc. <s> Model reference adaptive control procedures that do not require explicit parameter identification are considered for large structural systems. Although such applications have been shown to be feasible for multivariable systems, provided there exists a feedback gain matrix which makes the resulting input/output transfer function strictly positive real, it is now shown that this constraint is overly restrictive and that only positive realness is required. Subsequent consideration of a simply supported beam shows that if actuators and sensors are collocated, then the positive realness constraint will be satisfied and the model reference adaptive control will then indeed be suitable for velocity following when only velocity sensors are available and for both position and velocity following when velocity plus scaled position outputs are measured. In both cases, all states, regardless of system dimension, are guaranteed to be stable. The need for parameter estimation and/or adaptive control of any system arises because of ignorance of the system's internal structure and critical parameter values, as well as changing control regimes. A large structural system (LSS) is substantially more susceptible to these problems. The most crucial problem of adaptive control of large structures is that the plant is very large or infinite-dimensional and, consequently, the adaptive controller must be based on a low-order model of the system in order to be implemented with an on-line/onboard computer. However, any controller based on a reduced-order model (ROM) must operate in closed loop with the actual system; thus it interacts not only with the ROM but also with the residual subsystem (through the spillover and model error terms).
One particular adaptive algorithm that seems applicable to LSS is the direct (or implicit) model reference-based approach taken by Sobel et al. In particular, using command generator tracker (CGT) theory, with Lyapunov stability-based design procedures, they were able to develop for step commands a model reference adaptive control (MRAC) algorithm that, without the need for parameter identification, forced the error between plant and model (which need not be of the same order as the plant) to approach zero, provided that certain plant/model structural conditions are satisfied. Such an adaptation algorithm is very attractive for the control of large structural systems since it eliminates the need for explicitly identifying the large number of modes that must be modeled, and, furthermore, eliminates the spillover effects. Relative to the conditions that must be satisfied, it was shown that asymptotic stability results provided that the plant input/output transfer matrix is strictly positive real for some feedback gain matrix and provided that there exists a bounded solution to the corresponding deterministic CGT problem. Such a solution, however, does not always exist for structural problems with velocity sensors and, furthermore, the transfer matrix for structural systems is positive real (not strictly positive real) for collocated actuators and rate sensors. <s> BIB001 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> On gain convergence, basic conditions for stability, optimality, robustness, etc. <s> This paper addresses the problem of designing model-reference adaptive control for linear MIMO systems with unknown high-frequency gain matrix (HFGM). The concept of hierarchy of control is introduced leading to a new control parametrization and an error equation with triangular HFGM, which allows a sequential design of the adaptation scheme.
Significant reduction of the prior knowledge about the HFGM is achieved, overcoming the limitations of the known methods. A complete stability and convergence analysis is developed based on a new class of signals and their properties. Exponential stability is guaranteed under explicit persistency of excitation conditions. <s> BIB002 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> On gain convergence, basic conditions for stability, optimality, robustness, etc. <s> Recent publications have presented successful implementations of simple direct adaptive control techniques in various applications. However, they also expose the fact that the convergence of the adaptive gains has remained uncertain. The gains may not converge to the ideal constant control gains predicted by the underlying linear time-invariant system considerations. As those prior conditions that were also needed for stability may not hold, this conclusion may raise doubts about the robustness of the adaptive system. This paper intends to show that the adaptive control performs perfect tracking even when the linear time-invariant solution does not exist. It is shown that the adaptation performs a ‘steepest descent’ minimization of the errors, ultimately ending with the appropriate set of control gains that fit the particular input command and initial conditions. The adaptive gains do asymptotically reach an appropriate set of bounded constant ideal gain values that solve the problem at hand. Copyright © 2004 John Wiley & Sons, Ltd. <s> BIB003 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> On gain convergence, basic conditions for stability, optimality, robustness, etc. <s> Recent publications have shown that under some conditions continuous linear time-invariant systems become strictly positive real with constant feedback.
This paper expands the applicability of this result to discrete linear systems. The paper shows the sufficient conditions that allow a discrete system to become stable and strictly passive via static (constant or nonstationary) output feedback. <s> BIB004 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> On gain convergence, basic conditions for stability, optimality, robustness, etc. <s> Recent publications have shown that under some conditions linear time-invariant systems become strictly positive real with constant feedback. To expand the applicability of this result to nonstationary and nonlinear systems, this paper first reviews a few pole-zero dynamics definitions in nonstationary systems and relates them to stability and passivity of the systems. The paper then shows the sufficient conditions that allow a system to become stable and strictly passive via static (constant or nonstationary) output feedback. Applications in robotics and adaptive control are also presented. <s> BIB005 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> On gain convergence, basic conditions for stability, optimality, robustness, etc. <s> In this paper, a Nonlinear Direct Model Reference Adaptive Control (NDMRAC) is derived. The NDMRAC controller is compared to the Full State Feedback (FSFB) controller. Both of the controllers are applied to a rigid body spacecraft. To compare the controllers, the inertia matrix is suddenly changed in the simulation. Euler equations are used to estimate the evolution of the rigid body angular velocity and quaternions are used to describe the attitude position of the rigid body.
The system is augmented or modified to account for the disturbances affecting the system under observation, so the NDMRAC control also implements a Direct Adaptive Disturbance Rejection (DADR) control which partially or fully eliminates the disturbance coming into the simulated system. The error of the system and the power spectrum density of the disturbance are used to analyze the performance of the NDMRAC and DADR controllers. <s> BIB006
|
Some particularly interesting questions may arise during the proof of stability. First, although the Lyapunov function was carefully selected to contain both the state error and the adaptive gains, its derivative only contains the state error. It appears as if the successful proof of stability has "managed" to eliminate any possibly negative effect of the adaptive gains. One is then entitled to ask what positive role the adaptive gains play (besides not having negative effects). This is just one more illustration of the difficulties related to the analysis of nonlinear systems. Indeed, although Lyapunov stability theory manages to prove stability, it cannot and does not provide all answers. Besides, as potential counterexamples seem to show, although the tracking error and the derivative of the adaptive gains tend to vanish, this mere result does not necessarily imply, as one might have initially thought, that the adaptive gains would reach a constant value or even a limit at all. If the adaptive gain happens to be a function such as k(t) = sin(ln t) (suggested to us by Mark Balas), its derivative is k̇(t) = cos(ln t)/t. In this example one can see that although the derivative tends to vanish in time, the gain k(t) itself does not reach any limit at all. Therefore, the common opinion that seems to be accepted among experts is that the adaptive gains do not converge unless the presence of some "sufficient" excitation can be guaranteed. This seems to imply that even in the most ideal, perfect-following situations, the adaptive control gains may continue wandering forever. However, recent results have shown that these open questions and problems are only apparent. First, even if it is not a direct result of Lyapunov analysis, one can show that the adaptive control gains always perform a steepest descent minimization of the tracking error BIB003.
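The counterexample above is easy to probe numerically. The short script below (our own illustration) evaluates k(t) = sin(ln t) and its derivative k̇(t) = cos(ln t)/t, confirming that the derivative shrinks toward zero while the gain itself keeps sweeping the full range [-1, 1].

```python
import math

# Numerical look at Balas's counterexample k(t) = sin(ln t): its derivative
# cos(ln t)/t vanishes as t grows, yet k(t) itself never settles to a limit.
def k(t):
    return math.sin(math.log(t))

def kdot(t):
    return math.cos(math.log(t)) / t

# The derivative shrinks: |kdot(t)| <= 1/t.
print(abs(kdot(1e3)), abs(kdot(1e6)))

# ...but k(t) still swings between -1 and 1: sampling t = e^u over a wide
# range of u shows the full oscillation persists at arbitrarily large t.
samples = [k(math.exp(0.1 * i)) for i in range(400)]
print(min(samples), max(samples))   # close to -1 and +1
```

This is exactly why a vanishing gain derivative alone cannot settle the convergence question, and why the additional structure noted later (the gains satisfying a constant-coefficient linear differential equation) matters.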
Although this "minimum" could, in general, still increase without bound if the stability of the system were not guaranteed, this is not the case with SAC. Second, with respect to the final gain values, when one tests an adaptive controller with a given plant, one first assumes that an underlying LTI solution for the ideal control gains exists, and the adaptive controller is then supposed to find those gains at the end of the adaptation. If the plant is known, one can first solve the deterministic tracking problem and find the ideal control gains. Then, the designer proceeds with the implementation of the adaptive controller and expects it to converge to the pre-computed ideal solution. In practice, however, one observes that, even though the tracking errors do vanish, the adaptive gains do not seem to converge. Although mathematical tools that could treat this situation have long been available, with very few exceptions they do not seem to be widely used in Adaptive Control or in nonlinear control systems in general. This fact could be partially explained by the very general character of those results. However, their proper interpretation and application toward the development of new basic analysis tools, such as combining a Modified Invariance Principle with the Gronwall-Bellman Lemma BIB001, BIB003, finally managed to provide the solution to this problem. It was shown that if the adaptive control gains do not reach the "unique" solution that the preliminary LTI design seemed to suggest, it is not because something was wrong with the adaptive controller, but rather because the adaptive control can go beyond the LTI design. The existence of a "general" LTI solution is useful in facilitating and shortening the proof of stability, yet it is not needed for the convergence of the adaptive controller.
While the sought-after stationary controller must provide a fixed set of constant gains that would fit any input command, the adaptive controller only needs the specific set of control gains that corresponds to the particular input command. Even in those cases when the general LTI solution does not exist, the particular solution that the adaptive controller needs does exist BIB003. This, however, complicates the stability analysis, because it was shown that those particular solutions may allow perfect following only after a transient that adds supplementary terms to the differential equations of motion. As a consequence, the stability analysis may end with the derivative of the Lyapunov function given by (41). Although the derivative (41) still contains the negative definite term with respect to the error state, it also contains a transient term that is not negative, so the derivative is not necessarily negative definite or even semidefinite. Apparently, then, (41) cannot be used for any decision on stability. However, although the decision on stability is not immediate, the Modified Invariance Principle reveals that all bounded solutions of the adaptive system asymptotically reach the domain where the transient terms vanish and perfect tracking is possible. Therefore, one must find out what those "bounded trajectories" are, and it is the role of the Gronwall-Bellman Lemma to actually show that, under the WASP assumption, all trajectories are bounded. Therefore, the previous conclusions on asymptotically perfect tracking remain valid. Moreover, because the gains also reach that domain in space where perfect tracking is possible, this approach has also finally provided the answer to the (previously open) question of adaptive gain convergence.
Even if one assumes that the final asymptotically perfect tracking may occur while the adaptive gains continue to wander, one can show that the presumably nonstationary gains satisfy a linear differential equation with constant coefficients, and that their solution is a summation of generalized exponential functions (BIB003 and Appendix B). This partial conclusion immediately shows that such nonlinear "counterexample" gains as the one presented above may be nice and tough mathematical challenges, yet they cannot be solutions of, and thus are actually immaterial for, the SAC tracking problem. Furthermore, because the gains are bounded, they can only be combinations of constants and converging exponentials, so they must ultimately reach constant values. Therefore, we were finally able to show (at least within the scope of SAC) that the adaptive control gains do ultimately reach a set of stabilizing constant values at the end of a steepest descent minimization of the tracking error (BIB003 and Appendix B). A recent paper tests SAC with a few counterexamples for the standard MRAC BIB002. The paper shows that SAC not only maintains stability in all cases that led to instability with standard MRAC, but also demonstrates very good performance. Many practitioners who have tried it have been impressed with the ease of implementation of SAC and with its performance even in large and complex applications. Many examples seem to show that SAC maintains its stable operation even in cases when the established sufficient conditions do not hold. Indeed, the conditions for stability of SAC have been continuously mitigated over the years, as the two successive definitions of almost passivity conditions presented in this paper may show.
In order to get another qualitative estimate of SAC robustness, assume that instead of (1)-(2) the actual plant is ẋ(t) = Ax(t) + Bu(t) + f(x), y(t) = Cx(t). Assume that the nominal {A, B, C} system is WASP, while f(x) is some (linear or nonlinear) component that prevents the satisfaction of the passivity conditions. If one uses the same Lyapunov function (24), then instead of (40) one gets one derivative for the stabilization problem and another, (46), for the tracking problem, where x* is the ideal trajectory, as defined in Appendix A. Note that the derivative of the Lyapunov function remains negative definite in terms of x(t) or e_x(t), correspondingly, if the second term in the sum is not too large, as defined (for example) by the inequality BIB006. While until very recently the main effort was dedicated to the clarification and relaxation of the passivity conditions, similar effort is now dedicated to clarifying the limits of robustness of SAC when the basic passivity conditions are not entirely satisfied. Also, although much effort has been dedicated to the clarification of passivity concepts in the context of Adaptive Control of stationary continuous-time systems, similar effort has been dedicated to extending these concepts to discrete-time BIB004 and nonstationary and nonlinear systems BIB005.
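This qualitative robustness claim can also be probed in simulation. The sketch below (our own illustration; the scalar plant, the adaptation gain, and the perturbation f(x) = eps*sin(3x) are all assumptions, not the survey's examples) runs a nominal SAC loop and a perturbed one, and shows that a bounded perturbation violating the nominal passivity picture leaves the tracking error bounded and small.

```python
import math

# Scalar SAC loop with an optional perturbation f(x) = eps*sin(3x) added to
# the plant dynamics; eps = 0 recovers the nominal WASP-like case. All
# numerical values are illustrative assumptions.
def run(eps, T=40.0, dt=1e-3, gamma=50.0):
    a, b, c = -1.0, 2.0, 1.0      # nominal minimum-phase plant, cb > 0
    am, bm, um = -2.0, 2.0, 1.0   # reference model and its step command
    x = xm = 0.0
    Ke = Kx = Ku = 0.0            # adaptive gains
    for _ in range(int(T / dt)):
        ey = xm - c * x
        u = Ke * ey + Kx * xm + Ku * um
        Ke += gamma * ey * ey * dt
        Kx += gamma * ey * xm * dt
        Ku += gamma * ey * um * dt
        # perturbed plant: x' = a*x + b*u + eps*sin(3x)
        x += (a * x + b * u + eps * math.sin(3.0 * x)) * dt
        xm += (am * xm + bm * um) * dt
    return abs(ey), Ke

e_nom, _ = run(0.0)
e_pert, k_pert = run(0.3)
print(e_nom, e_pert)   # both final errors stay bounded and small
```

The perturbed run illustrates, not proves, the robustness discussed above: as long as the perturbation term stays small in the sense of the inequality referenced in the text, the adaptive loop keeps the error bounded.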
|
SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> PARALLEL FEEDFORWARD AND STABILITY OF SIMPLE ADAPTIVE CONTROL <s> Simple adaptive control systems were recently shown to be globally stable and to maintain robustness with disturbances if the controlled system is "almost strictly positive real", namely, if there exists a constant output feedback (unknown and not needed for implementation) such that the resulting closed loop transfer function is strictly positive real. In this paper it is shown how to use parallel feedforward and the stabilizability properties of systems in order to satisfy the "almost positivity" condition. The feedforward configuration may be constant, if some prior knowledge is given, or adaptive, in general. This way, simple adaptive controllers can be implemented in a large number of complex control systems, without requiring the order of the plant or the pole-excess as prior knowledge. <s> BIB001 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> PARALLEL FEEDFORWARD AND STABILITY OF SIMPLE ADAPTIVE CONTROL <s> This paper deals with two problems for the improvement of the control performance of simple adaptive control (SAC) techniques. First, it is discussed that the introduction of a robust adaptive control term considerably robustifies the SAC system concerning plant uncertainties such as state dependent disturbance. Second, a practical procedure is described for designing the parallel feedforward compensator, which is necessary for the actual realization of the SAC system, given prior information concerning the plant such that: (1) the plant is minimum phase; (2) an upper bound on the relative degree exists; and (3) approximate values of high and low frequency gains are known. The effectiveness of the proposed methods is confirmed through the simulation of typical examples of adaptive control systems.
<s> BIB002 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> PARALLEL FEEDFORWARD AND STABILITY OF SIMPLE ADAPTIVE CONTROL <s> This paper presents theory for stability analysis and design for a class of observer-based feedback control systems. Relaxation of the controllability and observability conditions imposed in the Yakubovich-Kalman-Popov (YKP) lemma can be made for a class of nonlinear systems described by a linear time-invariant system (LTI) with a feedback-connected cone-bounded nonlinear element. It is shown how a circle-criterion approach can be used to design an observer-based state feedback control which yields a closed-loop system with specified robustness characteristics. The approach is relevant for design with preservation of stability when a cone-bounded nonlinearity is introduced in the feedback loop. Important applications are to be found in nonlinear control with high robustness requirements. <s> BIB003 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> PARALLEL FEEDFORWARD AND STABILITY OF SIMPLE ADAPTIVE CONTROL <s> A recent publication uses a difficult design example to show that fuzzy logic might have advantages when compared with classical compensators. Although in this particular case the application was shown to be successful, convergence of the fuzzy-logic algorithm, as well as other time-varying controllers, cannot be guaranteed unless some preliminary conditions are satisfied. It will be shown that further exploitation of the classical design can improve robust performance. This result is then used to create sufficient conditions that guarantee convergence with time-varying controllers, and it is then shown that simple adaptive control methods can further improve performance and maintain it in changing environments.
A recent publication has presented successful applications of fuzzy-logic control design in a nonminimum-phase autopilot with uncertainty of parameters. The authors use this difficult design case to show that fuzzy logic has advantages when compared with a classical compensator or with the ubiquitous proportional-integral-derivative (PID) design when uncertainty is concerned. Although this particular fuzzy-logic application was successful, it is well known that convergence with nonstationary controllers, including adaptive and fuzzy-logic algorithms, is not inherently guaranteed. This paper intends to show that further exploitation of the basic knowledge of the plant and the uncertainty can be used to improve the performance of a classical control design and also to create sufficient conditions that guarantee convergence of time-varying controllers. The results are presented here in connection with simple adaptive control, which is shown to achieve improved performance along with the guarantee of stability. Successful implementations of simple direct adaptive control techniques in various domains of application have been presented over the past two decades in the technical literature. This simple-adaptive-control (SAC) methodology has been introduced by Sobel et al. and further developed by Barkana et al. and Barkana and Kaufman. These techniques have also been extended by Wenn and Balas and by Balas to infinite-dimensional systems. Those successful applications of low-order adaptive controllers to large-scale examples have led to successful implementations of SAC in such diverse applications as flexible structures and flight control. <s> BIB004
|
Using for illustration the example of Section VIII, assume that K_MAX = 2.5 is an estimate of the highest admissible constant gain that maintains stability of the system. One would never actually use this value, because it would not be a good control gain. Indeed, we only use the mere knowledge that a (fictitious) closed-loop system using the high gain value of 2.5 would still be stable. Instead of implementing constant output feedback, we use this knowledge to augment the system with a simple Parallel Feedforward Configuration (PFC) across the plant. If the original plant has transfer function G(s), the closed-loop system with the constant feedback gain K_MAX would be asymptotically stable. Augmenting the plant with the inverse of this stabilizing gain, G_a(s) = G(s) + 1/K_MAX, one can see that, because the closed-loop system would be stable, the augmented system is minimum-phase (Figure 8). Note that although we would never suggest using direct input-output gains in parallel with the plant, this is a simple and useful illustration that may facilitate the understanding of the basic idea. Also, although in this paper we only dealt (and will continue to deal) with strictly causal systems, for this specific case it is useful to recall that a minimum-phase plant with relative degree 0 (zero) is also ASPR. As (53) shows, one could use the inverse of any stabilizing gain in order to get ASPR configurations. However, any such addition is added ballast to the original plant output, so using the inverse of the maximal allowed gain adds the minimal possible alteration to the plant output. The resulting augmented system has three poles and three zeros, and all zeros are minimum-phase. Such a system cannot become unstable, no matter how large the constant gain k becomes; moreover, because it is ASPR, one can also show that it stays stable no matter how large the nonstationary adaptive gain k(t) becomes.
One can easily see that, with the parallel feedforward, the effective control gain that affects the plant becomes k_eff(t) = k(t)/(1 + k(t)/K_MAX), so the effective gain is always below the maximal admissible constant gain (Figure 9). While this qualitative demonstration intends to provide some intuition to the designer who is used to estimating stability in terms of gain and phase margins, rigorous proofs of stability using the Lyapunov-LaSalle techniques and almost passivity conditions are also available. As we already mentioned above, the constant parallel feedforward has only been presented here as a first intuitive illustration. In practice, one does not want to use a direct input-output path across the plant, which would require solving implicit loops that include the adaptive gain computations. Therefore, we go to the next step, which takes us to the ubiquitous PD controllers. In practice, many control systems use some form of PD controller, along with other additions that may be needed to improve performance. While the additions are needed to achieve the desired performance, in many cases the PD controller alone is sufficient to stabilize the plant. In our case, a PD controller H(s) would make the root-locus plot look like Figure 10. The system is asymptotically stable for any fixed gain within the "admissible" range 0-2.66, so we again choose K_MAX = 2.5 as an estimate of the highest admissible constant gain that maintains stability of the system. This time, however, we use D(s) = 1/H(s), the inverse of the PD controller, as the parallel feedforward across the plant. The root-locus of the resulting augmented plant is shown in Figure 11. This is a strictly causal system with 4 poles and 3 strictly minimum-phase zeros, and it is therefore ASPR.
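The effect of the constant parallel feedforward on the gain actually seen by the plant is easy to tabulate. With D = 1/K_MAX in parallel, a feedback gain k applied to the augmented output acts on the plant as k/(1 + k/K_MAX); this small check (our own illustration) shows that the effective gain stays below K_MAX for every k and only approaches K_MAX as k grows without bound.

```python
# Effective plant gain under a constant parallel feedforward D = 1/K_MAX:
# feedback gain k on the augmented output y_a = y + u/K_MAX acts on the
# plant itself as k_eff = k / (1 + k/K_MAX).
K_MAX = 2.5

def k_eff(k):
    return k / (1.0 + k / K_MAX)

for k in [0.1, 1.0, 2.5, 10.0, 1e3, 1e6]:
    print(k, k_eff(k))          # k_eff grows with k but never reaches K_MAX
```

In other words, no matter how large the (possibly nonstationary) adaptive gain becomes, the plant never experiences more than the maximal admissible constant gain, which is the intuition behind the stability of the augmented loop.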
Although the original plant was non-minimum-phase, a fact that would usually forbid using adaptive controllers, here one can apply SAC and be sure that stability and asymptotically perfect tracking of the augmented system are guaranteed. The only open question is how well the actual plant output performs. In this respect, the maximal admissible gain with a fictitious PD (or any other fictitious controller) defines how small the added ballast is and how close the actual output is to the augmented output. The example here is a very bad system; it was only used to illustrate the problems one may encounter using constant gains in changing environments, and it cannot be expected to result in good behavior without much more study and basic control design. The examples above have been used to present a simple principle: if the system can be stabilized by the controller H(s), then the augmented system G_a(s) = G(s) + H^{-1}(s) is minimum-phase. Proper selection of the relative degree of H^{-1}(s) will thus render the augmented system ASPR BIB001. This last statement implies that "passivability" of systems is actually dual to stabilizability. If a stabilizing controller is known, its inverse in parallel with the plant can make the augmented system ASPR. When sufficient prior knowledge is available to design a stabilizing controller, some researchers prefer to use this knowledge to directly design the corresponding parallel feedforward BIB002 or "shunt." When the "plant" is a differential equation, it is easy to assume that the order or the relative degree is available, and then a stabilizing controller or the parallel feedforward can be implemented.
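The principle that a stabilizing H(s) makes G_a(s) = G(s) + H^{-1}(s) minimum-phase follows from a polynomial identity: with G = n/d and H = h (a polynomial PD-type controller), the zeros of G_a = (n*h + d)/(d*h) are the roots of n*h + d, which is exactly the closed-loop characteristic polynomial of G under the controller H. The check below uses an assumed example of ours (not the survey's Section VIII system): a plant with a right-half-plane zero and a PD-type H chosen to stabilize it.

```python
import numpy as np

# If H(s) stabilizes G(s) = n(s)/d(s), then the zeros of the augmented plant
# G_a(s) = G(s) + 1/H(s) = (n*h + d)/(d*h) are the roots of n*h + d, i.e. the
# closed-loop poles of (G, H) -- all in the left half-plane, so G_a is
# minimum-phase. Illustrative (assumed) example:
n = np.array([1.0, -1.0])       # n(s) = s - 1   (non-minimum-phase zero at +1)
d = np.array([1.0, 1.0, 1.0])   # d(s) = s^2 + s + 1
h = np.array([0.5, 0.5])        # H(s) = 0.5*s + 0.5, a stabilizing PD-type law

# closed-loop characteristic polynomial n*h + d = zeros of the augmented plant
aug_zeros = np.roots(np.polyadd(np.polymul(n, h), d))
print(aug_zeros)                # all real parts are negative
```

Here n*h + d = 1.5 s^2 + s + 0.5, whose roots lie strictly in the left half-plane, so the augmented plant is minimum-phase even though the original plant is not.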
However, in the real world, where the "plant" could be a plane, a flexible structure, or a ship, the available knowledge is the result of wind-tunnel or other experimental tests. Such tests may yield an approximate frequency response or approximate modeling, sufficient to allow some control design, yet in general they do not provide reliable knowledge of the order or relative degree of the real plant. On the other hand (although it may very much want some adaptive control to help improve performance, if only it could be trusted), the control community actually continues to control real-world systems with fixed controllers. Therefore, in our opinion, the question "How can you find a stabilizing controller?" should not be given any excessive emphasis. In any case, if there is sufficient prior knowledge to directly design the feedforward, there is definitely sufficient information to design a stabilizing configuration, and vice versa. Note that the example of this section is a bad system that was selected on purpose to provide a counterexample for stability with assumably "constant" gains. Although the stability of the augmented system with adaptive control is guaranteed, the plant output may not behave very well, even with the added parallel feedforward. In any case, even in those cases when the parallel feedforward is too large to allow good performance as monitored at the actual plant output, the behavior of the (possibly both unstable and non-minimum-phase) plant within the augmented system is stable. It was also shown to allow stable identification schemes BIB003 and thus to lead to better understanding of the plant toward better control design, adaptive or non-adaptive.
Still, as recently shown with a non-minimum-phase UAV example BIB004 and with many other realistic examples , the prior knowledge usually available for design allows a basic preliminary design and then very small additions to the plant that not only result in robust stability of the adaptive control system, even with originally non-minimum-phase plants, but also lead to performance that is ultimately superior to other control methodologies. A recent publication uses the parallel feedforward compensator for safe tuning of MIMO adaptive PID controllers , and another shows how to implement Simple Adaptive Controllers with guaranteed H∞ performance.
|
SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> ROBUSTNESS OF SIMPLE ADAPTIVE CONTROL WITH DISTURBANCES <s> 1. Introduction.- 2. Continuous-time identifiers and adaptive observers.- 3. Discrete-time identifiers.- 4. Robustness improvement of identifiers and adaptive observers.- 5. Adaptive control in the presence of disturbances.- 6. Reduced-order adaptive control.- 7. Decentralized adaptive control.- 8. Reduced order-decentralized adaptive control.- Corrections. <s> BIB001 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> ROBUSTNESS OF SIMPLE ADAPTIVE CONTROL WITH DISTURBANCES <s> Recent publications have presented successful implementations of simple direct adaptive control techniques in various applications. However, they also expose the fact that the convergence of the adaptive gains has remained uncertain. The gains may not converge to the ideal constant control gains predicted by the underlying linear time-invariant system considerations. As those prior conditions that were also needed for stability may not hold, this conclusion may raise doubts about the robustness of the adaptive system. This paper intends to show that the adaptive control performs perfect tracking even when the linear time-invariant solution does not exist. It is shown that the adaptation performs a ‘steepest descent’ minimization of the errors, ultimately ending with the appropriate set of control gains that fit the particular input command and initial conditions. The adaptive gains do asymptotically reach an appropriate set of bounded constant ideal gain values that solve the problem at task. Copyright © 2004 John Wiley & Sons, Ltd. <s> BIB002
|
The presentation so far has shown that a simple adaptive controller can guarantee stability of any system that is minimum-phase, provided the CB product is positive definite (and diagonalizable if not symmetric). In case these conditions do not inherently hold, basic knowledge of the stabilizability properties of the plant, usually available, can be used to fulfill them via parallel feedforward configurations. Therefore, the proposed methodology seems to fit almost any case where asymptotically perfect output tracking is possible. However, after presenting the eulogy of the adaptive output feedback gain (32), it is about time to also present what could become its demise, if not properly treated. When persistent disturbances such as random noise or very-high-frequency vibrations are present, perfect tracking is not possible. Even when the disturbances are known and various variations of the Internal Model Principle can be devised to filter them out, some residual tracking error may always be present. While tracking with small final errors could be acceptable, it is clear that the adaptive gain term (32) would, slowly but certainly, increase without limit. Indeed, theoretically, ASPR systems maintain stability with arbitrarily high gains, and in some cases (missiles, for example) the mission could end before any problems are even observed. However, allowing the build-up of high gains that do not come in response to any actual requirement is not acceptable, because in practice they may lead to numerical problems and saturation effects. 
However, very early on we observed how the robustness of SAC with disturbances can be guaranteed by adding Ioannou's σ-term BIB001 to the error adaptive gain, which would now be

K̇_e(t) = e_y(t)e_y^T(t)Γ_e − σK_e(t)    (55)

Finally, this new addition literally makes SAC an adaptive controller (see BIB002 and references therein): while the control gains always perform a steepest-descent minimization of the tracking error, the error gain defined in (55) goes up and down, fitting the right gain to the right situation in accord with the changing operational needs.
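The effect of the σ-term can be seen in a toy simulation (a scalar sketch of our own, with an assumed stable first-order plant and a sinusoidal disturbance; not an example from the text): without it, the error-driven gain integrates the persistent disturbance and keeps growing, while with σ > 0 it settles to a bounded value.

```python
import math

def final_gain(sigma, T=200.0, dt=1e-3, gamma=10.0):
    """Scalar plant x' = -x + u + d with y = x, regulated toward zero by
    u = -Ke*y and the gain law Ke' = gamma*y**2 - sigma*Ke (toy sketch)."""
    x, Ke = 0.0, 0.0
    for k in range(int(T / dt)):
        d = 0.5 * math.sin(2.0 * k * dt)          # persistent disturbance
        y = x
        x += dt * (-x - Ke * y + d)               # Euler step of the plant
        Ke += dt * (gamma * y * y - sigma * Ke)   # sigma-modified gain law
    return Ke

print(final_gain(sigma=0.0))   # keeps growing with the horizon T
print(final_gain(sigma=1.0))   # bounded: sigma leaks the gain back down
```

With σ = 0 the gain grows without limit because the residual error never vanishes; with σ = 1 it hovers around the value the disturbance level actually requires.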
|
SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> Appendix A. PROOF OF STABILITY <s> Recent publications have presented successful implementations of simple direct adaptive control techniques in various applications. However, they also expose the fact that the convergence of the adaptive gains has remained uncertain. The gains may not converge to the ideal constant control gains predicted by the underlying linear time-invariant system considerations. As those prior conditions that were also needed for stability may not hold, this conclusion may raise doubts about the robustness of the adaptive system. This paper intends to show that the adaptive control performs perfect tracking even when the linear time-invariant solution does not exist. It is shown that the adaptation performs a ‘steepest descent’ minimization of the errors, ultimately ending with the appropriate set of control gains that fit the particular input command and initial conditions. The adaptive gains do asymptotically reach an appropriate set of bounded constant ideal gain values that solve the problem at task. Copyright © 2004 John Wiley & Sons, Ltd. <s> BIB001
|
The underlying deterministic tracking problem assumes that there exists an "ideal control" that could keep the plant along an "ideal trajectory" that performs perfect tracking; in other words, the ideal plant moves along "ideal trajectories" that reproduce the model output exactly. We assume that the underlying LTI problem is solvable and thus that some ideal gains K_x and K_u exist BIB001 . Because the plant and the model can have different dimensions, the "following error" is defined in (A.5) as the difference between the ideal and the actual plant state, and correspondingly for the outputs. Differentiating e_x(t), and adding and subtracting B K_e e_y(t) = B K_e C e_x(t), gives (A.10), where for convenience shorthand notation has been introduced. The derivative of the Lyapunov function (24) is then computed; using relations (22)-(23), the last two terms in (A.13), originating in the derivative of the adaptive gain terms in V(t), cancel the previous, possibly troubling, non-positive terms and thus lead to a negative semidefinite Lyapunov derivative.

Appendix B. GAIN CONVERGENCE

Let the linear time-invariant plant (1)-(2) track the output of the model (3)-(4). In general, in the past we have assumed that the model uses step inputs to generate the desired command BIB001 . For the more general command-following case, we assume that the command itself is generated by an unknown input generator. We want to check what the ultimate adaptive control gains that perform perfect tracking could be. When the error is zero, the input control to the plant is a linear combination of available measures. 
Assume that the plant moves along such "ideal trajectories" and that the nonstationary gains are such that the plant output y*(t) = Cx*(t) perfectly tracks the model output, namely, e_y(t) = 0. We assume that CA is of maximal rank. Here, x*_0(t) represents those functions that satisfy (B.13) and are solutions of the plant differential equation (B.14). Note that the differential equation (B.14) of the supplementary term x*_0(t) does not contain control terms, because those would be included in the other terms in (B.10). Because CB is nonsingular, one obtains the ideal control from (B.4); the terms in x*_0(t) and ẋ*_0(t) cancel each other. We first consider the case when the signals x_m(t) and u_m(t) are "sufficiently rich," so that the equations can be separated and the differential equations of S_x(t) and S_u(t) form a stable linear differential equation with constant coefficients. Therefore, the solution is given by a combination of exponential functions and ultimately reaches a constant limit, S_f. However, this now implies that only a linear combination of the ultimate gains satisfies a relation of the form

S_x(t)E = S_u(t)C_u = S_f    (B.49)

Similarly, only a linear combination of the ultimate adaptive control gains satisfies a relation of the form (B.50). While any set of constant gains that satisfies (B.50) would perform perfect tracking, nonstationary gains could do so as well. Moreover, in order to simplify the equation and show that it has solutions, we only considered those particular solutions that satisfy (B.49); yet the selection is almost arbitrary, and the equation

M(t)x_u(t) = 0    (B.51)

has, in general, many more solutions than (B.48). Therefore, any effort at proving ultimate convergence of the adaptive gains actually seems to end in failure. There is no doubt that, in principle, perfect tracking can occur while the bounded time-varying gains keep wandering across some hypersurface described, for example, by (B.50) or by any corresponding equation. 
However, although such solutions for perfect tracking exist, one may still ask whether those nonstationary gains can be the ultimate values of the adaptation process. Can the steepest-descent minimization end with ever-wandering gains? As we conclude below, most certainly not. First, although it is hard to translate engineering intuition into rigorous mathematics, it is "felt" that the lack of "richness" that the perfect-following equation shows does not reflect the "richness" of signals that exists during the entire process of adaptation, up until "just before" perfect tracking. Yet, somewhat more rigorously, the same argument that seems to fail the Lyapunov-LaSalle approach can now be used to redeem it. As the errors ultimately vanish and the monotonically increasing output gain K_e(t) reaches an ideal stabilizing gain value, the adaptive control gains are located on the hyper-ellipsoid defined by (B.54) with the set K_x, K_u at its center. Because any set of constant gains that satisfies the perfect-tracking equation can play the role of the ideal gain set used in the Lyapunov function, choosing the set K_x1, K_u1 places the final gains on a hyper-ellipsoid with center K_x1, K_u1; assuming instead the fictitious set K_x2, K_u2 places the final gains on a different hyper-ellipsoid with center K_x2, K_u2. Therefore, for the same adaptation process, which starts and ends with the same values, this thought experiment finds the final gains located at the intersection of infinitely many distinct hyper-ellipsoids, so their common intersection is a point or a "line" of Lebesgue measure zero. Although this argument may require more polishing, it points to the fact that, ultimately, the adaptive gains do converge to a limit. In some cases, the rate of convergence may be slow, and simulations may show the gains varying for a very long time. 
Hence, it is important to know that the gains do not vary at random and that, even if sometimes slowly, they certainly tend to reach their final bounded constant limit.
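The conclusion that the gains settle can be illustrated with a small simulation (a scalar example of our own, with an assumed unstable plant, a step command, and the basic SAC gain laws; a sketch, not the system treated in the appendix):

```python
def simulate(T, dt=1e-3, gamma=5.0):
    """Plant x' = x + u tracking the model xm' = -2*xm + 2*um, um = 1 (step),
    with u = Ke*ey + Kx*xm + Ku*um, ey = xm - x, and steepest-descent
    gain laws Ki' = gamma*ey*ri (toy SAC sketch)."""
    x = xm = 0.0
    Ke = Kx = Ku = 0.0
    um = 1.0
    for _ in range(int(T / dt)):
        ey = xm - x
        u = Ke * ey + Kx * xm + Ku * um
        x += dt * (x + u)                 # unstable plant, Euler step
        xm += dt * (-2.0 * xm + 2.0 * um) # reference model
        Ke += dt * gamma * ey * ey        # monotonically increasing error gain
        Kx += dt * gamma * ey * xm
        Ku += dt * gamma * ey * um
    return ey, (Ke, Kx, Ku)

e50, g50 = simulate(50.0)
e100, g100 = simulate(100.0)
print(e100, g100)   # tracking error ~ 0; gains nearly frozen after t = 50
```

With the step command the input is not "sufficiently rich," so K_x and K_u need not reach the unique ideal values (here K_x = -3, K_u = 2 would be one solution); instead they settle to some bounded constant combination that satisfies the perfect-tracking relation, exactly as argued above.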
|
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> Hand and Mind: What Gestures Reveal about Thought. David McNeill. Chicago and London: University of Chicago Press, 1992. 416 pp. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> Preface Prologue: General introduction: Animal minds, human minds Kathleen Gibson A history of speculation on the relation between tools and language Gordon Hewes Part I. Word, Sign and Gesture: General introduction: Relations between visual-gestural and vocal-auditory modalities of communication Tim Ingold 1. Human gesture Adam Kendon 2. When does gesture become language? Susan Goldwin-Meadow 3. The emergence of language Sue Savage-Rumbaugh and Duane Rumbaugh 4. A comparative approach to language parallels Charles Snowdon Part II. Technological Skills and Associated Social Behaviors of the Non-Human Primates: Introduction: Generative interplay between technical capacities, social relations, imitation and cognition Kathleen Gibson 5. Capuchin monkeys Elisabetta Visalberghi 6. The intelligent use of tools William McGrew 7. Aspects of transmission of tool use in wild chimpanzees Christophe Boesch Part III. Connecting Up The Brain: Introduction: Overlapping neural control of language, gesture and tool use Kathleen Gibson 8. Disorders of language and tool use Daniel Kempler 9. Sex differences in visuospatial skills Dean Falk 10. The unitary hypothesis William H. Calvin 11. Tool use, language and social behaviour in relationship to information processing capacities Kathleen Gibson Part IV. Perspectives on Development: Introduction: Beyond neotony and recapitulation Kathleen Gibson 12. Human language development and object manipulation Andrew Lock 13. Comparative cognitive development Jonas Langer 14. Higher intelligence, propositional language and culture as adaptations for planning Sue Parker and Constance Milbrath Part V. 
Archaeological and Anthropological Perspectives: Introduction: Tools, techniques and technology Tim Ingold 15. Early stone industries and inferences regarding language and cognition Nicholas Toth and Kathy Schick 16. Tools and language in human evolution Iain Davidson and William Noble 17. Layers of thinking in tool behaviour Thomas Wynn 18. The complementation theory of language and tool use Peter Reynolds 19. Tool-use, sociality and intelligence Tim Ingold Epilogue: Technology, language, intelligence Tim Ingold Index. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. We survey the literature on visual interpretation of hand gestures in the context of its role in HCI. This discussion is organized on the basis of the method used for modeling, analyzing, and recognizing gestures. Important differences in the gesture interpretation approaches arise depending on whether a 3D model of the human hand or an image appearance model of the human hand is used. 3D hand models offer a way of more elaborate modeling of hand gestures but lead to computational hurdles that have not been overcome given the real-time requirements of HCI. Appearance-based models lead to computationally efficient "purposive" approaches that work well under constrained situations but seem to lack the generality desirable for HCI. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition. 
Although the current progress is encouraging, further theoretical as well as computational advances are needed before gestures can be widely used for HCI. We discuss directions of future research in gesture recognition, including its integration with other natural modes of human-computer interaction. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> This paper describes a dialogue system based on the recognition and synthesis of Japanese sign language. The purpose of this system is to support conversation between people with hearing impairments and hearing people. The system consists of five main modules: sign-language recognition and synthesis, voice recognition and synthesis, and dialogue control. The sign-language recognition module uses a stereo camera and a pair of colored gloves to track the movements of the signer, and sign-language synthesis is achieved by regenerating the motion data obtained by an optical motion capture system. An experiment was done to investigate changes in the gaze-line of hearing-impaired people when they read sign language, and the results are reported. <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> A person stands in front of a large projection screen on which is shown a checked floor. They say, "Make a table," and a wooden table appears in the middle of the floor."On the table, place a vase," they gesture using a fist relative to palm of their other hand to show the relative location of the vase on the table. A vase appears at the correct location."Next to the table place a chair." A chair appears to the right of the table."Rotate it like this," while rotating their hand causes the chair to turn towards the table."View the scene from this direction," they say while pointing one hand towards the palm of the other. 
The scene rotates to match their hand orientation.In a matter of moments, a simple scene has been created using natural speech and gesture. The interface of the future? Not at all; Koons, Thorisson and Bolt demonstrated this work in 1992 [23]. Although research such as this has shown the value of combining speech and gesture at the interface, most computer graphics are still being developed with tools no more intuitive than a mouse and keyboard. This need not be the case. Current speech and gesture technologies make multimodal interfaces with combined voice and gesture input easily achievable. There are several commercial versions of continuous dictation software currently available, while tablets and pens are widely supported in graphics applications. However, having this capability doesn't mean that voice and gesture should be added to every modeling package in a haphazard manner. There are numerous issues that must be addressed in order to develop an intuitive interface that uses the strengths of both input modalities.In this article we describe motivations for adding voice and gesture to graphical applications, review previous work showing different ways these modalities may be used and outline some general interface guidelines. Finally, we give an overview of promising areas for future research. Our motivation for writing this is to spur developers to build compelling interfaces that will make speech and gesture as common on the desktop as the keyboard and mouse. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> We present a statistical approach to developing multimodal recognition systems and, in particular, to integrating the posterior probabilities of parallel input signals involved in the multimodal system. We first identify the primary factors that influence multimodal recognition performance by evaluating the multimodal recognition probabilities. 
We then develop two techniques, an estimate approach and a learning approach, which are designed to optimize accurate recognition during the multimodal integration process. We evaluate these methods using Quickset, a speech/gesture multimodal system, and report evaluation results based on an empirical corpus collected with Quickset. From an architectural perspective, the integration technique presented offers enhanced robustness. It also is premised on more realistic assumptions than previous multimodal systems using semantic fusion. From a methodological standpoint, the evaluation techniques that we describe provide a valuable tool for evaluating multimodal systems. <s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> The ability to recognize humans and their activities by vision is key for a machine to interact intelligently and effortlessly with a human-inhabited environment. Because of many potentially important applications, “looking at people” is currently one of the most active application domains in computer vision. This survey identifies a number of promising applications and provides an overview of recent developments in this domain. The scope of this survey is limited to work on whole-body or hand motion; it does not include work on human faces. The emphasis is on discussing the various methodologies; they are grouped in 2-D approaches with or without explicit shape models and 3-D approaches. Where appropriate, systems are reviewed. We conclude with some thoughts about future directions. <s> BIB007 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> Humans detect and interpret faces and facial expressions in a scene with little or no effort. Still, development of an automated system that accomplishes this task is rather difficult. 
There are several related problems: detection of an image segment as a face, extraction of the facial expression information, and classification of the expression (e.g., in emotion categories). A system that performs these operations accurately and in real time would form a big step in achieving a human-like interaction between man and machine. The paper surveys the past work in solving these problems. The capability of the human visual system with respect to these problems is discussed, too. It is meant to serve as an ultimate goal and a guide for determining recommendations for development of an automatic facial expression analyzer. <s> BIB008 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> The research topic of looking at people, that is, giving machines the ability to detect, track, and identify people and more generally, to interpret human behavior, has become a central topic in machine vision research. Initially thought to be the research problem that would be hardest to solve, it has proven remarkably tractable and has even spawned several thriving commercial enterprises. The principle driving application for this technology is "fourth generation" embedded computing: "smart" environments and portable or wearable devices. The key technical goals are to determine the computer's context with respect to nearby humans (e.g., who, what, when, where, and why) so that the computer can act or respond appropriately without detailed instructions. The paper examines the mathematical tools that have proven successful, provides a taxonomy of the problem domain, and then examines the state of the art. Four areas receive particular attention: person identification, surveillance/monitoring, 3D methods, and smart rooms/perceptual user interfaces. Finally, the paper discusses some of the research challenges and opportunities. 
<s> BIB009 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> An information kiosk with a JSL (Japanese sign language) recognition system that allows hearing-impaired people to easily search for various kinds of information and services was tested in a government office. This kiosk system was favorably received by most users. <s> BIB010 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> In this paper, we describe HandTalker: a system we designed for making friendly communication reality between deaf people and normal hearing society. The system consists of GTS (Gesture/Sign language To Spoken language) part and STG (Spoken language To Gesture/Sign language) part. GTS is based on the technology of sign language recognition, and STG is based on 3D virtual human synthesis. Integration of the sign language recognition and 3D virtual human techniques greatly improves the system performance. The computer interface for deaf people is data-glove, camera and computer display, and the interface for hearing-abled is microphone, keyboard, and display. HandTalker now can support no domain limited and continuously communication between deaf and hearing-abled Chinese people. <s> BIB011 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> Research on recognition and generation of signed languages and the gestural component of spoken languages has been held back by the unavailability of large-scale linguistically annotated corpora of the kind that led to significant advances in the area of spoken language. A major obstacle has been the lack of computational tools to assist in efficient analysis and transcription of visual language data. Here we describe SignStream, a computer program that we have designed to facilitate transcription and linguistic analysis of visual language. 
Machine vision methods to assist linguists in detailed annotation of gestures of the head, face, hands, and body are being developed. We have been using SignStream to analyze data from native signers of American Sign Language (ASL) collected in our new video collection facility, equipped with multiple synchronized digital video cameras. The video data and associated linguistic annotations are being made publicly available in multiple formats. <s> BIB012 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> Sign language is characterized by its interactivity and multimodality, which cause difficulties in data collection and annotation. To address these difficulties, we have developed a video-based Japanese sign language (JSL) corpus and a corpus tool for annotation and linguistic analysis. As the first step of linguistic annotation, we transcribed manual signs expressing lexical information as well as non-manual signs (NMSs) including head movements, facial actions, and posture that are used to express grammatical information. Our purpose is to extract grammatical rules from this corpus for the sign-language translation system under development. From this viewpoint, we will discuss methods for collecting elicited data, annotation required for grammatical analysis, as well as the corpus tool required for annotation and grammatical analysis. As a result of annotating 2800 utterances, we confirmed that there are at least 50 kinds of NMSs in JSL, using head (seven kinds), jaw (six kinds), mouth (18 kinds), cheeks (one kind), eyebrows (four kinds), eyes (seven kinds), eye gaze (two kinds), body posture (five kinds). We use this corpus for designing and testing an algorithm and grammatical rules for the sign-language translation system under development. 
<s> BIB013 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> We have created software for automatic synthesis of signing animations from the HamNoSys transcription notation. In this process we have encountered certain shortcomings of the notation. We describe these, and consider how to develop a notation more suited to computer animation. <s> BIB014 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> Inspired by the Defense Advanced Research Projects Agency's (DARPA) previous successes in speech recognition, we introduce a new task for sign language recognition research: a mobile one-way American sign language translator. We argue that such a device should be feasible in the next few years, may provide immediate practical benefits for the deaf community, and leads to a sustainable program of research comparable to early speech recognition efforts. We ground our efforts in a particular scenario, that of a deaf individual seeking an apartment and discuss the system requirements and our interface for this scenario. Finally, we describe initial recognition results of 94% accuracy on a 141 sign vocabulary signed in phrases of fours signs using a one-handed glove-based system and hidden Markov models (HMMs). <s> BIB015
|
In taxonomies of communicative hand/arm gestures, sign language (SL) is often regarded as the most structured of the various gesture categories. For example, different gesture categories have been considered as existing on a continuum, where gesticulation that accompanies verbal discourse is described as the least standardized and SL as the most constrained in terms of conventional forms that are allowed by the rules of syntax ( , BIB001 , Fig. 1a ). In Quek's taxonomy ( , Fig. 1b ), gestures are divided into acts and symbols, and SL is regarded as largely symbolic, and possibly also largely referential, since modalizing gestures are defined as those occurring in conjunction with another communication mode, such as speech. In this view, SL appears to be a small subset of the possible forms of gestural communication. Indeed, SL is highly structured and most SL gestures are of a symbolic nature (i.e., the meaning is not transparent from observing the form of the gestures), but these taxonomies obscure the richness and sophistication of the medium. SL communication involves not only hand/arm gestures (i.e., manual signing) but also nonmanual signals (NMS) conveyed through facial expressions, head movements, body postures, and torso movements. Recognizing SL communication therefore requires simultaneous observation of these disparate body articulators and their precise synchronization, as well as information integration, perhaps utilizing a multimodal approach ( BIB005 , BIB006 ). As such, SL communication is highly complex, and understanding it involves a substantial commonality with research in machine analysis and understanding of human action and behavior; for example, face and facial expression recognition , BIB008 , tracking and human motion analysis BIB007 , , and gesture recognition BIB003 . 
Detecting, tracking, and identifying people and interpreting human behavior are the capabilities required of pervasive computing and wearable devices in applications such as smart environments and perceptual user interfaces , BIB009 . These devices need to be context-aware, i.e., able to determine their own context in relation to nearby objects and humans in order to respond appropriately without detailed instructions. Many of the problems and issues encountered in SL recognition are also encountered in the research areas mentioned above; the structured nature of SL makes it an ideal starting point for developing methods to solve these problems. Sign gestures are not all purely symbolic, and some are in fact mimetic or deictic (these are defined by Quek as act gestures, where the movements performed relate directly to the intended interpretation). Mimetic gestures take the form of pantomimes and reflect some aspect of the object or activity that is being referred to. These are similar to classifier signs in American Sign Language (ASL), which can represent a particular object or person with the handshape and then act out the movements or actions of that object. Kendon BIB002 described one of the roles of hand gesticulations that accompany speech as providing images of the shapes of objects, spatial relations between objects, or their paths of movement through space. These are in fact some of the same functions of classifier signs in ASL. A form of pantomime called constructed actions (role-playing or perspective shifting ) is also regularly used in SL discourse to relate stories about other people or places. Deictic or pointing gestures are extensively used in SL as pronouns, to specify an object or person who is present, or to specify an absent person by pointing to a previously established referent location. 
Hence, designing systems that can automatically recognize classifier signs, pointing gestures, and constructed actions in signing would be a step in the direction of analyzing gesticulation accompanying speech and other less structured gestures. SL gestures also offer a useful benchmark for evaluating hand/arm gesture recognition systems. Non-SL gesture recognition systems often deal with small, limited vocabularies which are defined to simplify the classification task. SL(s), on the other hand, are naturally developed languages as opposed to artificially defined ones and have large, well-defined vocabularies which include gestures that are difficult for recognition systems to disambiguate. One of the uses envisioned for SL recognition is in a signto-text/speech translation system. The complete translation system would additionally require machine translation from the recognized sequence of signs and NMS to the text or speech of a spoken language such as English. In an ideal system, the SL recognition module would have a large and general vocabulary, be able to capture and recognize manual information and NMS, perform accurately in realtime and robustly in arbitrary environments, and allow for maximum user mobility. Such a translation system is not the only use for SL recognition systems however, and other useful applications where the system requirements and constraints may be quite different, include the following: . Translation or complete dialog systems for use in specific transactional domains such as government offices, post offices, cafeterias, etc. , BIB015 , BIB010 , BIB004 . These systems may also serve as a user interface to PCs or information servers . Such systems could be useful even with limited vocabulary and formulaic phrases, and a constrained data input environment (perhaps using direct-measure device gloves BIB011 , BIB010 or colored gloves and constrained background for visual input ). . 
- Bandwidth-conserving communication between signers through the use of avatars. Sign input data recognized at one end can be translated to a notational system (like HamNoSys) for transmission and synthesized into animation at the other end of the channel. This represents a great saving in bandwidth as compared to transmitting live video of a human signer. This concept is similar to a system for computer-generated signing developed under the Visicast project ( BIB014 ) where text content is translated to SiGML (Signing Gesture Markup Language, based on HamNoSys) to generate parameters for sign synthesis. Another possibility is creating SL documents for storage of recognized sign data in the form of sign notations, to be played back later through animation.
- Automated or semiautomated annotation of video databases of native signing. Linguistic analyses of signed languages and gesticulations that accompany speech require large-scale linguistically annotated corpora. Manual transcription of such video data is time-consuming, and machine vision assisted annotation would greatly improve efficiency. Head tracking and handshape recognition algorithms BIB012 , and sign word boundary detection algorithms BIB013 have been applied for this purpose.
- Input interface for augmentative communication systems. Assistive systems which are used for human-human communication by people with speech-impairments often require keyboard or joystick input from the user [14] . Gestural input involving some aspects of SL, like handshape for example, might be more user friendly.
In the following, Section 2 gives a brief introduction to ASL, illustrating some aspects relevant to machine analysis. ASL is extensively used by the deaf communities of North America and is also one of the most well-researched among sign languages-by sign linguists as well as by researchers in machine recognition. In Section 3, we survey work related to automatic analysis of manual signing.
Hand localization and tracking, and feature extraction in vision-based methods are considered in Sections 3.1 and 3.2, respectively. Classification schemes for sign gestures are considered in Section 3.3. These can be broadly divided into schemes that use a single classification stage or those that classify components of a gesture and then integrate them for sign classification. Section 3.3.1 considers classification methods employed to classify the whole sign or to classify its components. Section 3.3.2 considers methods that integrate component-level results for sign-level classification. Finally, Section 3.4 discusses the main issues involved in classification of sign gestures. Analysis of NMS is examined in Section 4. The issues are presented in Section 4.1 together with works on body pose and movement analysis, while works related to facial expression analysis, head pose, and motion analysis are examined in Appendix D (which can be found at www.computer.org/publications/dlib). The integration of these different cues is discussed in Section 4.2. Section 5 summarizes the state-of-the-art and future work, and Section 6 concludes the paper.
|
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> AMERICAN SIGN LANGUAGE-ISSUES RELEVANT TO AUTOMATIC RECOGNITION <s> Hand and Mind: What Gestures Reveal about Thought. David McNeill. Chicago and London: University of Chicago Press, 1992. 416 pp. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> AMERICAN SIGN LANGUAGE-ISSUES RELEVANT TO AUTOMATIC RECOGNITION <s> We are currently developing a vision-based sign language recognition system for mobile use. This requires operability in different environments with a large range of possible users, ideally under arbitrary conditions. In this paper, the problem of finding relevant information in single-view image sequences is tackled. We discuss some issues in low level image cues and present an approach for the fast detection of a signing persons hands. This is achieved by using a modified generic skin color model combined with pixel level motion information, which is obtained from motion history images. The approach is demonstrated with a watershed segmentation algorithm. <s> BIB002
|
Most research work in SL recognition has focused on classifying the lexical meaning of sign gestures. This is understandable since hand gestures do express the main information conveyed in signing. For example, from observing the hand gestures in the sequence of Fig. 2 , we can decipher the lexical meaning conveyed as "YOU STUDY." BIB002 However, without observing NMS and inflections in the signing, we cannot decipher the full meaning of the sentence as: "Are you studying very hard?" The query in the sentence is expressed by the body leaning forward, head thrust forward and raised eyebrows toward the end of the signed sequence (e.g., in Figs. 2e and 2f). To refer to an activity performed with great intensity, the lips are spread wide with the teeth visible and clenched; this co-occurs with the sign STUDY. In addition to information conveyed through these NMS, the hand gesture is performed repetitively in a circular contour with smooth motion. This continuous action further distinguishes the meaning as "studying" instead of "study." In the following sections, we will consider issues related to the lexical form of signs and point out some pertinent issues with respect to two important aspects of signing, viz., modifications to gestures that carry grammatical meaning, and NMS.
|
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Manual Signing Expressing Lexical Meaning <s> Publisher Summary This chapter focuses on the internal structure of syllables in ASL, the language of deaf communities in the United States and most of Canada. The argument for ASL syllable structure is based primarily on distributional evidence for the distinction between the syllable nucleus and onsets and codas. The chapter explains the distribution of two phenomena—secondary movements and handshape changes—in strings of segments of the form, PMP, MP, PM, M, and P, where P is position and M is movement. Their distribution provides evidence for analyzing these five sign types as syllables. Each syllable has a nucleus. Those in PMP and PM have a P as onset, while those in PMP and MP have a P as coda. The way Ms and Ps are organized into syllables can be accounted for by positing a sign language analogue of the sonority hierarchy in which Ms are more sonorous than Ps. Sonority peaks are then syllable nuclei. This also provides evidence that sign language phonology has the analogue of vowels and consonants: Ms correspond to vowels and Ps to consonants. This follows from their relative sonority—from the fact that they play analogous roles in the organization of the phonological string into syllables. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Manual Signing Expressing Lexical Meaning <s> We are currently developing a vision-based sign language recognition system for mobile use. This requires operability in different environments with a large range of possible users, ideally under arbitrary conditions. In this paper, the problem of finding relevant information in single-view image sequences is tackled. We discuss some issues in low level image cues and present an approach for the fast detection of a signing persons hands. 
This is achieved by using a modified generic skin color model combined with pixel level motion information, which is obtained from motion history images. The approach is demonstrated with a watershed segmentation algorithm. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Manual Signing Expressing Lexical Meaning <s> This paper has the ambitious goal of outlining the phonological structures and proc- esses we have analyzed in American Sign Language (ASL). In order to do this we have divided the paper into five parts. In section 1 we detail the types of sequential phenomena found in the production of individual signs, allowing us to argue that ASL signs are composed of sequences of phonological segments, just as are words in spoken languages. Section 2 provides the details of a segmental phonetic tran- scription system. Using the descriptions made available by the transcription system, Section 3 briefly discusses both paradigmatic and syntagmatic contrast in ASL signs. Section 4 deals with the various types of phonological processes at work in the language, processes remarkable in their similarity to phonological processes found in spoken languages. We conclude the paper with an overview of the major typed of phonological effects of ASL's rich system of morphological processes. We realize that the majority of readers will come to this paper with neither sign language proficiency nor a knowledge of sign language structure. As a result, many will encounter reference to ASL signs without knowing their form. Although we have been unable to illustrate all the examples, we hope we have provided sufficient illustra- tions to make the paper more accessible. <s> BIB003
|
Sign linguists generally distinguish the basic components (or phoneme subunits) of a sign gesture as consisting of the handshape, hand orientation, location, and movement. Handshape refers to the finger configuration, orientation to the direction in which the palm and fingers are pointing, and location to where the hand is placed relative to the body. Hand movement traces out a trajectory in space. The first phonological model, proposed by Stokoe , emphasized the simultaneous organization of these subunits. In contrast, Liddell and Johnson's Movement-Hold model BIB003 emphasized sequential organization. Movement segments were defined as periods during which some part of the sign is in transition, whether handshape, hand location, or orientation. Hold segments are brief periods when all these parts are static. More recent models ( , BIB001 , , ) aim to represent both the simultaneous and sequential structure of signs and it would seem that the computational framework adopted for SL recognition must similarly be able to model both structures. There are a limited number of subunits which combine to make up all the possible signs, e.g., 30 handshapes, 8 hand orientations, 20 locations, and 40 movement trajectory shapes BIB003 (different numbers are proposed according to the phonological model adopted). Breaking down signs into their constituent parts has been used by various researchers for devising classification frameworks (Section 3.3.2). All parts are important, as evidenced by the existence of minimal pairs: signs which differ in only one of the basic parts (Fig. 3a) . When signs occur in a continuous sequence to form sentences, the hand(s) need to move from the ending location of one sign to the starting location of the next. Simultaneously, the handshape and hand orientation also change from the ending handshape and orientation of one sign to the starting handshape and orientation of the next.
These intersign transition periods are called movement epenthesis BIB003 and are not part of either of the signs. Fig. 2b shows a frame within the movement epenthesis-the right hand is transiting from performing the sign YOU to the sign STUDY. In continuous signing, processes with effects similar to co-articulation in speech do also occur, where the appearance of a sign is affected by the preceding and succeeding signs (e.g., hold deletion, metathesis, and assimilation ). However, these processes do not necessarily occur in all signs; for example, hold deletion is variably applied depending on whether the hold involves contact with a body part BIB003 . (In Fig. 2 , words in capital letters are sign glosses which represent signs with their closest meaning in English BIB002 .) Hence, movement epenthesis occurs most frequently during continuous signing and should probably be tackled first by machine analysis, before dealing with the other phonological processes. Some aspects of signing impact the methods used for feature extraction and classification, especially for vision-based approaches. First, while performing a sign gesture, the hand may be required to be at different orientations with respect to the signer's body and, hence, a fixed hand orientation from a single viewpoint cannot be assumed. Second, different types of movements are involved in signing. Generally, movement refers to the whole hand tracing a global 3D trajectory, as in the sign STUDY of Fig. 2 where the hand moves in a circular trajectory. However, there are other signs which involve local movements only, such as changing the hand orientation by twisting the wrist (e.g., CHINESE and SOUR, Fig. 3b ) or moving the fingers only (e.g., COLOR). This imposes conflicting requirements on the field of view; it must be large enough to capture the global motion, but at the same time, small local movements must not be lost.
Third, both hands often touch or occlude each other when observed from a single viewpoint and, in some signs, the hands partially occlude the face, as in the signs CHINESE, SOUR, and COLOR. Hence, occlusion handling is an important consideration.
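The four-part decomposition above can be made concrete with a small sketch. The feature labels and per-sign values below are invented for illustration (they do not follow any particular transcription system); the minimal-pair test simply counts how many of the four subunits differ.

```python
from dataclasses import dataclass, astuple

@dataclass(frozen=True)
class SignPhonology:
    """Illustrative four-subunit description of a manual sign.
    Inventory sizes cited in the text: ~30 handshapes, ~8 orientations,
    ~20 locations, ~40 movement trajectory shapes."""
    handshape: str
    orientation: str
    location: str
    movement: str

def is_minimal_pair(a: SignPhonology, b: SignPhonology) -> bool:
    """Two signs form a minimal pair if exactly one subunit differs."""
    return sum(x != y for x, y in zip(astuple(a), astuple(b))) == 1

# Hypothetical feature values for two ASL signs that are commonly
# described as differing only in location.
apple = SignPhonology("bent-index", "palm-down", "cheek", "twist")
onion = SignPhonology("bent-index", "palm-down", "near-eye", "twist")
print(is_minimal_pair(apple, onion))  # True: only the location differs
```

Such a structured representation is also the starting point for the component-level classifiers discussed in Section 3.3.2, where each subunit is recognized separately and the results are integrated.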
|
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Grammatical Processes in Sign Gestures <s> There are expressions using spatial relationships in sign language that are called directional verbs. To understand a sign-language sentence that includes a directional verb, it is necessary to analyze the spatial relationship between the recognized sign-language words and to find the proper combination of a directional verb and the sign-language words related to it. In this paper, we propose an analysis method for evaluatingthe spatial relationship between a directional verb and other sign-language words according to the distribution of the parameters representing the spatial relationship. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Grammatical Processes in Sign Gestures <s> A method for the representation, recognition, and interpretation of parameterized gesture is presented. By parameterized gesture we mean gestures that exhibit a systematic spatial variation; one example is a point gesture where the relevant parameter is the two-dimensional direction. Our approach is to extend the standard hidden Markov model method of gesture recognition by including a global parametric variation in the output probabilities of the HMM states. Using a linear model of dependence, we formulate an expectation-maximization (EM) method for training the parametric HMM. During testing, a similar EM algorithm simultaneously maximizes the output likelihood of the PHMM for the given sequence and estimates the quantifying parameters. Using visually derived and directly measured three-dimensional hand position measurements as input, we present results that demonstrate the recognition superiority of the PHMM over standard HMM techniques, as well as greater robustness in parameter estimation with respect to noise in the input features. 
Finally, we extend the PHMM to handle arbitrary smooth (nonlinear) dependencies. The nonlinear formulation requires the use of a generalized expectation-maximization (GEM) algorithm for both training and the simultaneous recognition of the gesture and estimation of the value of the parameter. We present results on a pointing gesture, where the nonlinear approach permits the natural spherical coordinate parameterization of pointing direction. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Grammatical Processes in Sign Gestures <s> Abstract In this paper, we present a new approach to recognizing hand signs. In this approach, motion recognition (the hand movement) is tightly coupled with spatial recognition (hand shape). The system uses multiclass, multidimensional discriminant analysis to automatically select the most discriminating linear features for gesture classification. A recursive partition tree approximator is proposed to do classification. This approach combined with our previous work on hand segmentation forms a new framework which addresses the three key aspects of hand sign interpretation: hand shape, location, and movement. The framework has been tested to recognize 28 different hand signs. The experimental results show that the system achieved a 93.2% recognition rate for test sequences that had not been used in the training phase. It is shown that our approach provide performance better than that of nearest neighbor classification in the eigensubspace. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Grammatical Processes in Sign Gestures <s> We present an algorithm for extracting and classifying two-dimensional motion in an image sequence based on motion trajectories. First, a multiscale segmentation is performed to generate homogeneous regions in each frame. Regions between consecutive frames are then matched to obtain two-view correspondences. 
Affine transformations are computed from each pair of corresponding regions to define pixel matches. Pixel matches over consecutive image pairs are concatenated to obtain pixel-level motion trajectories across the image sequence. Motion patterns are learned from the extracted trajectories using a time-delay neural network. We apply the proposed method to recognize 40 hand gestures of American Sign Language. Experimental results show that motion patterns of hand gestures can be extracted and recognized accurately using motion trajectories. <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Grammatical Processes in Sign Gestures <s> Grammatical information conveyed through systematic temporal and spatial movement modifications is an integral aspect of sign language communication. We propose to model these systematic variations as simultaneous channels of information. Classification results at the channel level are output to Bayesian networks which recognize both the basic gesture meaning and the grammatical information (here referred to as layered meanings). With a simulated vocabulary of 6 basic signs and 5 possible layered meanings, test data for eight test subjects was recognized with 85.0% accuracy. We also adapt a system trained on three test subjects to recognize gesture data from a fourth person, based on a small set of adaptation data. We obtained gesture recognition accuracy of 88.5% which is a 75.7% reduction in error rate as compared to the unadapted system. <s> BIB005
|
The systematic changes to the sign appearance during continuous signing described above (addition of movement epenthesis, hold deletion, metathesis, assimilation) do not change the sign meaning. However, there are other systematic changes to one or more parts of the sign which affect the sign meaning, and these are briefly described in this section. In the sentence of Fig. 2 , the sign STUDY is inflected for temporal aspect. Here, the handshape, orientation, and location of the sign are basically the same as in its lexical form but the movement of the sign is modified to show how the action (STUDY) is performed with reference to time. Examples of other signs that can be inflected in this way are WRITE, SIT, and SICK (Klima and Bellugi list 37 such signs). Fig. 4a shows examples of the sign ASK with different types of aspectual inflections. Generally, the meanings conveyed through these inflections are associated with aspects of the verbs that involve frequency, duration, recurrence, permanence, and intensity, and the sign's movement can be modified through its trajectory shape, rate, rhythm, and tension , . Signs can also be inflected for person agreement. Here, the verb indicates its subject and object by a change in the movement direction, with corresponding changes in its start and end locations, and hand orientation. Fig. 4b shows the sign ASK with different subject-object pairs. Other signs that can be similarly inflected include SHOW, GIVE, and INFORM (Padden lists 63 such verbs). These signs can also be inflected to show the number of persons in the subject and/or object, or show how the verb action is distributed with respect to the individuals participating in the action ( lists 10 different types of number agreement and distributional inflections, including dual, reciprocal, multiple, exhaustive, etc.). Verbs can be simultaneously inflected for person and number agreement.
Other examples of grammatical processes which result in systematic variations in sign appearance include emphatic inflections, derivation of nouns from verbs, numerical incorporation, and compound signs. Emphatic inflections are used for the purpose of emphasis and are expressed through repetition in the sign's movement, with tension throughout. Appendix A (which can be found at www.computer.org/publications/dlib) has more details with illustrative photos and videos and discusses some implications for machine understanding. Classifier signs which can be constructed with innumerable variations are also discussed. Generally, there have been very few works that address inflectional and derivational processes that affect the spatial and temporal dimensions of sign appearance in systematic ways (as described in Section 2.2 and Appendix A at www.computer.org/publications/dlib). HMMs, which have been applied successfully to lexical sign recognition, are designed to tolerate variability in the timing of observation features, yet precisely such timing variations are the essence of temporal aspect inflections. The approach of mapping each isolated gesture sequence into a standard temporal length ( BIB003 , BIB004 ) causes loss of information on the movement dynamics. The few works that address grammatical processes in SL generally deal only with spatial variations. Sagawa and Takeuchi BIB001 deciphered the subject-object pairs of JSL verbs in sentences by learning the (Gaussian) probability densities of various spatial parameters of the verb's movement from training examples and, thus, calculated the probabilities of spatial parameters in test data. Six different sentences constructed from two verbs and three different subject-object pairs, which were tested on the same signer that provided the training set, were recognized with an average word accuracy of 93.4 percent.
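The Gaussian-density idea behind Sagawa and Takeuchi's approach can be sketched as follows. A single spatial parameter (here, a 1-D movement direction in degrees), the class labels, and all numeric values are invented for illustration; the actual system learned densities over several spatial parameters of JSL verb movements.

```python
import math

def fit_gaussian(samples):
    """Maximum-likelihood mean and variance of a 1-D parameter."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var

def gaussian_logpdf(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

# Invented training examples: movement direction (degrees) observed for
# two hypothetical subject-object pairs of a directional verb.
training = {
    "I-ask-you":  [85.0, 92.0, 88.0, 90.0],
    "you-ask-me": [268.0, 272.0, 265.0, 270.0],
}
models = {label: fit_gaussian(d) for label, d in training.items()}

def classify(direction):
    """Pick the subject-object pair whose density best explains the data."""
    return max(models, key=lambda lbl: gaussian_logpdf(direction, *models[lbl]))

print(classify(87.0))   # I-ask-you
print(classify(266.0))  # you-ask-me
```

The same likelihood comparison extends to multiple spatial parameters by summing per-parameter log-densities, under an independence assumption.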
Braffort proposed an architecture where HMMs were employed for classifying lexical signs using all the features of the sign gesture (glove finger flexure values, tracker location and orientation), while verbs which can express person agreement were classified by their movement trajectory alone and classifier signs were classified by their finger flexure values only. Sentences comprising seven signs from the three different categories were successfully recognized with 92-96 percent word accuracy. They further proposed a rule-based interpreter module to establish the spatial relationship between the recognized signs, by maintaining a record of the sign articulations around the signing space. Although they were not applied to sign recognition, Parametric HMMs were proposed in BIB002 to estimate parameters representing systematic variations such as the distance between hands in a two-handed gesture and movement direction in a pointing gesture. However, it is unclear whether the method is suitable for larger vocabularies that exhibit multiple simultaneous variations. The works above only deal with a subset of possible spatial variations, with no straightforward extension to modeling systematic speed and timing variations. In Watanabe , however, both spatial size and speed information were extracted from two different musical conducting gestures with 90 percent success. This method first recognized the basic gesture using min/max points in the gesture trajectory and then measured the change in hand center-of-gravity between successive images to obtain gesture magnitude and speed information. In contrast, Ong and Ranganath BIB005 proposed an approach which simultaneously recognized the lexical meaning and the inflected meaning of gestures using Bayesian Networks. Temporal and spatial movement aspects that exhibit systematic variation (specifically movement size, direction, and speed profile) were categorized into distinct classes. 
Preliminary experimental results on classification of three motion trajectory shapes (straight line, arc, circle) and four types of systematic temporal and spatial modifications (increases in speed and/or size, even and uneven rhythms) often encountered in ASL yielded 85 percent accuracy for eight test subjects.
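The simultaneous-channel idea in Ong and Ranganath's approach can be caricatured with a naive factorized stand-in for their Bayesian networks: assuming independent channels whose per-channel posteriors are already available, the lexical meaning (trajectory shape) and the layered meaning (movement modification) are read off jointly. All channel names and probabilities below are made up for illustration.

```python
# Hypothetical per-channel posteriors from upstream classifiers.
shape_posterior = {"line": 0.7, "arc": 0.2, "circle": 0.1}
modifier_posterior = {"even-rhythm": 0.15, "uneven-rhythm": 0.6,
                      "fast-large": 0.25}

def layered_meaning(shape_p, mod_p):
    """Joint MAP estimate under an independence assumption: the joint
    posterior factorizes into the product of per-channel posteriors."""
    shape = max(shape_p, key=shape_p.get)
    mod = max(mod_p, key=mod_p.get)
    return shape, mod, shape_p[shape] * mod_p[mod]

sign, inflection, p = layered_meaning(shape_posterior, modifier_posterior)
print(sign, inflection, round(p, 2))  # line uneven-rhythm 0.42
```

A real Bayesian network would additionally model dependencies between channels (e.g., some inflections being incompatible with some trajectory shapes), which is exactly what the naive product above cannot capture.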
|
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Nonmanual Signals-NMS <s> This paper describes a vision-based method for recognizing the nonmanual information in Japanese Sign Language (JSL). This new modality information provides grammatical constraints useful for JSL word segmentation and interpretation. Our attention is focused on head motion, the most dominant non-manual information in JSL. We designed an interactive color-modeling scheme for robust face detection. Two video cameras are vertically arranged to take the frontal and profile image of the JSL user, and head motions are classified into eleven patterns. Moment-based feature and statistical motion feature are adopted to represent these motion patterns. Classification of the motion features is performed with linear discrimant analysis method. Initial experimental results show that the method has good recognition rate and can be realized in real-time. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Nonmanual Signals-NMS <s> Conventions used in the text 1. Linguistics and sign linguistics 2. BSL in its social context 3. Constructing sign sentences 4. Questions and negation 5. Mouth patterns and non-manual features in BSL 6. Morphology and morphemes in BSL 7. Aspect, manner and mood 8. Space types and verb types in BSL 9. The structure of gestures and signs 10. Visual motivation and metaphor 11. The established and productive lexicons 12. Borrowing and naming signs 13. Socially unacceptable signs 14. Extended use of language in BSL Table of illustrations Index Index of signs Bibliography. <s> BIB002
|
In the example of Fig. 2 , two facial expressions were performed, with some overlap in their duration. Spreading the lips wide (Figs. 2c and 2d ) is an example of using lower facial expressions, which generally provide information about a particular sign through use of the mouth area (lips, tongue, teeth, cheek) , . In other examples, tongue through front teeth indicates that something is done carelessly, without paying attention; this can co-occur with a variety of signs like SHOP, DRIVING. Cheeks puffed out describes an object (e.g., TREE, TRUCK, MAN) as big or fat. The other facial expression shown in Fig. 2 depicts raised eyebrows and widened eyes (Figs. 2e and 2f) , and is an example of using upper face expressions ( , ), which often occur in tandem with head and body movements (in Figs. 2e and 2f the head and body are tilted forward). They generally convey information indicating emphasis on a sign or different sentence types (i.e., question, negation, rhetorical, assertion, etc.), and involve eye blinks, eye gaze direction, eyebrows, and nose. The eyebrows can be raised in surprise or to ask a question, contracted for emphasis or to show anger, or be drawn down in a frown. The head can tilt up with chin pressed forward, nod, shake or be thrust forward. The body can lean forward or back, shift and turn to either side. Please refer to Appendix A (www.computer. org/publications/dlib) for more examples of NMS. Although the description above has focused on ASL, similar use of NMS and grammatical processes occur in SL(s) of other countries, e.g., Japan BIB001 , Taiwan , Britain BIB002 , Australia , Italy , and France . SL communication uses two-handed gestures and NMS; understanding SL therefore involves solving problems that are common to other research areas and applications. This includes tracking of the hands, face and body parts, feature extraction, modeling and recognition of time-varying signals, multimodal integration of information, etc. 
Due to the interconnectedness of these areas, there is a vast literature available, but our intention here is to only provide an overview of research specific to SL recognition.
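One of the tracking problems mentioned above, locating the hands, is often attacked by combining a color cue with a motion cue, in the spirit of the skin-color-plus-motion approaches surveyed in Section 3.1. The sketch below is a minimal toy version: the color thresholds, the frame-difference test, and the 4x4 synthetic frames are all invented, and a real system would use a calibrated skin-color model and motion history rather than a single frame difference.

```python
import numpy as np

def skin_mask(frame_rgb, r_min=95, rg_gap=15):
    """Crude skin-color rule: red dominates green and blue (toy thresholds)."""
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    return (r > r_min) & (r > g + rg_gap) & (r > b + rg_gap)

def motion_mask(prev_gray, curr_gray, thresh=20):
    """Pixels whose intensity changed noticeably between frames."""
    return np.abs(curr_gray.astype(int) - prev_gray.astype(int)) > thresh

def hand_candidates(prev_rgb, curr_rgb):
    prev_gray = prev_rgb.mean(axis=-1)
    curr_gray = curr_rgb.mean(axis=-1)
    # A pixel is a hand candidate if it is both skin-colored and moving.
    return skin_mask(curr_rgb) & motion_mask(prev_gray, curr_gray)

# Toy 4x4 frames: one skin-colored pixel appears between frames.
prev = np.zeros((4, 4, 3), np.uint8)
curr = np.zeros((4, 4, 3), np.uint8)
curr[1, 2] = (200, 120, 100)
print(hand_candidates(prev, curr)[1, 2])  # True
```

Combining the two cues suppresses both static skin-colored regions (e.g., the face when it is still) and moving non-skin regions, which is why this pairing recurs in the vision-based systems discussed next.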
|
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> Abstract This paper describes a method of classifying single view deaf-and-mute sign language motion images. We suppose the sign language word is composed of a time sequence of units called cheremes. The chereme is described by handshape, movement, and location of the hand, which can be said to express the 3-D features of the sign language. First, a dictionary for recognizing the sign language is made based on the cheremes. Then, the macro 2-D features of the location of a hand and its movement are extracted from the red component of the input color image sequence. Further, the micro 2-D features of the shape of the hand are also extracted if necessary. The 3-D feature descriptions of the dictionary are converted into 2-D image features, and the input sign language image is classified according to the extracted features of the 2-D image. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> Recent research on model-based image coding for videotelephone and videoconferencing applications has mostly been concerned with head motion tracking and typically represents the human head as a 3D wire-frame model with texture-mapped surface features. However, the movements of the arms and hands are also important, particularly in sign language communication, and therefore should be included in the overall model. The paper describes a system which uses an articulated generalised cylindrical human model to track limb movements in a sequence of images. It outlines the closed-loop strategy developed to recognise and track human body motion and presents initial results for a complete implementation of the system. 
<s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> We present a prediction-and-verification segmentation scheme using attention images from multiple fixations. A major advantage of this scheme is that it can handle a large number of different deformable objects presented in complex backgrounds. The scheme is also relatively efficient. The system was tested to segment hands in sequences of intensity images, where each sequence represents a hand sign in American Sign Language. The experimental result showed a 95 percent correct segmentation rate with a 3 percent false rejection rate. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> We present a system for recognising hand-gestures in Sign language. The system works in real-time and uses input from a colour video camera. The user wears different coloured gloves on either hand and colour matching is used to distinguish the hands from each other and from the background. So far the system has been tested in fixed lighting conditions, with the camera a fixed distance from the user. The system is user-dependent. <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> Sign Language is a rich and expressive means of communication used by profoundly deaf people as an alternative to speech. Computer recognition of sign language represents a demanding research objective analogous to recognising continuous speech, but has many more potential applications in areas such as human body motion tracking and analysis, surveillance, video telephony, and non-invasive interfacing with virtual reality applications. 
In this paper, we survey those aspects of human body motion which are relevant to sign language, and outline an overall system architecture for computer vision-based sign language recognition. We then discuss the initial stages of processing required, and show how recognition of static manual and facial gestures can be used to provide the low-level features from which an integrated multi-channel dynamic gesture recognition system can be constructed. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> We describe a video-based analysis system for acquisition and classification of hand-arm motion concerning German sign language. These motions are recorded with a single video camera by use of a modular framegrabber system. Data acquisition as well as motion classification are performed in real-time. A colour coded glove and coloured markers at the elbow and shoulder are used. These markers are segmented from the recorded input images as a first step of image processing. Thereafter features of these coloured areas are calculated which are used for determining the 20 positions for each frame and hence the positions of hand and arm. The missing third dimension is derived from a geometric model of the human hand-arm system. The sequence of the position data is converted into a certain representation of motion. Motion is derived from rule-based classification of the performed gesture, which yields a recognition rate of 95%. <s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> This paper is concerned with the video-based recognition of signs. Concentrating on the manual parameters of sign language, the system aims for the signer dependent recognition of 262 different signs taken from Sign Language of the Netherlands. 
For Hidden Markov Modelling a sign is considered a doubly stochastic process, represented by an unobservable state sequence. The observations emitted by the states are regarded as feature vectors, that are extracted from video frames. This work deals with three topics: Firstly the recognition of isolated signs, secondly the influence of variations of the feature vector on the recognition rate and thirdly an approach for the recognition of connected signs. The system achieves recognition rates up to 94% for isolated signs and 73% for a reduced vocabulary of connected signs. <s> BIB007 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> This paper documents the recognition method of deciphering Japanese sign language(JSL) using projected images. The goal of the movement recognition is to foster communication between hearing impaired and people capable of normal speech. We uses a stereo camera for recording three-dimensional movements, a image processing board for tracking movements, and a personal computer for an image processor charting the recognition of JSL patterns. This system works by formalizing tile space area of the signers according to the characteristics of the human body, determining components such as location and movements, and then recognizing sign language patterns. The system is able to recognize JSL by determining the extent of similarities in the sign field, and does so even when vibrations in hand movements occur and when there are differences in body build. We obtained useful results from recognition experiments in 38 different JSL in two signers. <s> BIB008 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> We present an approach to continuous American sign language (ASL) recognition, which uses as input 3D data of arm motions. 
We use computer vision methods for 3D object shape and motion parameter extraction and an ascension technologies 'Flock of Birds' interchangeably to obtain accurate 3D movement parameters of ASL sentences, selected from a 53-sign vocabulary and a widely varied sentence structure. These parameters are used as features for hidden Markov models (HMMs). To address coarticulation effects and improve our recognition results, we experimented with two different approaches. The first consists of training context-dependent HMMs and is inspired by speech recognition systems. The second consists of modeling transient movements between signs and is inspired by the characteristics of ASL phonology. Our experiments verified that the second approach yields better recognition results. <s> BIB009 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> The paper describes a real-time system which tracks the uncovered/unmarked hands of a person performing sign language. It extracts the face and hand regions using their skin colors, computes blobs and then tracks the location of each hand using a Kalman filter. The system has been tested for hand tracking using actual sign-language motion by native signers. The experimental results indicate that the system is capable of tracking hands even while they are overlapping the face. <s> BIB010 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). 
Both experiments use a 40-word lexicon. <s> BIB011 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> This paper presents a sign language recognition system which consists of three modules: model-based hand tracking, feature extraction, and gesture recognition using a 3D Hopfield neural network (HNN). The first one uses the Hausdorff distance measure to track shape-variant hand motion, the second one applies the scale and rotation-invariant Fourier descriptor to characterize hand figures, and the last one performs a graph matching between the input gesture model and the stored models by using a 3D modified HNN to recognize the gesture. Our system tests 15 different hand gestures. The experimental results show that our system can achieve above 91% recognition rate, and the recognition process time is about 10 s. The major contribution in this paper is that we propose a 3D modified HNN for gesture recognition which is more reliable than the conventional methods. <s> BIB012 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> A sign language recognition system is required to use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. We present an adequate local feature recognizer for a sign language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols which correspond to clusters by a clustering technique. The clusters are created from a training set of extracted hand images so that a similar appearance can be classified into the same cluster on an eigenspace. The experimental results indicate that our system can recognize a sign language word even in two-handed and hand-to-hand contact cases. 
<s> BIB013 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> Tracking interacting human body parts from a single two-dimensional view is difficult due to occlusion, ambiguity and spatio-temporal discontinuities. We present a Bayesian network method for this task. The method is not reliant upon spatio-temporal continuity, but exploits it when present. Our inference-based tracking model is compared with a CONDENSATION model augmented with a probabilistic exclusion mechanism. We show that the Bayesian network has the advantages of fully modelling the state space, explicitly representing domain knowledge, and handling complex interactions between variables in a globally consistent and computationally effective manner. <s> BIB014 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> Abstract In this paper, we present a new approach to recognizing hand signs. In this approach, motion recognition (the hand movement) is tightly coupled with spatial recognition (hand shape). The system uses multiclass, multidimensional discriminant analysis to automatically select the most discriminating linear features for gesture classification. A recursive partition tree approximator is proposed to do classification. This approach combined with our previous work on hand segmentation forms a new framework which addresses the three key aspects of hand sign interpretation: hand shape, location, and movement. The framework has been tested to recognize 28 different hand signs. The experimental results show that the system achieved a 93.2% recognition rate for test sequences that had not been used in the training phase. It is shown that our approach provides performance better than that of nearest neighbor classification in the eigensubspace.
<s> BIB015 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> We are currently developing a vision-based sign language recognition system for mobile use. This requires operability in different environments with a large range of possible users, ideally under arbitrary conditions. In this paper, the problem of finding relevant information in single-view image sequences is tackled. We discuss some issues in low level image cues and present an approach for the fast detection of a signing person's hands. This is achieved by using a modified generic skin color model combined with pixel level motion information, which is obtained from motion history images. The approach is demonstrated with a watershed segmentation algorithm. <s> BIB016 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> This paper is concerned with the automatic recognition of German continuous sign language. For the most user-friendliness only one single color video camera is used for image recording. The statistical approach is based on the Bayes decision rule for minimum error rate. Following speech recognition system designs, which are in general based on subunits, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs will be outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits called fenone. K-means algorithm is used for the definition of such fenones. The software prototype of the system is currently evaluated in experiments.
<s> BIB017 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> This paper introduces a model-based hand gesture recognition system, which consists of three phases: feature extraction, training, and recognition. In the feature extraction phase, a hybrid technique combines the spatial (edge) and the temporal (motion) information of each frame to extract the feature images. Then, in the training phase, we use the principal component analysis (PCA) to characterize spatial shape variations and the hidden Markov models (HMM) to describe the temporal shape variations. A modified Hausdorff distance measurement is also applied to measure the similarity between the feature images and the pre-stored PCA models. The similarity measures are referred to as the possible observations for each frame. Finally, in recognition phase, with the pre-trained PCA models and HMM, we can generate the observation patterns from the input sequences, and then apply the Viterbi algorithm to identify the gesture. In the experiments, we prove that our method can recognize 18 different continuous gestures effectively. <s> BIB018 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> We present an algorithm for extracting and classifying two-dimensional motion in an image sequence based on motion trajectories. First, a multiscale segmentation is performed to generate homogeneous regions in each frame. Regions between consecutive frames are then matched to obtain two-view correspondences. Affine transformations are computed from each pair of corresponding regions to define pixel matches. Pixels matches over consecutive image pairs are concatenated to obtain pixel-level motion trajectories across the image sequence. Motion patterns are learned from the extracted trajectories using a time-delay neural network. 
We apply the proposed method to recognize 40 hand gestures of American Sign Language. Experimental results show that motion patterns of hand gestures can be extracted and recognized accurately using motion trajectories. <s> BIB019 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> We present a system for tracking the hands of a user in a frontal camera view for gesture recognition purposes. The system uses multiple cues, incorporates tracing and prediction algorithms, and applies probabilistic inference to determine the trajectories of the hands reliably even in case of hand-face overlap. A method for assessing tracking quality is also introduced. Tests were performed with image sequences of 152 signs from German Sign Language, which have been segmented manually beforehand to offer a basis for quantitative evaluation. A hit rate of 81.1% was achieved on this material. <s> BIB020 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> This paper deals with the automatic recognition of German signs. The statistical approach is based on the Bayes decision rule for minimum error rate. Following speech recognition system designs, which are in general based on phonemes, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs is outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected, as new signs can be added to the vocabulary database without re-training the existing hidden Markov models (HMMs) for subunits. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits. 
In first experiments, a recognition accuracy of 92.5% was achieved for 100 signs, which were previously trained. For 50 new signs an accuracy of 81% was achieved without retraining of subunit-HMMs. <s> BIB021 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> Abstract In this paper, we introduce a hand gesture recognition system to recognize continuous gesture before stationary background. The system consists of four modules: a real time hand tracking and extraction, feature extraction, hidden Markov model (HMM) training, and gesture recognition. First, we apply a real-time hand tracking and extraction algorithm to trace the moving hand and extract the hand region, then we use the Fourier descriptor (FD) to characterize spatial features and the motion analysis to characterize the temporal features. We combine the spatial and temporal features of the input image sequence as our feature vector. After having extracted the feature vectors, we apply HMMs to recognize the input gesture. The gesture to be recognized is separately scored against different HMMs. The model with the highest score indicates the corresponding gesture. In the experiments, we have tested our system to recognize 20 different gestures, and the recognition rate is above 90%. <s> BIB022 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> The ability to detect a person's unconstrained hand in a natural video sequence has applications in sign language, gesture recognition and HCI. This paper presents a novel, unsupervised approach to training an efficient and robust detector which is capable of not only detecting the presence of human hands within an image but classifying the hand shape. A database of images is first clustered using a k-method clustering algorithm with a distance metric based upon shape context.
From this, a tree structure of boosted cascades is constructed. The head of the tree provides a general hand detector while the individual branches of the tree classify a valid shape as belonging to one of the predetermined clusters exemplified by an indicative hand shape. Preliminary experiments carried out showed that the approach boasts a promising 99.8% success rate on hand detection and 97.4% success at classification. Although we demonstrate the approach within the domain of hand shape it is equally applicable to other problems where both detection and classification are required for objects that display high variability in appearance. <s> BIB023 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> This paper describes a visual object detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features and yields extremely efficient classifiers [4]. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. A set of experiments in the domain of face detection are presented. The system yields face detection performance comparable to the best previous systems [16, 11, 14, 10, 1]. Implemented on a conventional desktop, face detection proceeds at 15 frames per second. <s> BIB024
|
In order to capture the whole signing space, the entire upper body needs to be in the camera's field-of-view (FOV). The hand(s) must be located in the image sequence and this is generally implemented by using color, motion, and/or edge information. If skin-color detection is used, the signer is often required to wear long-sleeved clothing, with restrictions on other skin-colored objects in the background ( BIB016 , BIB010 , BIB013 , BIB014 , BIB011 , BIB019 , BIB020 ). Skin-color detection was combined with motion cues in Akyol and Alvarado BIB016 , Imagawa et al. BIB010 , Yang et al. BIB019 , and combined with edge detection in Terrillon et al. . The hands were distinguished from the face with the assumption that the head is relatively static in BIB016 , BIB010 , BIB011 , and that the head region is bigger in size in BIB019 . A multilayer perceptron neural network-based frontal face detector was used in for the same purpose. Color cues have also been used in conjunction with colored gloves ( BIB007 , BIB017 , BIB021 , BIB004 , BIB005 ). Motion cues were used in BIB003 , BIB015 , BIB012 , BIB018 , with the assumption that the hand is the only moving object on a stationary background and that the signer's torso and head are relatively still. Another common requirement is that the hand must be constantly moving. In Chen et al. BIB022 and Huang and Jeng BIB018 , the hand was detected by logically ANDing difference images with edge maps and skin-color regions. In Cui and Weng's system BIB003 , BIB015 , an outline of the motion-detected hand was obtained by mapping partial views of the hand to previously learned hand contours, using a hierarchical nearest neighbor decision rule. This yielded 95 percent hand detection accuracy, but at a high computational cost (58.3s per frame). Ong and Bowden BIB023 detected hands with 99.8 percent accuracy in greyscale images with shape information alone, using a boosted cascade of classifiers BIB024 .
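The skin-color-plus-motion cue combination described above can be sketched with simple per-pixel rules. The RGB thresholds and the frame-difference threshold below are illustrative placeholders only, not values taken from any of the cited systems:

```python
import numpy as np

def skin_mask(frame_rgb):
    """Crude rule-based skin-color detector. The RGB thresholds are
    illustrative only, not taken from any cited system."""
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) \
        & ((r - np.minimum(g, b)) > 15)

def moving_skin(prev_rgb, curr_rgb, motion_thresh=25):
    """AND the skin-color mask with frame-difference motion, mirroring
    the cue combination: a hand pixel must look like skin AND be moving."""
    diff = np.abs(curr_rgb.astype(int) - prev_rgb.astype(int)).max(axis=-1)
    return skin_mask(curr_rgb) & (diff > motion_thresh)
```

Such a rule also makes the stated constraints obvious: any skin-colored, moving region, for example a bare forearm or a nodding face, passes the test just as well as a hand, which is why long sleeves and a relatively static head are required.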
Signers were constrained to wear long-sleeved dark clothing, in front of mostly dark backgrounds. Tanibata et al. extracted skin, clothes, head, and elbow region by using a very restrictive person-specific template that required the signer to be seated in a known initial position/pose. Some of the other works also localized the body torso ( BIB007 , BIB017 , BIB021 , BIB001 ), elbow and shoulder ( BIB006 ), along with the hands and face, using color cues and knowledge of the body's geometry. This allowed the position and movement of the hands to be referenced to the signer's body. Two-dimensional tracking can be performed using blob-based ( BIB010 , BIB011 , ), view-based ( BIB018 ), or hand contour/boundary models ( BIB022 , BIB015 , BIB012 ), or by matching motion-segmented regions ( BIB019 ). Particularly challenging is tracking in the presence of occlusion. Some works avoid the occurrence of occlusion entirely by their choice of camera angle ( BIB019 ), sign vocabulary ( BIB022 , BIB012 , BIB018 , ), or by having signs performed unnaturally so as to avoid occluding the face ( BIB015 ). In these and other works, the left hand and/or face may be excluded from the image FOV ( BIB022 , BIB012 , BIB018 , BIB001 , ). Another simplification is to use colored gloves, whereby face/hand overlap becomes straightforward to deal with. In the case of unadorned hands, simple methods for tracking and dealing with occlusions are generally unsatisfactory. For example, prediction techniques are used to estimate hand location based on the model dynamics and previously known locations, with the assumption of small, continuous hand movement ( BIB022 , BIB010 , BIB011 , , BIB019 ). Starner et al.'s BIB011 method of subtracting the (assumed static) face region from the merged face/hand blob can only handle small overlaps. Overlapping hands were detected, but, for simplicity, features extracted from the merged blob were assigned to both hands.
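The prediction step underlying several of these trackers ( BIB010 , BIB011 ) is commonly a Kalman filter with a constant-velocity motion model for the hand-blob centroid. A minimal numpy sketch, with illustrative (not system-specific) noise settings:

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal 2D constant-velocity Kalman filter for a hand-blob centroid.
    State: [x, y, vx, vy]; measurement: [x, y]. Noise levels are illustrative."""
    def __init__(self, x, y, q=1.0, r=4.0):
        self.s = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0  # dt = 1 frame
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * q   # process noise
        self.R = np.eye(2) * r   # measurement noise

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.s[:2]
```

During face/hand overlap the update step is typically skipped and the predicted centroid carried forward, which is exactly why such trackers degrade under fast or discontinuous hand motion that violates the small-movement assumption.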
In addition, the left/right hand labels were always assigned to the left- and right-most hand blobs, respectively. Imagawa et al. BIB010 also had problems dealing with complex bimanual hand movements (crossing, overlapping and bouncing back) as Kalman filters were used for each hand without data association. Tracking accuracy of 82-97 percent was obtained in a lab situation but this degraded to as low as 74 percent for a published videotape with realistic signing at natural speed and NMS (this violated their assumptions of small hand movement between adjacent frames and a relatively static head). Their later work BIB013 dealt with face/hand overlaps by applying a sliding observation window over the merged blob and computing the likelihood of the window subimage belonging to one of the possible handshape classes. Hand location was correctly determined with 85 percent success rate. Tanibata et al. distinguished the hands and face in cases of overlap by using texture templates from previously found hand and face regions. This method was found to be unsatisfactory when the interframe change in handshape, face orientation, or facial expression was large. The more robust tracking methods that can deal with fast, discontinuous hand motion, significant overlap, and complex hand interactions do not track the hands and face separately, but rather apply probabilistic reasoning for simultaneous assignment of labels to the possible hand/face regions BIB020 , BIB014 . In both these works, the assumption is that only the two largest skin-colored blobs other than the head could be hands (thus restricting other skin-colored objects in the background and requiring long-sleeved clothing). Zieren et al. BIB020 tracked (with 81.1 percent accuracy) both hands and face in video sequences of 152 German Sign Language (GSL) signs.
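The sliding-observation-window idea of BIB013 amounts to scoring every window position over the merged face/hand blob and taking the best one. In the sketch below, the default `score_fn` is a trivial foreground-pixel count standing in for the handshape-class likelihood used in the original system:

```python
import numpy as np

def locate_hand(mask, win=8, score_fn=None):
    """Slide a win x win window over a binary merged-blob mask and return
    the top-left corner of the highest-scoring window. score_fn maps a
    window subimage to a likelihood; the default simply counts foreground
    pixels (a stand-in for a learned handshape model)."""
    if score_fn is None:
        score_fn = lambda w: w.sum()
    h, w = mask.shape
    best, best_pos = -np.inf, (0, 0)
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            s = score_fn(mask[i:i + win, j:j + win])
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos
```

The exhaustive scan is what makes the approach robust to overlap: the hand is localized by appearance within the merged region rather than by segmenting it out first.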
Probabilistic reasoning using heuristic rules (based on multiple features such as relative positions of hands, sizes of skin-colored blobs, and Kalman filter prediction) was applied for labeling detected skin-colored blobs. Sherrah and Gong BIB014 demonstrated similarly good results while allowing head and body movement with the assumption that the head can be tracked reliably . Multiple cues (motion, color, orientation, size and shape of clusters, distance relative to other body parts) were used to infer blob identities with a Bayesian Network whose structure and node conditional probability distributions represented constraints of articulated body parts. In contrast to the above works which use 2D approaches, Downton and Drouet BIB002 used a 3D model-based approach where they built a hierarchical cylindrical model of the upper body, and implemented a project-and-match process with detected edges in the image to obtain kinematic parameters for the model. Their method failed to track after a few frames due to error propagation in the motion estimates. There are also a few works that use multiple cameras to obtain 3D measurements, however at great computational cost. Matsuo et al. BIB008 used stereo cameras to localize the hands in 3D and estimate the location of body parts. Vogler and Metaxas BIB009 placed three cameras orthogonally to overcome occlusion, and used deformable models for the arm/hand in each of the three camera views. With regard to background complexity, several works use uniform backgrounds ( BIB007 , BIB017 , BIB021 , BIB012 , BIB008 , BIB001 , BIB019 , BIB020 ). Even with nonuniform background, background subtraction was usually not used to segment out the signer. Instead, the methods focused on using various cues to directly locate the hands, face, or other body parts with simplifying constraints. In contrast, Chen et al. BIB022 used background modeling and subtraction to extract the foreground within which the hand was located. 
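The probabilistic blob-labeling step of BIB020 , BIB014 can be caricatured as a search over label-to-blob assignments that maximizes a combined cue score. The scoring below (distance to a tracker-predicted position plus a size prior, with an arbitrary weight) is a heavily simplified stand-in for the heuristic rules and Bayesian-network inference of those systems:

```python
import numpy as np
from itertools import permutations

def label_blobs(blobs, predicted, size_prior, size_weight=0.5):
    """Assign labels (e.g. face, left, right) to skin-colored blobs by
    maximizing a combined cue score. blobs: list of (x, y, area);
    predicted: dict label -> (x, y) predicted position;
    size_prior: dict label -> expected area. All weights are illustrative."""
    labels = list(predicted)
    best, best_assign = -np.inf, None
    for perm in permutations(range(len(blobs)), len(labels)):
        score = 0.0
        for lab, idx in zip(labels, perm):
            x, y, area = blobs[idx]
            px, py = predicted[lab]
            score -= np.hypot(x - px, y - py)                   # position cue
            score -= size_weight * abs(area - size_prior[lab])  # size cue
        if score > best:
            best, best_assign = score, {l: i for l, i in zip(labels, perm)}
    return best_assign
```

Because all labels are assigned jointly rather than tracked independently, a crossing of the two hands cannot produce the label-swap failures seen with per-hand Kalman filters that lack data association.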
This eases some imaging restrictions and constraints; BIB022 did not require colored gloves and long-sleeved clothing, and allowed complex cluttered background that included moving objects. However, the hand was required to be constantly moving. The imaging restrictions and constraints encountered in vision-based approaches are listed in Table 1 .
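The background modeling and subtraction step used by Chen et al. BIB022 can be approximated with a generic running-average model; the update rule and thresholds below are a common textbook scheme, not the specific model of BIB022 :

```python
import numpy as np

class RunningBackground:
    """Running-average background model with thresholded subtraction,
    a generic sketch of the background-modeling step (the exact model
    in BIB022 is not reproduced here)."""
    def __init__(self, first_frame, alpha=0.05, thresh=30):
        self.bg = first_frame.astype(float)
        self.alpha = alpha
        self.thresh = thresh

    def apply(self, frame):
        """Return a boolean foreground mask and update the model."""
        fg = np.abs(frame.astype(float) - self.bg).max(axis=-1) > self.thresh
        # Update the model only where the scene is judged static, so the
        # moving hand does not get absorbed into the background.
        self.bg[~fg] = (1 - self.alpha) * self.bg[~fg] \
            + self.alpha * frame[~fg]
        return fg
```

This also illustrates the remaining constraint noted above: a hand that stops moving is gradually absorbed into the background model, which is why the hand must be constantly moving.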
|
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Abstract This paper describes a method of classifying single view deaf-and-mute sign language motion images. We suppose the sign language word is composed of a time sequence of units called cheremes. The chereme is described by handshape, movement, and location of the hand, which can be said to express the 3-D features of the sign language. First, a dictionary for recognizing the sign language is made based on the cheremes. Then, the macro 2-D features of the location of a hand and its movement are extracted from the red component of the input color image sequence. Further, the micro 2-D features of the shape of the hand are also extracted if necessary. The 3-D feature descriptions of the dictionary are converted into 2-D image features, and the input sign language image is classified according to the extracted features of the 2-D image. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> I present a visual hand tracking system that can recover 3D hand shape and motion from a stream of 2D input images. The hand tracker was originally intended as part of a computer interface for (American) sign language signers, but the system may also serve as a general purpose hand tracking tool. In contrast to some previous 2D-to-sign approaches, I am taking the 3-dimensional nature of the signing process into account. My main objective was to create a versatile hand model and to design an algorithm that uses this model in an effective way to recover the 3D motion of the hand and fingers from 2D clues. The 2D clues are provided by colour-coded markers on the finger joints. 
The system then finds the 3D shape and motion of the hand by fitting a simple skeletonlike model to the joint locations found in the image. This fitting is done using a nonlinear, continuous optimization approach that gradually adjusts the pose of the model until correspondence with the image is reached. My present implementation of the tracker does not work in real time. However, it should be possible to achieve at least slow real-time tracking with appropriate hardware (a board for real-time image-capturing and colour-marker detection) and some code optimization. Such an 'upgraded7 version of the tracker might serve as a prototype for a Lcolour glove7 package providing a cheap and comfortable-though maybe less powerful-alternative to the data glove. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> !, Model-based vision is firmly established as a robust approach to recognizing and locating known rigid objects in the presence of noise, clutter, and occlusion. It is more problematic to apply modelbased methods to images of objects whose appearance can vary, though a number of approaches based on the use of flexible templates have been proposed. The problem with existing methods is that they sacrifice model specificity in order to accommodate variability, thereby compromising robustness during image interpretation. We argue that a model should only be able to deform in ways characteristic of the class of objects it represents. We describe a method for building models by learning patterns of variability from a training set of correctly annotated images. These models can be used for image search in an iterative refinement algorithm analogous to that employed by Active Contour Models (Snakes). The key difference is that our Active Shape Models can only deform to fit the data in ways consistent with the training set. 
We show several practical examples where we have built such models and used them to locate partially occluded objects in noisy, cluttered images. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> We present a prediction-and-verification segmentation scheme using attention images from multiple fixations. A major advantage of this scheme is that it can handle a large number of different deformable objects presented in complex backgrounds. The scheme is also relatively efficient. The system was tested to segment hands in sequences of intensity images, where each sequence represents a hand sign in American Sign Language. The experimental result showed a 95 percent correct segmentation rate with a 3 percent false rejection rate. <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> We describe a video-based analysis system for acquisition and classification of hand-arm motion concerning German sign language. These motions are recorded with a single video camera by use of a modular framegrabber system. Data acquisition as well as motion classification are performed in real-time. A colour coded glove and coloured markers at the elbow and shoulder are used. These markers are segmented from the recorded input images as a first step of image processing. Thereafter features of these coloured areas are calculated which are used for determining the 20 positions for each frame and hence the positions of hand and arm. The missing third dimension is derived from a geometric model of the human hand-arm system. The sequence of the position data is converted into a certain representation of motion. Motion is derived from rule-based classification of the performed gesture, which yields a recognition rate of 95%.
<s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> We present a system for recognising hand-gestures in Sign language. The system works in real-time and uses input from a colour video camera. The user wears different coloured gloves on either hand and colour matching is used to distinguish the hands from each other and from the background. So far the system has been tested in fixed lighting conditions, with the camera a fixed distance from the user. The system is user-dependent. <s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Sign Language is a rich and expressive means of communication used by profoundly deaf people as an alternative to speech. Computer recognition of sign language represents a demanding research objective analogous to recognising continuous speech, but has many more potential applications in areas such as human body motion tracking and analysis, surveillance, video telephony, and non-invasive interfacing with virtual reality applications. In this paper, we survey those aspects of human body motion which are relevant to sign language, and outline an overall system architecture for computer vision-based sign language recognition. We then discuss the initial stages of processing required, and show how recognition of static manual and facial gestures can be used to provide the low-level features from which an integrated multi-channel dynamic gesture recognition system can be constructed. 
<s> BIB007 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> This work presents a design for a human computer interface capable of recognizing 25 gestures from the international hand alphabet in real-time. Principal Component Analysis (PCA) is used to extract features from images of gestures. The features represent gesture images in terms of an optimal coordinate system, in which the classes of gestures make up clusters. The system is divided into two parts: an off-line and an on-line part. The feature selection and generation of a classifier is performed off-line. On-line the obtained features and the classifier are used to classify new and unknown gesture images in real-time. Results show that an overall off-line recognition rate averaging 99% on 1500 images is achieved when trained on 1000 other images. The on-line system runs at 14 frames per second. <s> BIB008 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> This paper is concerned with the video-based recognition of signs. Concentrating on the manual parameters of sign language, the system aims for the signer dependent recognition of 262 different signs taken from Sign Language of the Netherlands. For Hidden Markov Modelling a sign is considered a doubly stochastic process, represented by an unobservable state sequence. The observations emitted by the states are regarded as feature vectors, that are extracted from video frames. This work deals with three topics: Firstly the recognition of isolated signs, secondly the influence of variations of the feature vector on the recognition rate and thirdly an approach for the recognition of connected signs. The system achieves recognition rates up to 94% for isolated signs and 73% for a reduced vocabulary of connected signs. 
<s> BIB009 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> A new pattern matching method, the partly-hidden Markov model, is proposed for gesture recognition. The hidden Markov model, which is widely used for time series pattern recognition, can deal with only piecewise stationary stochastic processes. We solved this problem by introducing the modified second order Markov model, in which the first state is hidden and the second one is observable. As shown by the results of 6 sign-language recognition tests, the error rate was improved by 73% compared with normal HMM. <s> BIB010 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> This paper documents the recognition method of deciphering Japanese sign language (JSL) using projected images. The goal of the movement recognition is to foster communication between hearing impaired and people capable of normal speech. We use a stereo camera for recording three-dimensional movements, an image processing board for tracking movements, and a personal computer for an image processor charting the recognition of JSL patterns. This system works by formalizing the space area of the signers according to the characteristics of the human body, determining components such as location and movements, and then recognizing sign language patterns. The system is able to recognize JSL by determining the extent of similarities in the sign field, and does so even when vibrations in hand movements occur and when there are differences in body build. We obtained useful results from recognition experiments on 38 different JSL signs in two signers.
<s> BIB011 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> We present an approach to continuous American sign language (ASL) recognition, which uses as input 3D data of arm motions. We use computer vision methods for 3D object shape and motion parameter extraction and an ascension technologies 'Flock of Birds' interchangeably to obtain accurate 3D movement parameters of ASL sentences, selected from a 53-sign vocabulary and a widely varied sentence structure. These parameters are used as features for hidden Markov models (HMMs). To address coarticulation effects and improve our recognition results, we experimented with two different approaches. The first consists of training context-dependent HMMs and is inspired by speech recognition systems. The second consists of modeling transient movements between signs and is inspired by the characteristics of ASL phonology. Our experiments verified that the second approach yields better recognition results. <s> BIB012 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon. 
<s> BIB013 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> This paper presents a sign language recognition system which consists of three modules: model-based hand tracking, feature extraction, and gesture recognition using a 3D Hopfield neural network (HNN). The first one uses the Hausdorff distance measure to track shape-variant hand motion, the second one applies the scale and rotation-invariant Fourier descriptor to characterize hand figures, and the last one performs a graph matching between the input gesture model and the stored models by using a 3D modified HNN to recognize the gesture. Our system tests 15 different hand gestures. The experimental results show that our system can achieve above 91% recognition rate, and the recognition process time is about 10 s. The major contribution in this paper is that we propose a 3D modified HNN for gesture recognition which is more reliable than the conventional methods. <s> BIB014 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> The paper describes a real-time system which tracks the uncovered/unmarked hands of a person performing sign language. It extracts the face and hand regions using their skin colors, computes blobs and then tracks the location of each hand using a Kalman filter. The system has been tested for hand tracking using actual sign-language motion by native signers. The experimental results indicate that the system is capable of tracking hands even while they are overlapping the face. 
<s> BIB015 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> This paper presents a system for the recognition of sign language based on a theory of shape representation using size functions proposed by P. Frosini [5]. Our system consists of three modules: feature extraction, sign representation and sign recognition. The first performs an edge detection operation, the second uses size functions and inertia moments to represent hand signs, and the last uses a neural network to recognize hand gestures. Sign representation is an important step which we will deal with. Unlike previous work [15, 16], a new approach to the representation of hand gestures is proposed, based on size functions. Each sign is represented by means of a feature vector computed from a new pair of moment-based size functions. The work reported here indicates that moment-based size functions can be effectively used for the recognition of sign language even in the presence of shape changes due to differences in hands, position, style of signing, and viewpoint. <s> BIB016 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Since the human hand is highly articulated and deformable, hand posture recognition is a challenging example in the research on view-independent object recognition. Due to the difficulties of the model-based approach, the appearance-based learning approach is promising to handle large variation in visual inputs. However, the generalization of many proposed supervised learning methods to this problem often suffers from the insufficiency of labeled training data. This paper describes an approach to alleviate this difficulty by adding a large unlabeled training set. 
Combining supervised and unsupervised learning paradigms, a novel and powerful learning approach, the Discriminant-EM (D-EM) algorithm, is proposed in this paper to handle the case of a small labeled training set. Experiments show that D-EM outperforms many other learning methods. Based on this approach, we implement a gesture interface to recognize a set of predefined gesture commands, and it is also extended to hand detection. This algorithm can also apply to other object recognition tasks. <s> BIB017 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Abstract In this paper, we present a new approach to recognizing hand signs. In this approach, motion recognition (the hand movement) is tightly coupled with spatial recognition (hand shape). The system uses multiclass, multidimensional discriminant analysis to automatically select the most discriminating linear features for gesture classification. A recursive partition tree approximator is proposed to do classification. This approach combined with our previous work on hand segmentation forms a new framework which addresses the three key aspects of hand sign interpretation: hand shape, location, and movement. The framework has been tested to recognize 28 different hand signs. The experimental results show that the system achieved a 93.2% recognition rate for test sequences that had not been used in the training phase. It is shown that our approach provide performance better than that of nearest neighbor classification in the eigensubspace. <s> BIB018 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> A sign language recognition system is required to use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. 
We present an adequate local feature recognizer for a sign language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols which correspond to clusters by a clustering technique. The clusters are created from a training set of extracted hand images so that a similar appearance can be classified into the same cluster on an eigenspace. The experimental results indicate that our system can recognize a sign language word even in two-handed and hand-to-hand contact cases. <s> BIB019 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Automatic gesture recognition systems generally require two separate processes: a motion sensing process where some motion features are extracted from the visual input; and a classification process where the features are recognised as gestures. We have developed the Hand Motion Understanding (HMU) system that uses the combination of a 3D model-based hand tracker for motion sensing and an adaptive fuzzy expert system for motion classification. The HMU system understands static and dynamic hand signs of the Australian Sign Language (Auslan). This paper presents the hand tracker that extracts 3D hand configuration data with 21 degrees-of-freedom (DOFs) from a 2D image sequence that is captured from a single viewpoint, with the aid of a colour-coded glove. Then the temporal sequence of 3D hand configurations detected by the tracker is recognised as a sign by an adaptive fuzzy expert system. The HMU system was evaluated with 22 static and dynamic signs. Before training the HMU system achieved 91% recognition, and after training it achieved over 95% recognition. 
<s> BIB020 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Tracking interacting human body parts from a single two-dimensional view is difficult due to occlusion, ambiguity and spatio-temporal discontinuities. We present a Bayesian network method for this task. The method is not reliant upon spatio-temporal continuity, but exploits it when present. Our inferencebased tracking model is compared with a CONDENSATION model augmented with a probabilistic exclusion mechanism. We show that the Bayesian network has the advantages of fully modelling the state space, explicitly representing domain knowledge, and handling complex interactions between variables in a globally consistent and computationally effective manner. <s> BIB021 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Hand gestures play an important role in communication between people during their daily lives. But the extensive use of hand gestures as a mean of communication can be found in sign languages. Sign language is the basic communication method between deaf people. A translator is usually needed when an ordinary person wants to communicate with a deaf one. The work presented in this paper aims at developing a system for automatic translation of gestures of the manual alphabets in the Arabic sign language. In doing so, we have designed a collection of ANFIS networks, each of which is trained to recognize one gesture. Our system does not rely on using any gloves or visual markings to accomplish the recognition job. Instead, it deals with images of bare hands, which allows the user to interact with the system in a natural way. 
An image of the hand gesture is processed and converted into a set of features that comprises the lengths of some vectors which are selected to span the fingertips' region. The extracted features are rotation, scale, and translation invariant, which makes the system more flexible. The subtractive clustering algorithm and the least-squares estimator are used to identify the fuzzy inference system, and the training is achieved using the hybrid learning algorithm. Experiments revealed that our system was able to recognize the 30 Arabic manual alphabets with an accuracy of 93.55%. <s> BIB022 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> The accurate classification of hand gestures is crucial in the development of novel hand gesture-based systems designed for human-computer interaction (HCI) and for human alternative and augmentative communication (HAAC). A complete vision-based system, consisting of hand gesture acquisition, segmentation, filtering, representation and classification, is developed to robustly classify hand gestures. The algorithms in the subsystems are formulated or selected to optimally classify hand gestures. The gray-scale image of a hand gesture is segmented using a histogram thresholding algorithm. A morphological filtering approach is designed to effectively remove background and object noise in the segmented image. The contour of a gesture is represented by a localized contour sequence whose samples are the perpendicular distances between the contour pixels and the chord connecting the end-points of a window centered on the contour pixels. Gesture similarity is determined by measuring the similarity between the localized contour sequences of the gestures. Linear alignment and nonlinear alignment are developed to measure the similarity between the localized contour sequences.
Experiments and evaluations on a subset of American Sign Language (ASL) hand gestures show that, by using nonlinear alignment, no gestures are misclassified by the system. Additionally, it is also estimated that real-time gesture classification is possible through the use of a high-speed PC, high-speed digital signal processing chips and code optimization. <s> BIB023 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> This paper is concerned with the automatic recognition of German continuous sign language. For the most user-friendliness only one single color video camera is used for image recording. The statistical approach is based on the Bayes decision rule for minimum error rate. Following speech recognition system design, which are in general based on subunits, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs will be outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits called fenone. K-means algorithm is used for the definition of such fenones. The software prototype of the system is currently evaluated in experiments. <s> BIB024 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> This paper introduces a model-based hand gesture recognition system, which consists of three phases: feature extraction, training, and recognition. In the feature extraction phase, a hybrid technique combines the spatial (edge) and the temporal (motion) information of each frame to extract the feature images. 
Then, in the training phase, we use the principal component analysis (PCA) to characterize spatial shape variations and the hidden Markov models (HMM) to describe the temporal shape variations. A modified Hausdorff distance measurement is also applied to measure the similarity between the feature images and the pre-stored PCA models. The similarity measures are referred to as the possible observations for each frame. Finally, in the recognition phase, with the pre-trained PCA models and HMM, we can generate the observation patterns from the input sequences, and then apply the Viterbi algorithm to identify the gesture. In the experiments, we prove that our method can recognize 18 different continuous gestures effectively. <s> BIB025 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> We are currently developing a vision-based sign language recognition system for mobile use. This requires operability in different environments with a large range of possible users, ideally under arbitrary conditions. In this paper, the problem of finding relevant information in single-view image sequences is tackled. We discuss some issues in low level image cues and present an approach for the fast detection of a signing person's hands. This is achieved by using a modified generic skin color model combined with pixel level motion information, which is obtained from motion history images. The approach is demonstrated with a watershed segmentation algorithm. <s> BIB026 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Principal Component Analysis (PCA) and Multiple Discriminant Analysis (MDA) have long been used for the appearance-based hand posture recognition. In this paper, we propose a novel PCA/MDA scheme for hand posture recognition.
Unlike other PCA/MDA schemes, the PCA layer acts as a crude classification. Since posture alone cannot provide sufficient discriminating information, each input pattern will be given a likelihood of being in the nodes of PCA layers, instead of a strict division. Based on the Expectation-Maximization (EM) algorithm, we introduce three methods to estimate the parameters for this crude classification during training. The experiments on a 110-sign vocabulary show a significant improvement compared with the global PCA/MDA. <s> BIB027 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> This paper deals with the automatic recognition of German signs. The statistical approach is based on the Bayes decision rule for minimum error rate. Following speech recognition system designs, which are in general based on phonemes, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs is outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected, as new signs can be added to the vocabulary database without re-training the existing hidden Markov models (HMMs) for subunits. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits. In first experiences a recognition accuracy of 92,5% was achieved for 100 signs, which were previously trained. For 50 new signs an accuracy of 81% was achieved without retraining of subunit-HMMs. <s> BIB028 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> We present an algorithm for extracting and classifying two-dimensional motion in an image sequence based on motion trajectories. 
First, a multiscale segmentation is performed to generate homogeneous regions in each frame. Regions between consecutive frames are then matched to obtain two-view correspondences. Affine transformations are computed from each pair of corresponding regions to define pixel matches. Pixel matches over consecutive image pairs are concatenated to obtain pixel-level motion trajectories across the image sequence. Motion patterns are learned from the extracted trajectories using a time-delay neural network. We apply the proposed method to recognize 40 hand gestures of American Sign Language. Experimental results show that motion patterns of hand gestures can be extracted and recognized accurately using motion trajectories. <s> BIB029 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Abstract We demonstrate that a small number of 2D linear statistical models are sufficient to capture the shape and appearance of a face from a wide range of viewpoints. Such models can be used to estimate head orientation and track faces through large angles. Given multiple images of the same face we can learn a coupled model describing the relationship between the frontal appearance and the profile of a face. This relationship can be used to predict new views of a face seen from one view and to constrain search algorithms which seek to locate a face in multiple views simultaneously. <s> BIB030 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> We present a system for tracking the hands of a user in a frontal camera view for gesture recognition purposes. The system uses multiple cues, incorporates tracing and prediction algorithms, and applies probabilistic inference to determine the trajectories of the hands reliably even in case of hand-face overlap.
A method for assessing tracking quality is also introduced. Tests were performed with image sequences of 152 signs from German Sign Language, which have been segmented manually beforehand to offer a basis for quantitative evaluation. A hit rate of 81.1% was achieved on this material. <s> BIB031 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Abstract In this paper, we introduce a hand gesture recognition system to recognize continuous gesture before stationary background. The system consists of four modules: a real time hand tracking and extraction, feature extraction, hidden Markov model (HMM) training, and gesture recognition. First, we apply a real-time hand tracking and extraction algorithm to trace the moving hand and extract the hand region, then we use the Fourier descriptor (FD) to characterize spatial features and the motion analysis to characterize the temporal features. We combine the spatial and temporal features of the input image sequence as our feature vector. After having extracted the feature vectors, we apply HMMs to recognize the input gesture. The gesture to be recognized is separately scored against different HMMs. The model with the highest score indicates the corresponding gesture. In the experiments, we have tested our system to recognize 20 different gestures, and the recognizing rate is above 90%. <s> BIB032
Research has focused on understanding hand signing in SL or, in the more restrictive case, classification of fingerspelled alphabets and numbers. For the former, the field of view (FOV) includes the upper body of the signer, allowing the hands the range of movement required for signing. For fingerspelling, the range of hand motion is very small and consists mainly of finger configuration and orientation information. For full signing scenarios, features that characterize whole hand location and movement as well as appearance features that result from handshape and orientation are extracted, whereas for fingerspelling only the latter features are used. Thus, for works where the goal is classification of fingerspelling or handshape ( BIB022 , BIB008 , , BIB027 , BIB023 , BIB016 , BIB017 ), the entire FOV only contains the hand. In these works (with the exception of BIB017 ), the hand is generally restricted to palm facing the camera, against a uniform background. For full signing scenarios, a commonly extracted positional feature is the center-of-gravity of the hand blob. This can be measured in absolute image coordinates ( BIB013 ), relative to the face or body ( BIB009 , BIB024 , BIB028 , BIB010 , BIB001 , ), relative to the first gesture frame ( BIB018 ), or relative to the previous frame ( BIB010 ). Alternatively, motion features have been used to characterize hand motion, e.g., motion trajectories of hand pixels BIB029 or optical flow BIB032 . The above approaches extract measurements and features in 2D. In an effort to obtain 3D measurements, Hienz et al. BIB005 proposed a simple geometric model of the hand/arm to estimate the hand's distance to the camera using the shoulder, elbow, and hand's 2D positions. Approaches which directly measure 3D position using multiple cameras provide better accuracy but at the cost of higher computational complexity. Matsuo et al.'s BIB011 stereo camera system found the 3D position of both hands in a body-centered coordinate frame.
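The center-of-gravity features discussed above reduce, in the simplest case, to computing blob centroids and expressing one relative to another. A minimal sketch (the function names and toy masks are illustrative, not taken from any surveyed system) of a face-relative positional feature:

```python
import numpy as np

def blob_centroid(mask):
    """Center of gravity (x, y) of a binary blob mask."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def positional_feature(hand_mask, face_mask):
    """Hand-blob centroid expressed relative to the face centroid,
    analogous to the face/body-relative schemes surveyed above."""
    return blob_centroid(hand_mask) - blob_centroid(face_mask)

# Toy 8x8 segmentation masks: a 2x2 hand blob and a 2x2 face blob.
hand = np.zeros((8, 8), dtype=bool); hand[5:7, 5:7] = True
face = np.zeros((8, 8), dtype=bool); face[1:3, 3:5] = True
print(positional_feature(hand, face))  # hand is right of and below the face
```

The same centroid could instead be differenced against the first gesture frame or the previous frame to obtain the other positional variants mentioned above.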
Vogler and Metaxas' BIB012 orthogonal camera system extracted the 3D wrist position coordinates and orientation parameters relative to the signer's spine. Hand appearance features include segmented hand images, binary hand silhouettes or hand blobs, and hand contours. Segmented hand images are usually normalized for size, in-plane orientation, and/or illumination ( BIB018 , BIB006 ), and principal component analysis (PCA) is often applied for dimensionality reduction before further processing ( BIB008 , BIB027 , BIB019 , BIB017 ). In Starner et al. BIB013 and Tanibata et al. , geometric moments were calculated from the hand blob. Assan and Grobel BIB009 , Bauer and Kraiss BIB024 , BIB028 calculated the sizes, distances, and angles between distinctly colored fingers, palm, and back of the hand. Contour-based representations include various translation, scale, and/or in-plane rotation invariant features such as Fourier descriptors (FD) BIB032 , BIB014 , BIB007 , size functions BIB016 , the lengths of vectors from the hand centroid to the fingertips region BIB022 , and localized contour sequences BIB023 . Huang and Jeng BIB025 represented hand contours with Active Shape Models BIB003 , and extracted a modified Hausdorff distance measure between the prestored shape models and the hand contour in the input test image. Bowden and Sarhadi used PCA on training hand contours, but constructed nonlinear Point Distribution Models by piecewise linear approximation with clusters. Hand contour tracking was applied on a fingerspelling video sequence, and the model transitioned between clusters with probabilities that reflected information about shape space and alphabet probabilities in English. Though contour-based representations use invariant features, they may suffer from ambiguities resulting from different handshapes with similar contours. All of the above methods extracted 2D hand appearance features.
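To make the invariance of contour-based features concrete, the following sketch computes Fourier descriptors of a closed 2D contour in the standard way (treating boundary points as a complex signal); it is a generic illustration of the FD idea, not the exact formulation of any cited system:

```python
import numpy as np

def fourier_descriptors(contour, k=8):
    """Translation-, scale-, and in-plane-rotation-invariant Fourier
    descriptors of a closed 2D contour given as an (N, 2) array."""
    z = contour[:, 0] + 1j * contour[:, 1]  # boundary as complex signal
    F = np.fft.fft(z)
    F[0] = 0.0                  # drop DC term -> translation invariance
    mags = np.abs(F)            # magnitudes -> rotation/start-point invariance
    mags = mags / mags[1]       # normalize by first harmonic -> scale invariance
    return mags[1:k + 1]

# A circular contour, then a translated, scaled, and rotated copy:
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
c1 = np.stack([np.cos(t), np.sin(t)], axis=1)
c2 = 3.0 * c1 @ np.array([[0.0, -1.0], [1.0, 0.0]]) + np.array([5.0, 7.0])
print(np.allclose(fourier_descriptors(c1), fourier_descriptors(c2)))
```

Because only harmonic magnitudes (normalized by the first harmonic) are kept, both contours yield the same descriptor vector, which is exactly why such features are attractive for handshape classification and also why distinct handshapes with similar silhouettes can collide, as noted above.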
In contrast, Holden and Owens BIB020 and Dorner BIB002 employed a 3D model-based approach to estimate finger joint angles and 3D hand orientation. In both works, finger joints and the wrist were marked with distinct colors, and a 3D hand model was iteratively matched to the image content by comparing the projections of the hand model's joints with the corresponding joint markers detected in the image. Holden and Owens BIB020 could deal with missing markers due to the hand's self-occlusion by Kalman filter prediction. However, hand orientation was restricted to palm facing the camera. Dorner BIB002 estimated the hand model state based on constraints on the possible range of joint angles and state transitions, and could successfully track in the presence of out-of-plane rotations. However, processing speed was quite slow, requiring 5-6 s per frame. In these and other works using 3D hand models ( , ), the image FOV is assumed to contain only the hand at high resolution. In a sign recognition system, however, the image FOV would contain the entire upper body; hence, the hand size would be small. In addition, these works do not consider situations where the hand is partially occluded (for example, by the other hand). Fillbrandt et al. attempted to address the shortcomings of the above approaches, which directly find correspondences between image features and the 3D hand model. They used a network of 2D Active Appearance Models BIB030 as an intermediate representation between image features and a simplified 3D hand model with 9 degrees-of-freedom. Experimental results with high-resolution images of the hand against a uniform background yielded an average error of 10 percent in estimating finger parameters, while the error in estimating the 3D hand orientation was 10°-20°. The system ran at 4 fps on a 1 GHz Pentium III, and some good results were also obtained with low-resolution images and partly missing image information.
However, further work is needed before the model can be applied to a natural signing environment. In terms of processing speed, methods that operate at near real-time for tracking and/or feature extraction (roughly 4-16 fps) include BIB026 , BIB009 , BIB024 , BIB028 , BIB005 , BIB015 , BIB013 , BIB031 . Some of the other methods were particularly slow, for example: 1.6 s per frame (PII-330M) for tracking in Sherrah and Gong BIB021 , several seconds per frame for feature extraction in Tamura and Kawasaki BIB001 , 58.3 s per frame (SGI INDIGO 2 workstation) for hand segmentation in Cui and Weng BIB004 , and 60 s for hand segmentation plus 70 s for feature estimation in Huang and Jeng BIB025 . Direct-measure devices use trackers to directly measure the 3D position and orientation of the hand(s), and gloves to measure finger joint angles. More details on feature estimation from direct-measure devices can be found in Appendix C (www.computer.org/publications/dlib).
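For illustration, a frame-level feature vector from such direct-measure devices might be assembled as below. The field layout (3D position, unit quaternion, 18 joint angles) is a hypothetical example, not the data format of any particular tracker or glove.

```python
import numpy as np

def glove_frame_features(position, orientation_quat, joint_angles_deg):
    """One frame's feature vector from direct-measure devices: 3D hand
    position and an orientation quaternion from a tracker, plus finger joint
    angles from an instrumented glove. Layout is illustrative only."""
    pos = np.asarray(position, float)
    q = np.asarray(orientation_quat, float)
    q = q / np.linalg.norm(q)              # normalize to a unit quaternion
    angles = np.radians(joint_angles_deg)  # degrees -> radians
    return np.concatenate([pos, q, angles])

# e.g., 18 joint-angle sensors give a 3 + 4 + 18 = 25-dimensional frame vector
feat = glove_frame_features([0.1, 0.2, 0.3], [1.0, 0.0, 0.0, 0.0], [10.0] * 18)
```

A sequence of such per-frame vectors is the typical input to the classifiers discussed in the following section.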
|
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> A gesture recognition method for Japanese sign language is presented. We have developed a posture recognition system using neural networks which could recognize a finger alphabet of 42 symbols. We then developed a gesture recognition system where each gesture specifies a word. Gesture recognition is more difficult than posture recognition because it has to handle dynamic processes. To deal with dynamic processes we use a recurrent neural network. Here, we describe a gesture recognition method which can recognize continuous gesture. We then discuss the results of our research. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> A supervised learning neural network classifier that utilizes fuzzy sets as pattern classes is described. Each fuzzy set is an aggregate (union) of fuzzy set hyperboxes. A fuzzy set hyperbox is an n-dimensional box defined by a min point and a max point with a corresponding membership function. The min-max points are determined using the fuzzy min-max learning algorithm, an expansion-contraction process that can learn nonlinear class boundaries in a single pass through the data and provides the ability to incorporate new and refine existing classes without retraining. The use of a fuzzy set approach to pattern classification inherently provides degree of membership information that is extremely useful in higher level decision making. This paper will describe the relationship between fuzzy sets and pattern classification. It explains the fuzzy min-max classifier neural network implementation, it outlines the learning and recall algorithms, and it provides several examples of operation that demonstrate the strong qualities of this new neural network classifier. Pattern classification is a key element to many engineering solutions.
Sonar, radar, seismic, and diagnostic applications all require the ability to accurately classify a situation. Control, tracking, and prediction systems will often use classifiers to determine input-output relationships. Because of this wide range of applicability, pattern classification has been studied a great deal (13), (15), (19). This paper describes a neural network classifier that creates classes by aggregating several smaller fuzzy sets into a single fuzzy set class. This technique, introduced in (42) as an extension of earlier work (41), can learn pattern classes in a single pass through the data, it can add new pattern classes on the fly, it can refine existing pattern classes as new information is received, and it uses simple operations that allow for quick execution. Fuzzy min-max classification neural networks are built using hyperbox fuzzy sets. A hyperbox defines a region of the n-dimensional pattern space that has patterns with full class membership. A hyperbox is completely defined by its min point and its max point, and a membership function is defined with respect to these hyperbox min-max points. The min-max (hyperbox) membership function combination defines a fuzzy set, hyperbox fuzzy sets are aggregated to form a single fuzzy set class, and the resulting structure fits naturally into a neural network framework; hence this classification system is called a fuzzy min-max classification neural network. Learning in the fuzzy min-max classification neural network is performed by properly placing and adjusting hyperboxes in the pattern space. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> The architecture and learning procedure underlying ANFIS (adaptive-network-based fuzzy inference system) is presented, which is a fuzzy inference system implemented in the framework of adaptive networks. 
By using a hybrid learning procedure, the proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. In the simulation, the ANFIS architecture is employed to model nonlinear functions, identify nonlinear components on-line in a control system, and predict a chaotic time series, all yielding remarkable results. Comparisons with artificial neural networks and earlier work on fuzzy modeling are listed and discussed. Other extensions of the proposed ANFIS and promising applications to automatic control and signal processing are also suggested. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> The design and evaluation of a two-stage neural network which can recognize isolated ASL signs is given. The input to this network is the hand shape and position data obtained from a DataGlove mounted with a Polhemus sensor. The first level consists of four backpropagation neural networks which can recognize the sign language phonology, namely, the 36 hand shapes, 10 locations, 11 orientations, and 11 hand movements. The recognized phonemes from the beginning, middle, and end of the sign are fed to the second stage which recognizes the actual signs. Both backpropagation and Kohonen's self-organizing neural networks were used to compare the performance and the expandability of the learned vocabulary. In the current work, six signers with differing hand sizes signed 14 signs which included hand shape, position, and motion fragile and triple robust signs. When a backpropagation network was used for the second stage, the results show that the network was able to recognize these signs with an overall accuracy of 86%. Further, the recognition results were linearly dependent on the size of the finger in relation to the metacarpophalangeal joint and the total length of the hand.
When the second stage was a Kohonen's self-organizing network, the network could not only recognize the signs with 84% accuracy, but also expand its learned vocabulary through relabeling. <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> We explore recognition implications of understanding gesture communication, having chosen American sign language as an example of a gesture language. An instrumented glove and specially developed software have been used for data collection and labeling. We address the problem of recognizing dynamic signing, i.e. signing performed at natural speed. Two neural network architectures have been used for recognition of different types of finger-spelled sentences. Experimental results are presented suggesting that two features of signing affect recognition accuracy: signing frequency, which to a large extent can be accounted for by training a network on the samples of the respective frequency; and the coarticulation effect, which a network fails to identify. As a possible solution to the coarticulation problem, two post-processing algorithms for temporal segmentation are proposed and experimentally evaluated. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> The sign language is a method of communication for the deaf-mute. Articulated gestures and postures of hands and fingers are commonly used for the sign language. This paper presents a system which recognizes the Korean sign language (KSL) and translates it into normal Korean text. A pair of data-gloves is used as the sensing device for detecting motions of hands and fingers. For efficient recognition of gestures and postures, a technique of efficient classification of motions is proposed and a fuzzy min-max neural network is adopted for on-line pattern recognition.
<s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> We present a system for recognising hand-gestures in Sign language. The system works in real-time and uses input from a colour video camera. The user wears different coloured gloves on either hand and colour matching is used to distinguish the hands from each other and from the background. So far the system has been tested in fixed lighting conditions, with the camera a fixed distance from the user. The system is user-dependent. <s> BIB007 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> We describe a video-based analysis system for acquisition and classification of hand-arm motion concerning German sign language. These motions are recorded with a single video camera by use of a modular framegrabber system. Data acquisition as well as motion classification are performed in real-time. A colour coded glove and coloured markers at the elbow and shoulder are used. These markers are segmented from the recorded input images as a first step of image processing. Thereafter, features of these coloured areas are calculated, which are used for determining the 2D positions for each frame and hence the positions of hand and arm. The missing third dimension is derived from a geometric model of the human hand-arm system. The sequence of the position data is converted into a certain representation of motion. Motion is derived from rule-based classification of the performed gesture, which yields a recognition rate of 95%. <s> BIB008 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> A new pattern matching method, the partly-hidden Markov model, is proposed for gesture recognition. The hidden Markov model, which is widely used for time series pattern recognition, can deal with only piecewise stationary stochastic processes.
We solved this problem by introducing the modified second order Markov model, in which the first state is hidden and the second one is observable. As shown by the results of 6 sign-language recognition test, the error rate was improved by 73% compared with normal HMM. <s> BIB009 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This work presents a design for a human computer interface capable of recognizing 25 gestures from the international hand alphabet in real-time. Principal Component Analysis (PCA) is used to extract features from images of gestures. The features represent gesture images in terms of an optimal coordinate system, in which the classes of gestures make up clusters. The system is divided into two parts: an off-line and an on-line part. The feature selection and generation of a classifier is performed off-line. On-line the obtained features and the classifier are used to classify new and unknown gesture images in real-time. Results show that an overall off-line recognition rate averaging 99% on 1500 images is achieved when trained on 1000 other images. The on-line system runs at 14 frames per second. <s> BIB010 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> Reducing or eliminating statistical redundancy between the components of high-dimensional vector data enables a lower-dimensional representation without significant loss of information. Recognizing the limitations of principal component analysis (PCA), researchers in the statistics and neural network communities have developed nonlinear extensions of PCA. This article develops a local linear approach to dimension reduction that provides accurate representations and is fast to compute. We exercise the algorithms on speech and image data, and compare performance with PCA and with neural network implementations of nonlinear PCA. 
We find that both nonlinear techniques can provide more accurate representations than PCA and show that the local linear techniques outperform neural network implementations. <s> BIB011 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This paper documents the recognition method of deciphering Japanese sign language (JSL) using projected images. The goal of the movement recognition is to foster communication between the hearing impaired and people capable of normal speech. We use a stereo camera for recording three-dimensional movements, an image processing board for tracking movements, and a personal computer for an image processor charting the recognition of JSL patterns. This system works by formalizing the space area of the signers according to the characteristics of the human body, determining components such as location and movements, and then recognizing sign language patterns. The system is able to recognize JSL by determining the extent of similarities in the sign field, and does so even when vibrations in hand movements occur and when there are differences in body build. We obtained useful results from recognition experiments on 38 different JSL signs with two signers. <s> BIB012 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.
<s> BIB013 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This paper presents a sign language recognition system which consists of three modules: model-based hand tracking, feature extraction, and gesture recognition using a 3D Hopfield neural network (HNN). The first one uses the Hausdorff distance measure to track shape-variant hand motion, the second one applies the scale and rotation-invariant Fourier descriptor to characterize hand figures, and the last one performs a graph matching between the input gesture model and the stored models by using a 3D modified HNN to recognize the gesture. Our system tests 15 different hand gestures. The experimental results show that our system can achieve above 91% recognition rate, and the recognition process time is about 10 s. The major contribution in this paper is that we propose a 3D modified HNN for gesture recognition which is more reliable than the conventional methods. <s> BIB014 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon. <s> BIB015 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> A large vocabulary sign language interpreter is presented with real-time continuous gesture recognition of sign language using a data glove. 
Sign language, which is usually known as a natural language with formal semantic definitions and syntactic rules, is a large set of hand gestures that are used daily to communicate with the hearing impaired. The most critical problem, end-point detection in a stream of gesture input, is first solved, and then statistical analysis is done according to four parameters in a gesture: posture, position, orientation, and motion. The authors have implemented a prototype system with a lexicon of 250 vocabularies and collected 196 training sentences in Taiwanese Sign Language (TWL). This system uses hidden Markov models (HMMs) for 51 fundamental postures, 6 orientations, and 8 motion primitives. In a signer-dependent way, a sentence of gestures based on these vocabularies can be continuously recognized in real-time, and the average recognition rate is 80.4%. <s> BIB016 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This paper presents a system for the recognition of sign language based on a theory of shape representation using size functions proposed by P. Frosini [5]. Our system consists of three modules: feature extraction, sign representation and sign recognition. The first performs an edge detection operation, the second uses size functions and inertia moments to represent hand signs, and the last uses a neural network to recognize hand gestures. Sign representation is an important step which we will deal with. Unlike previous work [15, 16], a new approach to the representation of hand gestures is proposed, based on size functions. Each sign is represented by means of a feature vector computed from a new pair of moment-based size functions. The work reported here indicates that moment-based size functions can be effectively used for the recognition of sign language even in the presence of shape changes due to differences in hands, position, style of signing, and viewpoint.
<s> BIB017 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> In this paper, a system designed for helping the deaf to communicate with others is presented. Some useful new ideas are proposed in design and implementation. An algorithm based on geometrical analysis for the purpose of extracting features invariant to signer position is presented. An ANN-DP combined approach is employed for segmenting subwords automatically from the data stream of sign signals. To tackle the epenthesis movement problem, a DP-based method has been used to obtain the context-dependent models. Some techniques for system implementation are also given, including fast matching, frame prediction and search algorithms. The implemented system is able to recognize continuous large vocabulary Chinese Sign Language. Experiments show that the proposed techniques are efficient in both recognition speed and recognition performance. <s> BIB018 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> Gesture based applications widely range from replacing the traditional mouse as a position device to virtual reality and communication with the deaf. The article presents a fuzzy rule based approach to spatio-temporal hand gesture recognition. This approach employs a powerful method based on hyperrectangular composite neural networks (HRCNNs) for selecting templates. Templates for each hand shape are represented in the form of crisp IF-THEN rules that are extracted from the values of synaptic weights of the corresponding trained HRCNNs. Each crisp IF-THEN rule is then fuzzified by employing a special membership function in order to represent the degree to which a pattern is similar to the corresponding antecedent part. When an unknown gesture is to be classified, each sample of the unknown gesture is tested by each fuzzy rule.
The accumulated similarity associated with all samples of the input is computed for each hand gesture in the vocabulary, and the unknown gesture is classified as the gesture yielding the highest accumulative similarity. Based on this method, we can implement a small-sized dynamic hand gesture recognition system. Two databases consisting of 90 spatio-temporal hand gestures are utilized for verifying its performance. An encouraging experimental result confirms the effectiveness of the proposed method. <s> BIB019 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> A sign language recognition system is required to use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. We present an adequate local feature recognizer for a sign language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols which correspond to clusters by a clustering technique. The clusters are created from a training set of extracted hand images so that a similar appearance can be classified into the same cluster on an eigenspace. The experimental results indicate that our system can recognize a sign language word even in two-handed and hand-to-hand contact cases. <s> BIB020 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> In this paper, we present a new approach to recognizing hand signs. In this approach, motion recognition (the hand movement) is tightly coupled with spatial recognition (hand shape). The system uses multiclass, multidimensional discriminant analysis to automatically select the most discriminating linear features for gesture classification. A recursive partition tree approximator is proposed to do classification.
This approach, combined with our previous work on hand segmentation, forms a new framework which addresses the three key aspects of hand sign interpretation: hand shape, location, and movement. The framework has been tested to recognize 28 different hand signs. The experimental results show that the system achieved a 93.2% recognition rate for test sequences that had not been used in the training phase. It is shown that our approach provides performance better than that of nearest neighbor classification in the eigensubspace. <s> BIB021 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> Since the human hand is highly articulated and deformable, hand posture recognition is a challenging example in the research on view-independent object recognition. Due to the difficulties of the model-based approach, the appearance-based learning approach is promising to handle large variation in visual inputs. However, the generalization of many proposed supervised learning methods to this problem often suffers from the insufficiency of labeled training data. This paper describes an approach to alleviate this difficulty by adding a large unlabeled training set. Combining supervised and unsupervised learning paradigms, a novel and powerful learning approach, the Discriminant-EM (D-EM) algorithm, is proposed in this paper to handle the case of a small labeled training set. Experiments show that D-EM outperforms many other learning methods. Based on this approach, we implement a gesture interface to recognize a set of predefined gesture commands, and it is also extended to hand detection. This algorithm can also be applied to other object recognition tasks.
<s> BIB022 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> Sign language is the language used by the deaf, which is a comparatively steadier expressive system composed of signs corresponding to postures and motions assisted by facial expression. The objective of sign language recognition research is to "see" the language of the deaf. The integration of sign language recognition and sign language synthesis jointly comprises a "human-computer sign language interpreter", which facilitates the interaction between the deaf and their surroundings. Considering the speed and performance of the recognition system, Cyberglove is selected as the gesture input device in our sign language recognition system, the Semi-Continuous Dynamic Gaussian Mixture Model (SCDGMM) is used as the recognition technique, and a search scheme based on relative entropy is proposed and applied to SCDGMM-based sign word recognition. Compared with the SCDGMM recognizer without the search scheme, the recognition time of the SCDGMM recognizer with the search scheme is reduced almost 15-fold. <s> BIB023 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> In this paper, a 3-layer feedforward network is introduced to recognize the Chinese manual alphabet, and the Single Parameter Dynamic Search Algorithm (SPDS) is used to learn net parameters. In addition, a recognition algorithm for recognizing manual alphabets based on multi-features and multi-classifiers is proposed to promote the recognition performance of finger-spelling. From the experimental results, it is shown that Chinese finger-spelling recognition based on multi-features and multi-classifiers outperforms its recognition based on a single classifier.
<s> BIB024 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> Hand gestures play an important role in communication between people during their daily lives. But the extensive use of hand gestures as a means of communication can be found in sign languages. Sign language is the basic communication method between deaf people. A translator is usually needed when an ordinary person wants to communicate with a deaf one. The work presented in this paper aims at developing a system for automatic translation of gestures of the manual alphabets in the Arabic sign language. In doing so, we have designed a collection of ANFIS networks, each of which is trained to recognize one gesture. Our system does not rely on using any gloves or visual markings to accomplish the recognition job. Instead, it deals with images of bare hands, which allows the user to interact with the system in a natural way. An image of the hand gesture is processed and converted into a set of features that comprises the lengths of some vectors which are selected to span the fingertips' region. The extracted features are rotation, scale, and translation invariant, which makes the system more flexible. The subtractive clustering algorithm and the least-squares estimator are used to identify the fuzzy inference system, and the training is achieved using the hybrid learning algorithm. Experiments revealed that our system was able to recognize the 30 Arabic manual alphabets with an accuracy of 93.55%. <s> BIB025 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This paper is concerned with the automatic recognition of German continuous sign language. For maximum user-friendliness, only a single color video camera is used for image recording. The statistical approach is based on the Bayes decision rule for minimum error rate.
Following speech recognition system designs, which are in general based on subunits, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs will be outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits called fenones. A K-means algorithm is used for the definition of such fenones. The software prototype of the system is currently being evaluated in experiments. <s> BIB026 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> A divide-and-conquer approach is presented for signer-independent continuous Chinese Sign Language (CSL) recognition in this paper. The problem of continuous CSL recognition is divided into the subproblems of isolated CSL recognition. The simple recurrent network (SRN) and the hidden Markov models (HMM) are combined in this approach. The improved SRN is introduced for segmentation of continuous CSL. Outputs of the SRN are regarded as the states of the HMM, and the Lattice Viterbi algorithm is employed to search the best word sequence in the HMM framework. Experimental results show that the SRN/HMM approach has better performance than the standard HMM one. <s> BIB027 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> The accurate classification of hand gestures is crucial in the development of novel hand gesture-based systems designed for human-computer interaction (HCI) and for human alternative and augmentative communication (HAAC). A complete vision-based system, consisting of hand gesture acquisition, segmentation, filtering, representation and classification, is developed to robustly classify hand gestures.
The algorithms in the subsystems are formulated or selected to optimally classify hand gestures. The gray-scale image of a hand gesture is segmented using a histogram thresholding algorithm. A morphological filtering approach is designed to effectively remove background and object noise in the segmented image. The contour of a gesture is represented by a localized contour sequence whose samples are the perpendicular distances between the contour pixels and the chord connecting the end-points of a window centered on the contour pixels. Gesture similarity is determined by measuring the similarity between the localized contour sequences of the gestures. Linear alignment and nonlinear alignment are developed to measure the similarity between the localized contour sequences. Experiments and evaluations on a subset of American Sign Language (ASL) hand gestures show that, by using nonlinear alignment, no gestures are misclassified by the system. Additionally, it is also estimated that real-time gesture classification is possible through the use of a high-speed PC, high-speed digital signal processing chips and code optimization. <s> BIB028 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> We present an algorithm for extracting and classifying two-dimensional motion in an image sequence based on motion trajectories. First, a multiscale segmentation is performed to generate homogeneous regions in each frame. Regions between consecutive frames are then matched to obtain two-view correspondences. Affine transformations are computed from each pair of corresponding regions to define pixel matches. Pixel matches over consecutive image pairs are concatenated to obtain pixel-level motion trajectories across the image sequence. Motion patterns are learned from the extracted trajectories using a time-delay neural network. We apply the proposed method to recognize 40 hand gestures of American Sign Language.
Experimental results show that motion patterns of hand gestures can be extracted and recognized accurately using motion trajectories. <s> BIB029 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This paper deals with the automatic recognition of German signs. The statistical approach is based on the Bayes decision rule for minimum error rate. Following speech recognition system designs, which are in general based on phonemes, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs is outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected, as new signs can be added to the vocabulary database without re-training the existing hidden Markov models (HMMs) for subunits. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits. In first experiments, a recognition accuracy of 92.5% was achieved for 100 signs, which were previously trained. For 50 new signs an accuracy of 81% was achieved without retraining of subunit-HMMs. <s> BIB030 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> Hitherto, the major challenge to sign language recognition is how to develop approaches that scale well with increasing vocabulary size. We present an approach to large vocabulary, continuous Chinese sign language (CSL) recognition that uses phonemes instead of whole signs as the basic units. Since the number of phonemes is limited, HMM-based training and recognition of the CSL signal becomes more tractable and has the potential to recognize enlarged vocabularies. Furthermore, the proposed method facilitates the CSL recognition when the finger-alphabet is blended with gestures. About 2400 phonemes are defined for CSL.
One HMM is built for each phoneme, and then the signs are encoded based on these phonemes. A decoder that uses a tree-structured network is presented. Clustering of the Gaussians on the states, the language model and N-best-pass is used to improve the performance of the system. Experiments on a 5119 sign vocabulary are carried out, and the result is exciting. <s> BIB031 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> A new method to recognize continuous sign language based on hidden Markov model is proposed. According to the dependence of linguistic context, connections between elementary subwords are classified as strong connection and weak connection. The recognition of strong connection is accomplished with the aid of subword trees, which describe the connection of subwords in each sign language word. In weak connection, the main problem is how to extract the best matched subwords and find their end-points with little help of context information. The proposed method improves the summing process of the Viterbi decoding algorithm which is constrained in every individual model, and compares the end score at each frame to find the ending frame of a subword. Experimental results show an accuracy of 70% for continuous sign sentences that comprise no more than 4 subwords. <s> BIB032 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> Principal Component Analysis (PCA) and Multiple Discriminant Analysis (MDA) have long been used for the appearance-based hand posture recognition. In this paper, we propose a novel PCA/MDA scheme for hand posture recognition. Unlike other PCA/MDA schemes, the PCA layer acts as a crude classification. Since posture alone cannot provide sufficient discriminating information, each input pattern will be given a likelihood of being in the nodes of PCA layers, instead of a strict division.
Based on the Expectation-Maximization (EM) algorithm, we introduce three methods to estimate the parameters for this crude classification during training. The experiments on a 110-sign vocabulary show a significant improvement compared with the global PCA/MDA. <s> BIB033 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> The paper presents a portable system and method for recognizing the 26 hand shapes of the American Sign Language alphabet, using a novel glove-like device. Two additional signs, 'space' and 'enter', are added to the alphabet to allow the user to form words or phrases and send them to a speech synthesizer. Since the hand shape for a letter varies from one signer to another, this is a 28-class pattern recognition system. A three-level hierarchical classifier divides the problem into "dispatchers" and "recognizers." After reducing pattern dimension from ten to three, the projection of class distributions onto horizontal planes makes it possible to apply simple linear discrimination in 2D, and Bayes' Rule in those cases where classes had features with overlapped distributions. Twenty-one out of 26 letters were recognized with 100% accuracy; the worst case, letter U, achieved 78%.
We ground our efforts in a particular scenario, that of a deaf individual seeking an apartment, and discuss the system requirements and our interface for this scenario. Finally, we describe initial recognition results of 94% accuracy on a 141 sign vocabulary signed in phrases of four signs using a one-handed glove-based system and hidden Markov models (HMMs). <s> BIB035 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This work presents a hierarchical approach to recognize isolated 3-D hand gesture trajectories for signing exact English (SEE). SEE hand gestures can be periodic as well as non-periodic. We first differentiate between periodic and non-periodic gestures followed by recognition of individual gestures. After periodicity detection, non-periodic trajectories are classified into 8 classes and periodic trajectories are classified into 4 classes. A Polhemus tracker is used to provide the input data. Periodicity detection is based on Fourier analysis and hand trajectories are recognized by vector quantization principal component analysis (VQPCA). The average periodicity detection accuracy is 95.9%. The average recognition rates with VQPCA for non-periodic and periodic gestures are 97.3% and 97.0% respectively. In comparison, k-means clustering yielded 87.0% and 85.1%, respectively. <s> BIB036 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This work discusses an approach for capturing and translating isolated gestures of American Sign Language into spoken and written words. The instrumented part of the system combines an AcceleGlove and a two-link arm skeleton. Gestures of the American Sign Language are broken down into unique sequences of phonemes called poses and movements, recognized by software modules trained and tested independently on volunteers with different hand sizes and signing ability.
Recognition rates of independent modules reached up to 100% for 42 postures, orientations, 11 locations and 7 movements using linear classification. The overall sign recognizer was tested using a subset of the American Sign Language dictionary comprising 30 one-handed signs, achieving 98% accuracy. The system proved to be scalable: when the lexicon was extended to 176 signs and tested without retraining, the accuracy was 95%. This represents an improvement over classification based on hidden Markov models (HMMs) and neural networks (NNs). <s> BIB037
|
Neural Networks and Variants. Multilayer perceptrons (MLP) are often employed for classifying handshape ( BIB005 , BIB018 , BIB017 , BIB001 , BIB013 , BIB004 , BIB024 ). Waldron and Kim BIB004 , and Vamplew and Adams BIB013 additionally used MLPs to classify the hand location, orientation, and movement type from tracker data (see Fig. 5a ). Other neural network (NN) variants include: Fuzzy Min-Max NNs ( BIB002 ) in BIB006 , Adaptive Neuro-Fuzzy Inference System Networks ( BIB003 ) in BIB025 , and Hyperrectangular Composite NNs in BIB019 , all for handshape classification; and a 3D Hopfield NN in BIB014 for sign classification. Time-series data, such as movement trajectories and sign gestures, consist of many data points and have variable temporal lengths. NNs designed for classifying static data often do not utilize all the information available in the data points. For example, in classifying movement type, BIB004 used the displacement vectors at the start and midpoint of a gesture as input to the MLP, while BIB013 used only the accumulated displacement in each of the three primary axes of the tracker. Yang et al. BIB029 used Time-Delay NNs, which were designed for temporal processing, to classify signs from hand pixel motion trajectories. As a small moving window of gesture data from consecutive time frames is used as input, only a small number of weights need to be trained (in contrast, HMMs often require estimation of many model parameters). The input data window eventually covers all the data points in the sequence, but a standard temporal length is still required. Murakami and Taguchi BIB001 used Recurrent NNs, which can take into account temporal context without requiring a fixed temporal length. They considered a sign word to be recognized when the output node values remained unchanged over a heuristically determined period of time. Hidden Markov models (HMMs) and variants.
Several works classify sign gestures using HMMs, which are widely used in continuous speech recognition. HMMs are able to process time-series data with variable temporal lengths and discount timing variations through the use of skipped states and same-state transitions. HMMs can also implicitly segment continuous speech into individual words: trained word or phoneme HMMs are chained together into a branching tree-structured network, and Viterbi decoding is used to find the most probable path through the network, thereby recovering both the word boundaries and the sequence. This idea has also been used for recognition of continuous signs, using various techniques to increase computational efficiency (some of which originated in speech recognition research ). These techniques include language modeling, beam search and network pruning ( BIB026 , BIB030 , BIB018 , BIB031 ), N-best pass ( BIB031 ), fast matching ( BIB018 ), frame predicting ( BIB018 ), and clustering of Gaussians ( BIB031 ). Language models that have been used include unigram and bigram models in BIB018 , , BIB031 , as well as a strongly constrained parts-of-speech grammar in BIB035 , BIB015 . As an alternative to the tree-structured network approach, Liang and Ouhyoung BIB016 and Fang et al. BIB027 explicitly segmented sentences before classification by HMMs (Section 3.4.1). To reduce training data and enable scaling to large vocabularies, some researchers define sequential subunits, similar to phonetic acoustic models in speech, making every sign a concatenation of HMMs which model subunits. Based on an unsupervised method similar to one employed in speech recognition ( ), Bauer and Kraiss BIB026 defined 10 subunits for a vocabulary of 12 signs using k-means clustering. Later, a bootstrap method BIB030 was introduced to get initial estimates for subunit HMM parameters and obtain the sign transcriptions. Recognition accuracy on 100 isolated signs using 150 HMM subunits was 92.5 percent.
Encouragingly, recognition accuracy of 50 new signs without retraining the subunit HMMs was 81.0 percent. Vogler (Fig. 6a), Yuan et al. BIB032 and Wang et al. BIB031 defined subunits linguistically instead of using unsupervised learning. BIB031 achieved 86.2 percent word accuracy in continuous sign recognition for a large vocabulary of 5,119 signs with 2,439 subunit HMMs. Fig. 6b ( BIB031 ) shows a tree structure built from these subunits to form sign words. Kobayashi and Haruyama BIB009 argue that HMMs, which are meant to model piecewise stationary processes, are ill-suited for modeling gesture features which are always transient, and propose the Partly Hidden Markov Model. Here, the observation node probability is dependent on two states, one hidden and the other observable. Experimental results for isolated sign recognition showed a 73 percent improvement in error rate over HMMs. However, the vocabulary set of six Japanese Sign Language (JSL) signs is too small to draw concrete conclusions. Principal Component Analysis (PCA) and Multiple Discriminant Analysis (MDA). Birk et al. BIB010 and Imagawa et al. BIB020 both reduced dimensionality of segmented hand images by PCA before classification. Imagawa et al. BIB020 applied an unsupervised approach where training images were clustered in eigenspace and test images were classified to the cluster identity which gave the maximum likelihood score. Kong and Ranganath BIB036 classified 11 3D movement trajectories by performing periodicity detection using Fourier analysis, followed by Vector Quantization Principal Component Analysis BIB011 . Cui and Weng BIB021 used a recursive partition tree and applied PCA and MDA operations at each node. This method was able to achieve nonlinear classification boundaries in the feature space of 28 ASL signs. Deng and Tsui BIB033 found that when the entire data set is used for MDA, the performance degrades with increasing number of classes.
To overcome this, and to avoid strict division of data into partitions (as in BIB021 ), they applied PCA and then performed crude classification into clusters with Gaussian distributions before applying MDA locally. The final classification of an input vector into one of 110 ASL signs took into account the likelihood of being in each of the clusters. Wu and Huang BIB022 aimed to overcome the difficulty of getting good results from MDA without a large labeled training data set. A small labeled data set and a large unlabeled data set were both modeled by the same mixture density, and a modified Discriminant-EM algorithm was used to estimate the mixture density parameters. A classifier trained with 10,000 unlabeled samples and 140 labeled samples of segmented hand images classified 14 handshapes with 92.4 percent accuracy, including test images where the hands had significant out-of-plane rotations. The above works mainly dealt with handshape classification ( BIB010 , BIB022 ) or classification of signs based on just the beginning and ending handshape ( BIB033 , BIB020 ). In BIB036 and BIB021 , which classified movement trajectory and signs, respectively, mapping to a fixed temporal length was required. Other methods. Some of the other methods that have been applied for classification of handshape are: decision trees ( BIB034 , BIB037 ), nearest-neighbor matching ( ), image template matching ( BIB028 , BIB007 ), and correlation with phase-only filters from discrete Fourier transforms ( ). Rule-based methods based on dictionary entries or decision trees have also been applied to classifying motion trajectories or signs ( BIB008 , , , BIB006 , BIB012 , ). Classification is by template matching with the ideal sequence of motion directions, or by finding features (like concavity or change in direction) that characterize each motion type. The rules are usually hand-coded and, thus, may not generalize well.
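The template-matching style of rule-based trajectory classification can be illustrated with a small sketch. All sign/motion names and direction templates below are invented for illustration, not taken from any surveyed system: a 2-D hand trajectory is quantized into eight compass directions, and the observed direction sequence is matched against hand-coded ideal sequences by edit distance.

```python
import math

# Hypothetical lexicon: each motion type is an ideal sequence of
# quantized directions (0-7, counter-clockwise from "east").
TEMPLATES = {
    "circle": [0, 2, 4, 6],
    "up-down": [2, 6],
}

def quantize(trajectory):
    """Map frame-to-frame displacements to 8 compass directions,
    collapsing consecutive repeats."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)
        d = round(angle / (math.pi / 4)) % 8
        if not dirs or dirs[-1] != d:
            dirs.append(d)
    return dirs

def classify(trajectory):
    """Template matching: pick the motion type whose ideal direction
    sequence best matches the observed one (minimum edit distance)."""
    observed = quantize(trajectory)

    def edit(a, b):
        # single-row Levenshtein distance
        dp = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            prev, dp[0] = dp[0], i
            for j, cb in enumerate(b, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                         prev + (ca != cb))
        return dp[-1]

    return min(TEMPLATES, key=lambda m: edit(observed, TEMPLATES[m]))
```

As the survey notes, such hand-coded templates are easy to write but brittle: any motion type not anticipated in the template set is forced into the nearest existing class.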
Wu and Gao BIB023 presented the Semicontinuous Dynamic Gaussian Mixture Model as an alternative to HMMs for processing temporal data, with the advantage of faster training time and fewer model parameters. This model was applied to recognizing sign words from a vocabulary of 274, but only using finger joint angle data (from two Cybergloves). They achieved fast recognition (0.04s per sign) and 97.4 percent accuracy.
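The self-organized subunit idea discussed in this section (k-means-derived "fenones" in the spirit of BIB026 ) can be reduced to a toy sketch. This is not any surveyed system's implementation: feature frames are clustered with plain k-means, and a sign is then transcribed as the sequence of cluster labels its frames pass through, which is what makes the subunit inventory reusable across new signs.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over feature vectors (stdlib only)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in points:
            buckets[min(range(k), key=lambda c: dist2(p, centers[c]))].append(p)
        for i, b in enumerate(buckets):
            if b:  # keep the old center if its bucket is empty
                centers[i] = tuple(sum(x) / len(b) for x in zip(*b))
    return centers

def transcribe(frames, centers):
    """Label each frame with its nearest 'subunit' (cluster index) and
    collapse consecutive repeats, giving a subunit transcription."""
    labels = [min(range(len(centers)), key=lambda c: dist2(f, centers[c]))
              for f in frames]
    return [l for i, l in enumerate(labels) if i == 0 or labels[i - 1] != l]
```

In the surveyed systems each such subunit would be modeled by its own HMM; here the transcription step alone shows how a new sign can be encoded without retraining the subunit inventory.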
|
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Schemes for Integrating Component-Level Results <s> This paper describes a method of classifying single view deaf-and-mute sign language motion images. We suppose the sign language word is composed of a time sequence of units called cheremes. The chereme is described by handshape, movement, and location of the hand, which can be said to express the 3-D features of the sign language. First, a dictionary for recognizing the sign language is made based on the cheremes. Then, the macro 2-D features of the location of a hand and its movement are extracted from the red component of the input color image sequence. Further, the micro 2-D features of the shape of the hand are also extracted if necessary. The 3-D feature descriptions of the dictionary are converted into 2-D image features, and the input sign language image is classified according to the extracted features of the 2-D image. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Schemes for Integrating Component-Level Results <s> Conventional fuzzy control systems using PID (proportional-integral-derivative) control and their limitations are discussed. Ways to incorporate adaptivity are examined. The functioning of adaptive fuzzy logic and adaptive fuzzy control systems is described. The use of rule weights is explained. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Schemes for Integrating Component-Level Results <s> The design and evaluation of a two-stage neural network which can recognize isolated ASL signs is given. The input to this network is the hand shape and position data obtained from a DataGlove mounted with a Polhemus sensor.
The first level consists of four backpropagation neural networks which can recognize the sign language phonology, namely, the 36 hand shapes, 10 locations, 11 orientations, and 11 hand movements. The recognized phonemes from the beginning, middle, and end of the sign are fed to the second stage which recognizes the actual signs. Both backpropagation and Kohonen's self-organizing neural networks were used to compare the performance and the expandability of the learned vocabulary. In the current work, six signers with differing hand sizes signed 14 signs which included hand shape, position, and motion fragile and triple robust signs. When a backpropagation network was used for the second stage, the results show that the network was able to recognize these signs with an overall accuracy of 86%. Further, the recognition results were linearly dependent on the size of the finger in relation to the metacarpophalangeal joint and the total length of the hand. When the second stage was a Kohonen's self-organizing network, the network could not only recognize the signs with 84% accuracy, but also expand its learned vocabulary through relabeling. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Schemes for Integrating Component-Level Results <s> The sign language is a method of communication for the deaf-mute. Articulated gestures and postures of hands and fingers are commonly used for the sign language. This paper presents a system which recognizes the Korean sign language (KSL) and translates it into a normal Korean text. A pair of data-gloves are used as the sensing device for detecting motions of hands and fingers. For efficient recognition of gestures and postures, a technique of efficient classification of motions is proposed and a fuzzy min-max neural network is adopted for on-line pattern recognition.
<s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Schemes for Integrating Component-Level Results <s> This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Schemes for Integrating Component-Level Results <s> A large vocabulary sign language interpreter is presented with real-time continuous gesture recognition of sign language using a data glove. Sign language, which is usually known as a set of natural language with formal semantic definitions and syntactic rules, is a large set of hand gestures that are daily used to communicate with the hearing impaired. The most critical problem, end-point detection in a stream of gesture input, is first solved, and then statistical analysis is done according to four parameters in a gesture: posture, position, orientation, and motion. The authors have implemented a prototype system with a lexicon of 250 vocabularies and collected 196 training sentences in Taiwanese Sign Language (TWL). This system uses hidden Markov models (HMMs) for 51 fundamental postures, 6 orientations, and 8 motion primitives. In a signer-dependent way, a sentence of gestures based on these vocabularies can be continuously recognized in real-time and the average recognition rate is 80.4%. <s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Schemes for Integrating Component-Level Results <s> A sign language recognition system is required to use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation.
We present an adequate local feature recognizer for a sign language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols which correspond to clusters obtained by a clustering technique. The clusters are created from a training set of extracted hand images so that a similar appearance can be classified into the same cluster on an eigenspace. The experimental results indicate that our system can recognize a sign language word even in two-handed and hand-to-hand contact cases. <s> BIB007 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Schemes for Integrating Component-Level Results <s> To automatically interpret Japanese sign language (JSL), the recognition of signed words must be more accurate and the effects of extraneous gestures removed. We describe the parameters and the algorithms used to accomplish this. We experimented with 200 JSL sentences and demonstrated that recognition performance could be considerably improved. <s> BIB008 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Schemes for Integrating Component-Level Results <s> Gesture based applications widely range from replacing the traditional mouse as a position device to virtual reality and communication with the deaf. The article presents a fuzzy rule based approach to spatio-temporal hand gesture recognition. This approach employs a powerful method based on hyperrectangular composite neural networks (HRCNNs) for selecting templates. Templates for each hand shape are represented in the form of crisp IF-THEN rules that are extracted from the values of synaptic weights of the corresponding trained HRCNNs. Each crisp IF-THEN rule is then fuzzified by employing a special membership function in order to represent the degree to which a pattern is similar to the corresponding antecedent part.
When an unknown gesture is to be classified, each sample of the unknown gesture is tested by each fuzzy rule. The accumulated similarity associated with all samples of the input is computed for each hand gesture in the vocabulary, and the unknown gesture is classified as the gesture yielding the highest accumulative similarity. Based on the method we can implement a small-sized dynamic hand gesture recognition system. Two databases which consisted of 90 spatio-temporal hand gestures are utilized for verifying its performance. An encouraging experimental result confirms the effectiveness of the proposed method. <s> BIB009 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Schemes for Integrating Component-Level Results <s> In this paper, a system designed for helping the deaf to communicate with others is presented. Some useful new ideas are proposed in design and implementation. An algorithm based on geometrical analysis for the purpose of extracting invariant feature to signer position is presented. An ANN–DP combined approach is employed for segmenting subwords automatically from the data stream of sign signals. To tackle the epenthesis movement problem, a DP-based method has been used to obtain the context-dependent models. Some techniques for system implementation are also given, including fast matching, frame prediction and search algorithms. The implemented system is able to recognize continuous large vocabulary Chinese Sign Language. Experiments show that proposed techniques in this paper are efficient on either recognition speed or recognition performance. 
<s> BIB010 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Schemes for Integrating Component-Level Results <s> Automatic gesture recognition systems generally require two separate processes: a motion sensing process where some motion features are extracted from the visual input; and a classification process where the features are recognised as gestures. We have developed the Hand Motion Understanding (HMU) system that uses the combination of a 3D model-based hand tracker for motion sensing and an adaptive fuzzy expert system for motion classification. The HMU system understands static and dynamic hand signs of the Australian Sign Language (Auslan). This paper presents the hand tracker that extracts 3D hand configuration data with 21 degrees-of-freedom (DOFs) from a 2D image sequence that is captured from a single viewpoint, with the aid of a colour-coded glove. Then the temporal sequence of 3D hand configurations detected by the tracker is recognised as a sign by an adaptive fuzzy expert system. The HMU system was evaluated with 22 static and dynamic signs. Before training the HMU system achieved 91% recognition, and after training it achieved over 95% recognition. <s> BIB011 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Schemes for Integrating Component-Level Results <s> This work discusses an approach for capturing and translating isolated gestures of American Sign Language into spoken and written words. The instrumented part of the system combines an AcceleGlove and a two-link arm skeleton. Gestures of the American Sign Language are broken down into unique sequences of phonemes called poses and movements, recognized by software modules trained and tested independently on volunteers with different hand sizes and signing ability. Recognition rates of independent modules reached up to 100% for 42 postures, orientations, 11 locations and 7 movements using linear classification. 
The overall sign recognizer was tested using a subset of the American Sign Language dictionary comprised by 30 one-handed signs, achieving 98% accuracy. The system proved to be scalable: when the lexicon was extended to 176 signs and tested without retraining, the accuracy was 95%. This represents an improvement over classification based on hidden Markov models (HMMs) and neural networks (NNs). <s> BIB012
|
A common approach is to hand-code the categories of handshape, hand orientation, hand location, and movement type that make up each sign in the vocabulary, forming a lexicon of sign definitions. Classifying the sign label from component-level results is then performed by comparing the ideal lexicon categories with the corresponding recognized components ( BIB012 , BIB007 , BIB004 , BIB008 , BIB009 , BIB001 , BIB005 ). Various methods of performing this matching operation have been implemented; for example, Vamplew and Adams BIB005 employed a nearest-neighbor algorithm with a heuristic distance measure for matching sign word candidates. In Sagawa and Takeuchi BIB008 , the dictionary entries defined the mean and variance (which were learned from training examples) of handshape, orientation, and motion type attributes as well as the degree of overlap in the timing of these components. Candidate sign words were then given a probability score based on the actual values of the component attributes in the input gesture data. In Su BIB009 , scoring was based on an accumulated similarity measure of input handshape data from the first and last 10 sample vectors of a gesture. A major assumption was that signs can be distinguished based on just the starting and ending handshapes. Liang and Ouhyoung BIB006 classified all four gesture components using HMMs. Classification at the sign and sentence level was then accomplished using dynamic programming, taking into account the probability of the handshape, location, orientation, and movement components according to dictionary definitions as well as unigram and bigram probabilities of the sign gestures. Methods based on HMMs include Gao et al. BIB010 , where HMMs model individual sign words while observations of the HMM states correspond to component-level labels for position, orientation, and handshape, which were classified by MLPs. 
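The lexicon-lookup scheme can be sketched in a few lines. The sign entries and component category names below are made up for illustration: each sign is defined by its ideal handshape, location, orientation, and movement categories, and a tuple of recognized component labels is matched by counting agreements, in the spirit of the nearest-neighbor matching in BIB005 .

```python
# Hypothetical lexicon of sign definitions in terms of component
# categories: (handshape, location, orientation, movement).
LEXICON = {
    "THANK-YOU": ("flat", "chin", "palm-in", "forward"),
    "PLEASE":    ("flat", "chest", "palm-in", "circular"),
    "YES":       ("fist", "neutral", "palm-down", "nod"),
}

def match_sign(components):
    """Nearest-neighbour matching of recognized component labels
    against the lexicon: score = number of agreeing components."""
    def score(sign):
        return sum(a == b for a, b in zip(components, LEXICON[sign]))
    best = max(LEXICON, key=score)
    return best, score(best)
```

Note the property the survey highlights: adding a new sign only requires a new lexicon entry, with no retraining of the component-level classifiers.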
Vogler proposed the Parallel HMM algorithm to model gesture components and recognize continuous signing in sentences. The right hand's shape, movement, and location, along with the left hand's movement and location, were represented by separate HMM channels which were trained with relevant data and features. For recognition, individual HMM networks were built in each channel and a modified Viterbi decoding algorithm searched through all the networks in parallel. Path probabilities from each network that went through the same sequence of words were combined (Fig. 5b) . Tanibata et al. proposed a similar scheme where output probabilities from HMMs which model the right and left hand's gesture data were multiplied together for isolated word recognition. Waldron and Kim BIB003 combined component-level results (from handshape, hand location, orientation, and movement type classification) with NNs, experimenting with MLPs as well as Kohonen self-organizing maps. The self-organizing map performed slightly worse than the MLP (83 percent versus 86 percent sign recognition accuracy), but it was possible to relabel the map to recognize new signs without requiring additional training data (experimental results were given for relabeling to accommodate two new signs). In an adaptive fuzzy expert system ( BIB002 ) by Holden and Owens BIB011 , signs were classified based on start and end handshapes and finger motion, using triangular fuzzy membership functions whose parameters were found from training data. An advantage of decoupling component-level and sign-level classification is that fewer classes would need to be distinguished at the component level. This conforms with the findings of sign linguists that there are a small, limited number of categories in each of the gesture components which can be combined to form a large number of sign words.
For example, in Liang and Ouhyoung BIB006 , the largest number of classes at the component level was 51 categories (for handshape), which is smaller than the 71 to 250 sign words that were recognized. Though some of these works may have small vocabularies (e.g., 22 signs in ), their focus, nevertheless, is on developing frameworks that allow scaling to large vocabularies. In general, this approach enables the component-level classifiers to be simpler, with fewer parameters to be learned, due to the smaller number of classes to be distinguished and the reduced input dimensions (since only the relevant component features are input to each classifier). In the works where sign-level classification was based on a lexicon of sign definitions, only training data for component-level classification was required, and not at the whole-sign level ( BIB012 , BIB004 , BIB006 , BIB009 , BIB001 , BIB005 , ). Furthermore, new signs can be recognized without retraining the component-level classifiers, if they cover all categories of components that may appear in signs. For example, the system in Hernandez-Rebollar et al. BIB012 , trained to classify 30 signs, can be expanded to classify 176 new signs by just adding their descriptions into the lexicon. In addition, approaches that do not require any training at the sign level may be the most suitable for dealing with inflections and other grammatical processes in signing. As described in Section 2.2 and Appendix A (which can be found at www.computer.org/publications/dlib), the citation form of a sign can be systematically modified in one or more of its components to result in an inflected or derived sign form. This increases the vocabulary size to many times the number of lexical signs, with a correspondingly increased data requirement if training is required at the sign level.
However, there is a limited number of ways in which these grammatical processes occur; hence, much less training data would be required if these processes could be recognized at the component level.
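The lexicon-driven, component-to-sign scheme described above can be illustrated with a minimal sketch; the component categories, sign entries, and function names below are hypothetical examples for illustration, not drawn from any of the surveyed systems.

```python
# Sketch of lexicon-based sign-level classification: component-level
# classifiers emit category labels, and the sign is chosen by matching
# those labels against a lexicon of sign definitions. The entries here
# are invented for illustration.

LEXICON = {
    "HELLO": {"handshape": "B", "location": "forehead", "movement": "arc"},
    "THANK-YOU": {"handshape": "B", "location": "chin", "movement": "forward"},
}

def classify_sign(components, lexicon=LEXICON):
    """Return the lexicon sign whose definition best matches the
    component-level classifier outputs (simple match-count score)."""
    def score(definition):
        return sum(components.get(k) == v for k, v in definition.items())
    return max(lexicon, key=lambda sign: score(lexicon[sign]))

# A new sign is added by extending the lexicon only; the component-level
# classifiers are not retrained.
LEXICON["PLEASE"] = {"handshape": "B", "location": "chest", "movement": "circle"}

print(classify_sign({"handshape": "B", "location": "chest", "movement": "circle"}))
# → PLEASE
```

Because sign-level classification reduces to a lexicon lookup over component labels, vocabulary expansion (as in the 30-to-176-sign expansion of Hernandez-Rebollar et al. BIB012 ) amounts to adding entries rather than collecting new whole-sign training data.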
|
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Main Issues in the Classification of Sign Gestures <s> We explore recognition implications of understanding gesture communication, having chosen American sign language as an example of a gesture language. An instrumented glove and specially developed software have been used for data collection and labeling. We address the problem of recognizing dynamic signing, i.e. signing performed at natural speed. Two neural network architectures have been used for recognition of different types of finger-spelled sentences. Experimental results are presented suggesting that two features of signing affect recognition accuracy: signing frequency which to a large extent can be accounted for by training a network on the samples of the respective frequency; and coarticulation effect which a network fails to identify. As a possible solution to coarticulation problem two post-processing algorithms for temporal segmentation are proposed and experimentally evaluated. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Main Issues in the Classification of Sign Gestures <s> Hand gesture segmentation is a difficult problem that must be overcome if gestural interfaces are to be practical. This paper sets out a recognition-led approach that focuses on the actual recognition techniques required for gestural interaction. Within this approach, a holistic view of the gesture input data stream is taken that considers what links the low-level and high-level features of gestural communication. Using this view, a theory is proposed that a state of high hand tension can be used as a gesture segmentation cue for certain classes of gestures. A model of hand tension is developed and then applied successfully to segment two British Sign Language sentence fragments. 
<s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Main Issues in the Classification of Sign Gestures <s> This work presents a design for a human computer interface capable of recognizing 25 gestures from the international hand alphabet in real-time. Principal Component Analysis (PCA) is used to extract features from images of gestures. The features represent gesture images in terms of an optimal coordinate system, in which the classes of gestures make up clusters. The system is divided into two parts: an off-line and an on-line part. The feature selection and generation of a classifier is performed off-line. On-line the obtained features and the classifier are used to classify new and unknown gesture images in real-time. Results show that an overall off-line recognition rate averaging 99% on 1500 images is achieved when trained on 1000 other images. The on-line system runs at 14 frames per second. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Main Issues in the Classification of Sign Gestures <s> A large vocabulary sign language interpreter is presented with real-time continuous gesture recognition of sign language using a data glove. Sign language, which is usually known as a set of natural language with formal semantic definitions and syntactic rules, is a large set of hand gestures that are daily used to communicate with the hearing impaired. The most critical problem, end-point detection in a stream of gesture input is first solved and then statistical analysis is done according to four parameters in a gesture: posture, position, orientation, and motion. The authors have implemented a prototype system with a lexicon of 250 vocabularies and collected 196 training sentences in Taiwanese Sign Language (TWL). This system uses hidden Markov models (HMMs) for 51 fundamental postures, 6 orientations, and 8 motion primitives. 
In a signer-dependent way, a sentence of gestures based on these vocabularies can be continuously recognized in real-time and the average recognition rate is 80.4%. <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Main Issues in the Classification of Sign Gestures <s> To automatically interpret Japanese sign language (JSL), the recognition of signed words must be more accurate and the effects of extraneous gestures removed. We describe the parameters and the algorithms used to accomplish this. We experimented with 200 JSL sentences and demonstrated that recognition performance could be considerably improved. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Main Issues in the Classification of Sign Gestures <s> A divide-and-conquer approach is presented for signer-independent continuous Chinese Sign Language (CSL) recognition in this paper. The problem of continuous CSL recognition is divided into the subproblems of isolated CSL recognition. The simple recurrent network (SRN) and the hidden Markov models (HMM) are combined in this approach. The improved SRN is introduced for segmentation of continuous CSL. Outputs of SRN are regarded as the states of HMM, and the Lattice Viterbi algorithm is employed to search the best word sequence in the HMM framework. Experimental results show SRN/HMM approach has better performance than the standard HMM one. <s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Main Issues in the Classification of Sign Gestures <s> In this paper 3-layer feedforward network is introduced to recognize Chinese manual alphabet, and Single Parameter Dynamic Search Algorithm (SPDS) is used to learn net parameters. In addition, a recognition algorithm for recognizing manual alphabets based on multi-features and multi-classifiers is proposed to promote the recognition performance of finger-spelling. 
From the experimental results, it is shown that Chinese finger-spelling recognition based on multi-features and multi-classifiers outperforms recognition based on a single classifier. <s> BIB007
|
The success of the works reported in the literature should not be measured just in terms of recognition rate but also in terms of how well they deal with the main issues involved in classification of sign gestures. In the following, we consider issues which apply to both vision-based and direct-measure device approaches. For a discussion of imaging environment constraints and restrictions, and feature estimation issues pertaining to vision-based approaches, the reader is referred to Sections 3.1 and 3.2. Tables 2 and 3 reveal that most of the works deal with isolated sign recognition, where the user either performs the signs one at a time, starting and ending at a neutral position, or with exaggerated pauses, or while applying an external switch between each word. Extending isolated recognition to continuous signing requires automatic detection of word boundaries so that the recognition algorithm can be applied to the segmented signs. As such, valid sign segments, where the movement trajectory, handshape, and orientation are meaningful parts of the sign, need to be distinguished from movement epenthesis segments, where the hand(s) are merely transitioning from the ending location and hand configuration of one sign to the start of the next sign. The general approach for explicit segmentation uses a subset of features from gesture data as cues for boundary detection. Sagawa and Takeuchi BIB005 considered a minimum in the hand velocity, a minimum in the differential of glove finger flexure values, and a large change in motion trajectory angle as candidate points for word boundaries. Transition periods and valid word segments were further distinguished by calculating the ratio between the minimum acceleration value and maximum velocity in the segment: a small ratio indicated a word; otherwise, a transition. In experiments with 100 JSL sentences, 80.2 percent of the word segments were correctly detected, while 11.2 percent of the transition segments were misjudged as words. 
In contrast, Liang and Ouhyoung BIB004 considered a sign gesture as consisting of a sequence of handshapes connected by motion and assumed that valid sign words are contained in segments where the time-varying parameters in finger flexure data dropped below a threshold. The handshape, orientation, location, and movement type in these segments were classified, while sections with large finger movement were ignored. The limitation of these methods, which use a few gesture features as cues, arises from the difficulty in specifying rules for determining sign boundaries that would apply in all instances. For example, BIB005 assumed that sign words are contained in segments where there is significant hand displacement and finger movement, while boundary points are characterized by a low value in those parameters. However, in general, this may not always occur at sign boundaries. On the other hand, the method in BIB004 might miss important data for signs that involve a change in handshape co-occurring with a meaningful movement trajectory. A promising approach was proposed in Fang et al. BIB006 , where the appropriate features for segmentation cues were automatically learned by a self-organizing map from finger flexure and tracker position data. The self-organizing map output was input to a Recurrent NN, which processed data in temporal context to label data frames as the left boundary, right boundary, or interior of a segment with 98.8 percent accuracy. Transient frames near segment boundaries were assumed to be movement epenthesis and ignored. A few researchers considered segmentation in fingerspelling sequences, where the task is to mark points where valid handshapes occur. Kramer and Leifer and Wu and Gao BIB007 performed handshape recognition during segments where there was a drop in the velocity of glove finger flexure data. Erenshteyn et al. 
BIB001 extracted segments by low-pass filtering and derivative analysis and discarded transitions and redundant frames by performing recognition only at the midpoint of these segments. Segmentation accuracy was 88-92 percent. Harling and Edwards BIB002 used the sum of finger tension values as a cue: a maximum indicated a valid handshape, while a minimum indicated a transition. The finger tension values were calculated as a function of finger-bend values. Birk et al. BIB003 recognized fingerspelling from image sequences and used frame differencing to discard image frames with large motion.
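The cue-based explicit segmentation strategies above can be sketched in a few lines; the threshold, the ratio test, and the synthetic speed profile are illustrative assumptions, not values taken from the cited works.

```python
def candidate_boundaries(speed):
    """Indices of local minima in a hand-speed profile; these serve as
    candidate word-boundary points in cue-based segmentation."""
    return [i for i in range(1, len(speed) - 1)
            if speed[i] <= speed[i - 1] and speed[i] < speed[i + 1]]

def looks_like_word(speed, accel, ratio_thresh=0.2):
    """Word/transition test in the spirit of the minimum-acceleration to
    maximum-velocity ratio cue: a small ratio suggests a valid word."""
    ratio = min(abs(a) for a in accel) / max(abs(v) for v in speed)
    return ratio < ratio_thresh

# Synthetic profile: two movement "humps" separated by a dip at index 4.
speed = [0.1, 0.6, 0.9, 0.6, 0.1, 0.6, 0.9, 0.6, 0.1]
print(candidate_boundaries(speed))  # → [4]
```

As the survey notes, such hand-tuned cues fail when a boundary does not coincide with low velocity; the learned segmentation of Fang et al. BIB006 replaces these rules with features discovered from data.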
|
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Continuous Signing in Sentences <s> This paper is concerned with the video-based recognition of signs. Concentrating on the manual parameters of sign language, the system aims for the signer dependent recognition of 262 different signs taken from Sign Language of the Netherlands. For Hidden Markov Modelling a sign is considered a doubly stochastic process, represented by an unobservable state sequence. The observations emitted by the states are regarded as feature vectors, that are extracted from video frames. This work deals with three topics: Firstly the recognition of isolated signs, secondly the influence of variations of the feature vector on the recognition rate and thirdly an approach for the recognition of connected signs. The system achieves recognition rates up to 94% for isolated signs and 73% for a reduced vocabulary of connected signs. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Continuous Signing in Sentences <s> We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Continuous Signing in Sentences <s> In this paper, a system designed for helping the deaf to communicate with others is presented. Some useful new ideas are proposed in design and implementation. An algorithm based on geometrical analysis for the purpose of extracting invariant feature to signer position is presented. 
An ANN–DP combined approach is employed for segmenting subwords automatically from the data stream of sign signals. To tackle the epenthesis movement problem, a DP-based method has been used to obtain the context-dependent models. Some techniques for system implementation are also given, including fast matching, frame prediction and search algorithms. The implemented system is able to recognize continuous large vocabulary Chinese Sign Language. Experiments show that proposed techniques in this paper are efficient on either recognition speed or recognition performance. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Continuous Signing in Sentences <s> This paper is concerned with the automatic recognition of German continuous sign language. For the most user-friendliness only one single color video camera is used for image recording. The statistical approach is based on the Bayes decision rule for minimum error rate. Following speech recognition system design, which are in general based on subunits, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs will be outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits called fenone. K-means algorithm is used for the definition of such fenones. The software prototype of the system is currently evaluated in experiments. <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Continuous Signing in Sentences <s> Hitherto, the major challenge to sign language recognition is how to develop approaches that scale well with increasing vocabulary size. 
We present an approach to large vocabulary, continuous Chinese sign language (CSL) recognition that uses phonemes instead of whole signs as the basic units. Since the number of phonemes is limited, HMM-based training and recognition of the CSL signal becomes more tractable and has the potential to recognize enlarged vocabularies. Furthermore, the proposed method facilitates the CSL recognition when the finger-alphabet is blended with gestures. About 2400 phonemes are defined for CSL. One HMM is built for each phoneme, and then the signs are encoded based on these phonemes. A decoder that uses a tree-structured network is presented. Clustering of the Gaussians on the states, the language model and N-best-pass is used to improve the performance of the system. Experiments on a 5119 sign vocabulary are carried out, and the result is exciting. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Continuous Signing in Sentences <s> A new method to recognize continuous sign language based on hidden Markov model is proposed. According to the dependence of linguistic context, connections between elementary subwords are classified as strong connection and weak connection. The recognition of strong connection is accomplished with the aid of subword trees, which describe the connection of subwords in each sign language word. In weak connection, the main problem is how to extract the best matched subwords and find their end-points with little help of context information. The proposed method improves the summing process of the Viterbi decoding algorithm which is constrained in every individual model, and compares the end score at each frame to find the ending frame of a subword. Experimental results show an accuracy of 70% for continuous sign sentences that comprise no more than 4 subwords. 
<s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Continuous Signing in Sentences <s> The major challenges that sign language recognition (SLR) now faces are developing methods that solve large vocabulary continuous sign problems. In this paper, large vocabulary continuous SLR based on transition movement models is proposed. The proposed method employs the temporal clustering algorithm to cluster a large amount of transition movements, and then the corresponding training algorithm is also presented for automatically segmenting and training these transition movement models. The clustered models can improve the generalization of transition movement models, and are very suitable for large vocabulary continuous SLR. At last, the estimated transition movement models, together with sign models, are viewed as candidate models of the Viterbi search algorithm for recognizing continuous sign language. Experiments show that continuous SLR based on transition movement models has good performance over a large vocabulary of 5113 signs. <s> BIB007
|
A popular approach for dealing with continuous signs without explicit segmentation as above is to use HMMs for implicit sentence segmentation (as mentioned in Section 3.3.1). In continuous speech recognition, coarticulation effects due to neighboring phonemes predominantly result in pronunciation variations. This is usually accounted for by modeling sounds in context: for example, triphones model a phoneme in the context of its preceding and succeeding phonemes, thereby greatly multiplying the number of HMM models required. The various methods that have been employed in dealing with sign transitions are generally different from the context-dependent models in speech. For example, Starner et al. BIB002 and Bauer and Kraiss BIB004 used one HMM to model each sign word (or subunit, in BIB004 ) and trained the HMMs using data from entire sentences in an embedded training scheme ( ), in order to incorporate variations in sign appearance during continuous signing. This would result in a large variation in the observations of the initial and ending states of an HMM, due to the large variations in the appearance of all the possible movement epenthesis that could occur between two signs. This may result in loss of modeling accuracy for valid sign words. Wang et al. ([146] , BIB005 ) used a different approach where they trained HMMs on isolated words and subunits and chained them together only at recognition time, while employing measures to detect and discount possible movement epenthesis frames: signs were assumed to end in still frames, and the following frames were considered to be transition frames. This method of training with isolated sign data would not be able to accommodate processes where the appearance of a sign is affected by its context (e.g., hold deletion). Other works accounted for movement epenthesis by explicitly modeling it. In Assan and Grobel BIB001 , all transitions between signs go through a single state, while in Gao et al. 
BIB003 separate HMMs model the transitions between each unique pair of signs that occur in sequence (Fig. 7). In more recent experiments BIB007 , the number of such transition HMMs was reduced by clustering the transition frames. In Vogler , separate HMMs model the transitions between each unique ending and starting location of signs (Fig. 6a). In BIB003 , BIB007 and , all HMM models are trained on data from entire sentences and, hence, in principle, variations in sign appearance due to context are accounted for. Vogler also assessed the advantage of explicit epenthesis modeling by making experimental comparisons with context-independent HMMs (as used in BIB002 , BIB004 ) and context-dependent biphone HMMs (one HMM is trained for every valid combination of two signs). On a test set of 97 sentences constructed from a 53-sign vocabulary, explicit epenthesis modeling was shown to have the best word recognition accuracy (92.1 percent), while context-independent modeling had the worst (87.7 percent versus 89.9 percent for biphone models). Yuan et al. BIB006 used HMMs for continuous sign recognition without employing a language model. They alternated word recognition with movement epenthesis detection. The ending data frame of a word was detected when the attempt to match subsequent frames to the word's last state produced a sharp drop in the probability scores. The next few frames were regarded as movement epenthesis if there was significant movement of a short duration and were discarded. Word recognition accuracy for sentences employing a vocabulary of 40 CSL signs was 70 percent.
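The effect of explicit epenthesis modeling can be sketched with a toy search; the vocabulary, log-likelihood values, and exhaustive enumeration below are illustrative stand-ins for trained sign/transition HMMs and a Viterbi search, not figures from the surveyed systems.

```python
import itertools

# Hypothetical log-likelihoods standing in for trained HMM scores:
# sign_ll[s] scores the observation span of sign s, and trans_ll[(a, b)]
# scores the movement-epenthesis segment between signs a and b.
sign_ll = {"I": -2.0, "GO": -3.0, "HOME": -2.5, "EAT": -4.0}
trans_ll = {("I", "GO"): -1.0, ("GO", "HOME"): -0.5,
            ("I", "EAT"): -2.5, ("EAT", "HOME"): -3.0}

def sentence_score(signs):
    """Total log-likelihood of a sign sequence with an explicit
    epenthesis model inserted between every pair of adjacent signs."""
    score = sum(sign_ll[s] for s in signs)
    score += sum(trans_ll.get(pair, -10.0)  # unseen transitions are penalized
                 for pair in zip(signs, signs[1:]))
    return score

def best_sentence(length):
    """Exhaustive stand-in for the Viterbi search over candidate models."""
    return max(itertools.product(sign_ll, repeat=length), key=sentence_score)

print(best_sentence(3))  # → ('I', 'GO', 'HOME')
```

In the surveyed systems, each transition entry would itself be an HMM scored against the actual epenthesis frames (possibly shared across clustered transitions, as in BIB007 ) rather than a fixed number, and dynamic programming replaces the exhaustive enumeration.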
|
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> Abstract This paper describes a method of classifying single view deaf-and-mute sign language motion images. We suppose the sign language word is composed of a time sequence of units called cheremes. The chereme is described by handshape, movement, and location of the hand, which can be said to express the 3-D features of the sign language. First, a dictionary for recognizing the sign language is made based on the cheremes. Then, the macro 2-D features of the location of a hand and its movement are extracted from the red component of the input color image sequence. Further, the micro 2-D features of the shape of the hand are also extracted if necessary. The 3-D feature descriptions of the dictionary are converted into 2-D image features, and the input sign language image is classified according to the extracted features of the 2-D image. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> A gesture recognition method for Japanese sign language is presented. We have developed a posture recognition system using neural networks which could recognize a finger alphabet of 42 symbols. We then developed a gesture recognition system where each gesture specifies a word. Gesture recognition is more difficult than posture recognition because it has to handle dynamic processes. To deal with dynamic processes we use a recurrent neural network. Here, we describe a gesture recognition method which can recognize continuous gesture. We then discuss the results of our research. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> In this paper, a framework for maximum a posteriori (MAP) estimation of hidden Markov models (HMM) is presented. 
Three key issues of MAP estimation, namely, the choice of prior distribution family, the specification of the parameters of prior densities, and the evaluation of the MAP estimates, are addressed. Using HMM's with Gaussian mixture state observation densities as an example, it is assumed that the prior densities for the HMM parameters can be adequately represented as a product of Dirichlet and normal-Wishart densities. The classical maximum likelihood estimation algorithms, namely, the forward-backward algorithm and the segmental k-means algorithm, are expanded, and MAP estimation formulas are developed. Prior density estimation issues are discussed for two classes of applications - parameter smoothing and model adaptation - and some experimental results are given illustrating the practical interest of this approach. Because of its adaptive nature, Bayesian learning is shown to serve as a unified approach for a wide range of speech recognition applications. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> The design and evaluation of a two-stage neural network which can recognize isolated ASL signs is given. The input to this network is the hand shape and position data obtained from a DataGlove mounted with a Polhemus sensor. The first level consists of four backpropagation neural networks which can recognize the sign language phonology, namely, the 36 hand shapes, 10 locations, 11 orientations, and 11 hand movements. The recognized phonemes from the beginning, middle, and end of the sign are fed to the second stage which recognizes the actual signs. Both backpropagation and Kohonen's self-organizing neural network was used to compare the performance and the expandability of the learned vocabulary. In the current work, six signers with differing hand sizes signed 14 signs which included hand shape, position, and motion fragile and triple robust signs. 
When a backpropagation network was used for the second stage, the results show that the network was able to recognize these signs with an overall accuracy of 86%. Further, the recognition results were linearly dependent on the size of the finger in relation to the metacarpophalangeal joint and the total length of the hand. When the second stage was a Kohonen's self-organizing network, the network could not only recognize the signs with 84% accuracy, but also expand its learned vocabulary through relabeling. <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> We present a system for recognising hand-gestures in Sign language. The system works in real-time and uses input from a colour video camera. The user wears different coloured gloves on either hand and colour matching is used to distinguish the hands from each other and from the background. So far the system has been tested in fixed lighting conditions, with the camera a fixed distance from the user. The system is user-dependent. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> We describe a video-based analysis system for acquisition and classification of hand-arm motion concerning German sign language. These motions are recorded with a single video camera by use of a modular framegrabber system. Data acquisition as well as motion classification are performed in real-time. A colour coded glove and coloured markers at the elbow and shoulder are used. These markers are segmented from the recorded input images as a first step of image processing. Thereafter features of these coloured areas are calculated which are used for determining the 2D positions for each frame and hence the positions of hand and arm. The missing third dimension is derived from a geometric model of the human hand-arm system. 
The sequence of the position data is converted into a certain representation of motion. Motion is derived from rule-based classification of the performed gesture, which yields a recognition rate of 95%. <s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> This paper documents the recognition method of deciphering Japanese sign language (JSL) using projected images. The goal of the movement recognition is to foster communication between hearing impaired and people capable of normal speech. We use a stereo camera for recording three-dimensional movements, an image processing board for tracking movements, and a personal computer for an image processor charting the recognition of JSL patterns. This system works by formalizing the space area of the signers according to the characteristics of the human body, determining components such as location and movements, and then recognizing sign language patterns. The system is able to recognize JSL by determining the extent of similarities in the sign field, and does so even when vibrations in hand movements occur and when there are differences in body build. We obtained useful results from recognition experiments in 38 different JSL signs in two signers. <s> BIB007 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> This work presents a design for a human computer interface capable of recognizing 25 gestures from the international hand alphabet in real-time. Principal Component Analysis (PCA) is used to extract features from images of gestures. The features represent gesture images in terms of an optimal coordinate system, in which the classes of gestures make up clusters. The system is divided into two parts: an off-line and an on-line part. The feature selection and generation of a classifier is performed off-line. 
On-line the obtained features and the classifier are used to classify new and unknown gesture images in real-time. Results show that an overall off-line recognition rate averaging 99% on 1500 images is achieved when trained on 1000 other images. The on-line system runs at 14 frames per second. <s> BIB008 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> A new pattern matching method, the partly-hidden Markov model, is proposed for gesture recognition. The hidden Markov model, which is widely used for the time series pattern recognition, can deal with only piecewise stationary stochastic process. We solved this problem by introducing the modified second order Markov model, in which the first state is hidden and the second one is observable. As shown by the results of 6 sign-language recognition test, the error rate was improved by 73% compared with normal HMM. <s> BIB009 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> This paper is concerned with the video-based recognition of signs. Concentrating on the manual parameters of sign language, the system aims for the signer dependent recognition of 262 different signs taken from Sign Language of the Netherlands. For Hidden Markov Modelling a sign is considered a doubly stochastic process, represented by an unobservable state sequence. The observations emitted by the states are regarded as feature vectors, that are extracted from video frames. This work deals with three topics: Firstly the recognition of isolated signs, secondly the influence of variations of the feature vector on the recognition rate and thirdly an approach for the recognition of connected signs. The system achieves recognition rates up to 94% for isolated signs and 73% for a reduced vocabulary of connected signs. 
<s> BIB010 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> A large vocabulary sign language interpreter is presented with real-time continuous gesture recognition of sign language using a data glove. Sign language, which is usually known as a set of natural language with formal semantic definitions and syntactic rules, is a large set of hand gestures that are daily used to communicate with the hearing impaired. The most critical problem, end-point detection in a stream of gesture input is first solved and then statistical analysis is done according to four parameters in a gesture: posture, position, orientation, and motion. The authors have implemented a prototype system with a lexicon of 250 vocabularies and collected 196 training sentences in Taiwanese Sign Language (TWL). This system uses hidden Markov models (HMMs) for 51 fundamental postures, 6 orientations, and 8 motion primitives. In a signer-dependent way, a sentence of gestures based on these vocabularies can be continuously recognized in real-time and the average recognition rate is 80.4%. <s> BIB011 </s> We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon. <s> BIB012 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. 
SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures. <s> BIB013 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> This paper presents a system for the recognition of sign language based on a theory of shape representation using size functions proposed by P. Frosini [5]. Our system consists of three modules: feature extraction, sign representation and sign recognition. The first performs an edge detection operation, the second uses size functions and inertia moments to represent hand signs, and the last uses a neural network to recognize hand gestures. Sign representation is an important step which we will deal with. Unlike previous work [15, 16], a new approach to the representation of hand gestures is proposed, based on size functions. Each sign is represented by means of a feature vector computed from a new pair of moment-based size functions. The work reported here indicates that moment-based size functions can be effectively used for the recognition of sign language even in the presence of shape changes due to differences in hands, position, style of signing, and viewpoint. <s> BIB014 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> In this paper, a system designed for helping the deaf to communicate with others is presented. Some useful new ideas are proposed in design and implementation. An algorithm based on geometrical analysis for the purpose of extracting features invariant to signer position is presented. An ANN–DP combined approach is employed for segmenting subwords automatically from the data stream of sign signals. To tackle the epenthesis movement problem, a DP-based method has been used to obtain the context-dependent models.
Some techniques for system implementation are also given, including fast matching, frame prediction and search algorithms. The implemented system is able to recognize continuous large vocabulary Chinese Sign Language. Experiments show that the proposed techniques are efficient in both recognition speed and recognition performance. <s> BIB015 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> Automatic gesture recognition systems generally require two separate processes: a motion sensing process where some motion features are extracted from the visual input; and a classification process where the features are recognised as gestures. We have developed the Hand Motion Understanding (HMU) system that uses the combination of a 3D model-based hand tracker for motion sensing and an adaptive fuzzy expert system for motion classification. The HMU system understands static and dynamic hand signs of the Australian Sign Language (Auslan). This paper presents the hand tracker that extracts 3D hand configuration data with 21 degrees-of-freedom (DOFs) from a 2D image sequence that is captured from a single viewpoint, with the aid of a colour-coded glove. Then the temporal sequence of 3D hand configurations detected by the tracker is recognised as a sign by an adaptive fuzzy expert system. The HMU system was evaluated with 22 static and dynamic signs. Before training the HMU system achieved 91% recognition, and after training it achieved over 95% recognition. <s> BIB016 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> Sign language is the language used by the deaf, which is a comparatively steadier expressive system composed of signs corresponding to postures and motions assisted by facial expression. The objective of sign language recognition research is to "see" the language of the deaf.
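As a concrete illustration of the moment features used in several of these shape representations, the second-order central (inertia) moments can be computed directly from a segmented binary hand silhouette; this is a generic sketch of such features, not the size-function method itself:

```python
import numpy as np

def inertia_moments(mask):
    """Centroid and second-order central moments of a binary silhouette.
    Central moments are translation-invariant; normalizing by area adds
    scale invariance, and the eigen-axes give hand orientation."""
    ys, xs = np.nonzero(mask)
    n = xs.size
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).sum() / n
    mu02 = ((ys - cy) ** 2).sum() / n
    mu11 = ((xs - cx) * (ys - cy)).sum() / n
    return (cx, cy), (mu20, mu02, mu11)
```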
The integration of sign language recognition and sign language synthesis jointly comprises a "human-computer sign language interpreter", which facilitates the interaction between the deaf and their surroundings. Considering the speed and performance of the recognition system, Cyberglove is selected as the gesture input device in our sign language recognition system, Semi-Continuous Dynamic Gaussian Mixture Model (SCDGMM) is used as the recognition technique, and a search scheme based on relative entropy is proposed and applied to SCDGMM-based sign word recognition. Compared with the SCDGMM recognizer without the search scheme, the recognition time of the SCDGMM recognizer with the search scheme is reduced almost 15-fold. <s> BIB017 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> Gesture based applications widely range from replacing the traditional mouse as a position device to virtual reality and communication with the deaf. The article presents a fuzzy rule based approach to spatio-temporal hand gesture recognition. This approach employs a powerful method based on hyperrectangular composite neural networks (HRCNNs) for selecting templates. Templates for each hand shape are represented in the form of crisp IF-THEN rules that are extracted from the values of synaptic weights of the corresponding trained HRCNNs. Each crisp IF-THEN rule is then fuzzified by employing a special membership function in order to represent the degree to which a pattern is similar to the corresponding antecedent part. When an unknown gesture is to be classified, each sample of the unknown gesture is tested by each fuzzy rule. The accumulated similarity associated with all samples of the input is computed for each hand gesture in the vocabulary, and the unknown gesture is classified as the gesture yielding the highest accumulated similarity. Based on the method we can implement a small-sized dynamic hand gesture recognition system.
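The relative-entropy search idea can be illustrated with the closed-form KL divergence between diagonal Gaussians, used here to shortlist candidate sign models before full likelihood evaluation. The pruning rule below is a simplified stand-in for the paper's scheme, not its exact formulation:

```python
import numpy as np

def kl_diag_gauss(m0, v0, m1, v1):
    """KL( N(m0, v0) || N(m1, v1) ) for diagonal covariances (closed form)."""
    return 0.5 * (np.log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1.0).sum()

def shortlist(query_mean, query_var, models, k=3):
    """Keep the k sign models closest in relative entropy to a quick
    Gaussian fit of the input; full scoring then runs only on these."""
    dist = {name: kl_diag_gauss(query_mean, query_var, m, v)
            for name, (m, v) in models.items()}
    return sorted(dist, key=dist.get)[:k]
```

Because most candidates are rejected by a cheap divergence test instead of a full model evaluation, this kind of pruning is what makes the reported order-of-magnitude speedups plausible.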
Two databases consisting of 90 spatio-temporal hand gestures are utilized to verify its performance. Encouraging experimental results confirm the effectiveness of the proposed method. <s> BIB018 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> The accurate classification of hand gestures is crucial in the development of novel hand gesture-based systems designed for human-computer interaction (HCI) and for human alternative and augmentative communication (HAAC). A complete vision-based system, consisting of hand gesture acquisition, segmentation, filtering, representation and classification, is developed to robustly classify hand gestures. The algorithms in the subsystems are formulated or selected to optimally classify hand gestures. The gray-scale image of a hand gesture is segmented using a histogram thresholding algorithm. A morphological filtering approach is designed to effectively remove background and object noise in the segmented image. The contour of a gesture is represented by a localized contour sequence whose samples are the perpendicular distances between the contour pixels and the chord connecting the end-points of a window centered on the contour pixels. Gesture similarity is determined by measuring the similarity between the localized contour sequences of the gestures. Linear alignment and nonlinear alignment are developed to measure the similarity between the localized contour sequences. Experiments and evaluations on a subset of American Sign Language (ASL) hand gestures show that, by using nonlinear alignment, no gestures are misclassified by the system. Additionally, it is also estimated that real-time gesture classification is possible through the use of a high-speed PC, high-speed digital signal processing chips and code optimization.
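The nonlinear alignment mentioned here is essentially dynamic time warping: one localized contour sequence is elastically matched onto another so that local stretching does not count as dissimilarity. A minimal sketch (illustrative cost and step pattern, not the authors' exact formulation):

```python
import numpy as np

def dtw_distance(a, b):
    """Nonlinear alignment (dynamic time warping) between two 1-D
    localized contour sequences; lower means more similar."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Classification then reduces to nearest-template search under this distance, which is why nonlinear alignment tolerates contour sequences of different lengths where linear alignment does not.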
<s> BIB019 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> This paper introduces a model-based hand gesture recognition system, which consists of three phases: feature extraction, training, and recognition. In the feature extraction phase, a hybrid technique combines the spatial (edge) and the temporal (motion) information of each frame to extract the feature images. Then, in the training phase, we use the principal component analysis (PCA) to characterize spatial shape variations and the hidden Markov models (HMM) to describe the temporal shape variations. A modified Hausdorff distance measurement is also applied to measure the similarity between the feature images and the pre-stored PCA models. The similarity measures are referred to as the possible observations for each frame. Finally, in recognition phase, with the pre-trained PCA models and HMM, we can generate the observation patterns from the input sequences, and then apply the Viterbi algorithm to identify the gesture. In the experiments, we prove that our method can recognize 18 different continuous gestures effectively. <s> BIB020 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> Hand gestures play an important role in communication between people during their daily lives. But the extensive use of hand gestures as a mean of communication can be found in sign languages. Sign language is the basic communication method between deaf people. A translator is usually needed when an ordinary person wants to communicate with a deaf one. The work presented in this paper aims at developing a system for automatic translation of gestures of the manual alphabets in the Arabic sign language. In doing so, we have designed a collection of ANFIS networks, each of which is trained to recognize one gesture. 
Our system does not rely on using any gloves or visual markings to accomplish the recognition job. Instead, it deals with images of bare hands, which allows the user to interact with the system in a natural way. An image of the hand gesture is processed and converted into a set of features that comprises the lengths of some vectors which are selected to span the fingertips' region. The extracted features are rotation, scale, and translation invariant, which makes the system more flexible. The subtractive clustering algorithm and the least-squares estimator are used to identify the fuzzy inference system, and the training is achieved using the hybrid learning algorithm. Experiments revealed that our system was able to recognize the 30 Arabic manual alphabets with an accuracy of 93.55%. <s> BIB021 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> A divide-and-conquer approach is presented for signer-independent continuous Chinese Sign Language (CSL) recognition in this paper. The problem of continuous CSL recognition is divided into the subproblems of isolated CSL recognition. The simple recurrent network (SRN) and the hidden Markov models (HMM) are combined in this approach. The improved SRN is introduced for segmentation of continuous CSL. Outputs of the SRN are regarded as the states of the HMM, and the Lattice Viterbi algorithm is employed to search for the best word sequence in the HMM framework. Experimental results show that the SRN/HMM approach has better performance than the standard HMM one. <s> BIB022 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> This paper deals with the automatic recognition of German signs. The statistical approach is based on the Bayes decision rule for minimum error rate.
Following speech recognition system designs, which are in general based on phonemes, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs is outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected, as new signs can be added to the vocabulary database without re-training the existing hidden Markov models (HMMs) for subunits. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits. In first experiments, a recognition accuracy of 92.5% was achieved for 100 signs, which were previously trained. For 50 new signs, an accuracy of 81% was achieved without retraining of the subunit HMMs. <s> BIB023 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> Hitherto, the major challenge to sign language recognition is how to develop approaches that scale well with increasing vocabulary size. We present an approach to large vocabulary, continuous Chinese sign language (CSL) recognition that uses phonemes instead of whole signs as the basic units. Since the number of phonemes is limited, HMM-based training and recognition of the CSL signal becomes more tractable and has the potential to recognize enlarged vocabularies. Furthermore, the proposed method facilitates CSL recognition when the finger-alphabet is blended with gestures. About 2400 phonemes are defined for CSL. One HMM is built for each phoneme, and then the signs are encoded based on these phonemes. A decoder that uses a tree-structured network is presented. Clustering of the Gaussians on the states, the language model and an N-best pass are used to improve the performance of the system. Experiments on a 5119-sign vocabulary are carried out, and the results are encouraging.
<s> BIB024 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> Principal Component Analysis (PCA) and Multiple Discriminant Analysis (MDA) have long been used for appearance-based hand posture recognition. In this paper, we propose a novel PCA/MDA scheme for hand posture recognition. Unlike other PCA/MDA schemes, the PCA layer acts as a crude classification. Since posture alone cannot provide sufficient discriminating information, each input pattern will be given a likelihood of being in the nodes of PCA layers, instead of a strict division. Based on the Expectation-Maximization (EM) algorithm, we introduce three methods to estimate the parameters for this crude classification during training. The experiments on a 110-sign vocabulary show a significant improvement compared with the global PCA/MDA. <s> BIB025 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> The paper presents a portable system and method for recognizing the 26 hand shapes of the American Sign Language alphabet, using a novel glove-like device. Two additional signs, 'space' and 'enter', are added to the alphabet to allow the user to form words or phrases and send them to a speech synthesizer. Since the hand shape for a letter varies from one signer to another, this is a 28-class pattern recognition system. A three-level hierarchical classifier divides the problem into "dispatchers" and "recognizers." After reducing pattern dimension from ten to three, the projection of class distributions onto horizontal planes makes it possible to apply simple linear discrimination in 2D, and Bayes' Rule in those cases where classes had features with overlapped distributions. Twenty-one out of 26 letters were recognized with 100% accuracy; the worst case, letter U, achieved 78%.
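The idea of a PCA layer that gates rather than hard-assigns can be sketched as follows: project a posture vector into a PCA subspace and give each class a likelihood-like weight instead of a strict division. This is a simplification of the EM-trained scheme, and the bandwidth `tau` is an assumed parameter:

```python
import numpy as np

def pca_fit(X, k):
    """Principal axes of the data via SVD; returns mean and top-k basis."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def soft_class_weights(x, mu, W, class_means, tau=1.0):
    """Project a posture vector into the PCA subspace and return a
    soft (likelihood-like) weight for each class node."""
    z = W @ (x - mu)
    d2 = ((class_means - z) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * tau))
    return w / w.sum()
```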
<s> BIB026 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> In this paper, we introduce a hand gesture recognition system to recognize continuous gestures before a stationary background. The system consists of four modules: real-time hand tracking and extraction, feature extraction, hidden Markov model (HMM) training, and gesture recognition. First, we apply a real-time hand tracking and extraction algorithm to trace the moving hand and extract the hand region, then we use the Fourier descriptor (FD) to characterize spatial features and motion analysis to characterize the temporal features. We combine the spatial and temporal features of the input image sequence as our feature vector. After having extracted the feature vectors, we apply HMMs to recognize the input gesture. The gesture to be recognized is separately scored against different HMMs. The model with the highest score indicates the corresponding gesture. In the experiments, we have tested our system to recognize 20 different gestures, and the recognition rate is above 90%. <s> BIB027 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> This work presents a hierarchical approach to recognize isolated 3-D hand gesture trajectories for signing exact English (SEE). SEE hand gestures can be periodic as well as non-periodic. We first differentiate between periodic and non-periodic gestures, followed by recognition of individual gestures. After periodicity detection, non-periodic trajectories are classified into 8 classes and periodic trajectories are classified into 4 classes. A Polhemus tracker is used to provide the input data. Periodicity detection is based on Fourier analysis, and hand trajectories are recognized by vector quantization principal component analysis (VQPCA). The average periodicity detection accuracy is 95.9%.
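Fourier-based periodicity detection can be approximated by checking whether a single non-DC frequency bin dominates the trajectory's power spectrum. The threshold below is an illustrative choice, not the value tuned in the paper:

```python
import numpy as np

def is_periodic(traj, threshold=0.5):
    """Flag a 1-D hand trajectory as periodic when one non-DC frequency
    holds more than `threshold` of the total (mean-removed) power."""
    x = np.asarray(traj, dtype=float)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    power[0] = 0.0  # drop any residual DC component
    total = power.sum()
    return bool(total > 0 and power.max() / total > threshold)
```

A clean circular or oscillatory gesture concentrates its power in one bin and passes the test; a non-periodic trajectory spreads power across the spectrum and fails it.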
The average recognition rates with VQPCA for non-periodic and periodic gestures are 97.3% and 97.0%, respectively. In comparison, k-means clustering yielded 87.0% and 85.1%, respectively. <s> BIB028 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> Grammatical information conveyed through systematic temporal and spatial movement modifications is an integral aspect of sign language communication. We propose to model these systematic variations as simultaneous channels of information. Classification results at the channel level are output to Bayesian networks which recognize both the basic gesture meaning and the grammatical information (here referred to as layered meanings). With a simulated vocabulary of 6 basic signs and 5 possible layered meanings, test data for eight test subjects was recognized with 85.0% accuracy. We also adapt a system trained on three test subjects to recognize gesture data from a fourth person, based on a small set of adaptation data. We obtained gesture recognition accuracy of 88.5%, which is a 75.7% reduction in error rate as compared to the unadapted system. <s> BIB029 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> A Bayesian network is a graphical model that encodes probabilistic relationships among variables of interest. When used in conjunction with statistical techniques, the graphical model has several advantages for data analysis. One, because the model encodes dependencies among all variables, it readily handles situations where some data entries are missing. Two, a Bayesian network can be used to learn causal relationships, and hence can be used to gain understanding about a problem domain and to predict the consequences of intervention.
Three, because the model has both a causal and probabilistic semantics, it is an ideal representation for combining prior knowledge (which often comes in causal form) and data. Four, Bayesian statistical methods in conjunction with Bayesian networks offer an efficient and principled approach for avoiding the overfitting of data. In this paper, we discuss methods for constructing Bayesian networks from prior knowledge and summarize Bayesian statistical methods for using data to improve these models. With regard to the latter task, we describe methods for learning both the parameters and structure of a Bayesian network, including techniques for learning with incomplete data. In addition, we relate Bayesian-network methods for learning to techniques for supervised and unsupervised learning. We illustrate the graphical-modeling approach using a real-world case study. <s> BIB030
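For discrete-valued networks like these, the Bayesian parameter update described here reduces to adding observed counts to Dirichlet pseudo-counts. A minimal sketch for one conditional probability table (the prior counts are an assumed encoding of prior knowledge):

```python
import numpy as np

def update_cpt(prior_counts, data_counts):
    """Posterior-mean estimate of a CPT: Dirichlet pseudo-counts encode
    the prior; observed counts from data are simply added, so sparse or
    missing data falls back gracefully on the prior."""
    post = np.asarray(prior_counts, dtype=float) + np.asarray(data_counts, dtype=float)
    return post / post.sum(axis=-1, keepdims=True)
```

This is the same mechanism that makes the signer-adaptation results in the next section possible: a network trained on registered signers supplies the pseudo-counts, and a small adaptation set supplies the data counts.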
|
Analogous to speaker independence in speech recognition, an ideal sign recognition system would work "right out of the box," giving good recognition accuracy for signers not represented in the training data set (unregistered signers). Sources of interperson variation that could impact sign recognition accuracy include different personal signing styles, different sign usage due to geographical or social background ( ), and the fit of gloves in direct-measure device approaches. In this area, sign recognition lags far behind speech: many works report signer-dependent results where a single signer provided both training and test data ( BIB023 , BIB015 , BIB016 , BIB011 , BIB007 , BIB002 , BIB012 , BIB005 , BIB001 , , , BIB024 , BIB017 ), while other works have only 2 to 10 signers in the training and test set ( BIB008 , BIB025 , BIB019 , BIB014 , BIB026 , BIB006 , , BIB018 , , BIB013 , BIB004 ). The largest number of test subjects was 20 in BIB027 , BIB020 , BIB009 and 60 for alphabet handshape recognition in BIB021 . This is still far fewer than the number of test speakers for which good results have been reported in speech systems. When the number of signers in the training set is small, results on test data from unregistered signers can be severely degraded. In Kadous , accuracy decreased from an average of 80 percent to 15 percent when a system trained on four signers was tested on an unregistered signer. In Assan and Grobel BIB010 , accuracy for training on one signer and testing on a different signer was 51.9 percent, compared to 92 percent when the same signer supplied both training and test data. Better results were obtained when data from more signers was used for training. In Vamplew and Adams BIB013 , seven signers provided training data; test data from these same (registered) signers was recognized with 94.2 percent accuracy, versus 85.3 percent accuracy for three unregistered signers. Fang et al.
BIB022 trained a recognition system for continuous signing on five signers and obtained test data accuracy of 92.1 percent for these signers, compared to 85.0 percent for an unregistered signer. Classification accuracy for unregistered signers is also relatively good when only handshape is considered, perhaps due to less interperson variation as compared to the other gesture components. For example, BIB014 and BIB018 reported 93-96 percent handshape classification accuracy for registered signers versus 85-91 percent accuracy for unregistered signers. Interestingly, Kong and Ranganath BIB028 showed similarly good results for classifying 3D movement trajectories. Test data from six unregistered signers were classified with 91.2 percent accuracy versus 99.7 percent for test data from four registered signers. In speech recognition, performance for a new speaker can be improved by using a small amount of data from the new speaker to adapt a prior trained system without retraining the system from scratch. The equivalent area of signer adaptation is relatively new. Some experimental results were shown in Ong and Ranganath BIB029 where speaker adaptation methods were modified to perform maximum a posteriori estimation BIB003 on component-level classifiers and Bayesian estimation of Bayesian Network parameters BIB030 . This gave 88.5 percent gesture recognition accuracy for test data from a new signer by adapting a system that was previously trained on three other signers -a 75.7 percent reduction in error rate as compared to using the unadapted system.
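The MAP adaptation step borrowed from speech can be sketched for Gaussian means: each mean moves toward the new signer's data in proportion to how much adaptation data it is responsible for, controlled by a relevance factor `r`. The setup and the value of `r` are illustrative, not the exact formulation used in BIB029:

```python
import numpy as np

def map_adapt_means(means, frames, resp, r=16.0):
    """MAP (relevance-factor) update of Gaussian means.

    means:  (K, D) signer-independent prior means
    frames: (T, D) adaptation data from the new signer
    resp:   (T, K) soft assignment of frames to Gaussians
    """
    n_k = resp.sum(axis=0)                                    # data per Gaussian
    x_bar = (resp.T @ frames) / np.maximum(n_k, 1e-9)[:, None]
    alpha = (n_k / (n_k + r))[:, None]                        # data-vs-prior weight
    return alpha * x_bar + (1.0 - alpha) * means
```

Gaussians that see little or no adaptation data keep their prior (multi-signer) means, which is why a small adaptation set suffices and full retraining is unnecessary.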
|
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> This paper explores the use of local parametrized models of image motion for recovering and recognizing the non-rigid and articulated motion of human faces. Parametric flow models (for example affine) are popular for estimating motion in rigid scenes. We observe that within local regions in space and time, such models not only accurately model non-rigid facial motions but also provide a concise description of the motion in terms of a small number of parameters. These parameters are intuitively related to the motion of facial features during facial expressions and we show how expressions such as anger, happiness, surprise, fear, disgust and sadness can be recognized from the local parametric motions in the presence of significant head motion. The motion tracking and expression recognition approach performs with high accuracy in extensive laboratory experiments involving 40 subjects as well as in television and movie sequences. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> This paper describes a method of real-time facial expression recognition which is based on automatic measurement of the facial features' dimension and the positional relationship between them. The method is composed of two parts, the facial feature extraction using matching techniques and the facial expression recognition using statistics of position and dimension of the features. The method is implemented in an experimental hardware system and the performance is evaluated. The extraction rates of the facial-area, the mouth and the eyes are about 100%, 96% and 90%, respectively, and the recognition rates of facial expression such as normal, angry, surprise, smile and sad expression are 54%, 89%, 86%, 53% and 71%, respectively, for a specific person.
The whole processing speed is about 15 frames/second. Finally, we touch on some applications such as man-machine interface, automatic generation of facial graphic animation and sign language translation using facial expression recognition techniques. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> Sign Language is a rich and expressive means of communication used by profoundly deaf people as an alternative to speech. Computer recognition of sign language represents a demanding research objective analogous to recognising continuous speech, but has many more potential applications in areas such as human body motion tracking and analysis, surveillance, video telephony, and non-invasive interfacing with virtual reality applications. In this paper, we survey those aspects of human body motion which are relevant to sign language, and outline an overall system architecture for computer vision-based sign language recognition. We then discuss the initial stages of processing required, and show how recognition of static manual and facial gestures can be used to provide the low-level features from which an integrated multi-channel dynamic gesture recognition system can be constructed. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> Humans detect and interpret faces and facial expressions in a scene with little or no effort. Still, development of an automated system that accomplishes this task is rather difficult. There are several related problems: detection of an image segment as a face, extraction of the facial expression information, and classification of the expression (e.g., in emotion categories). A system that performs these operations accurately and in real time would form a big step in achieving a human-like interaction between man and machine. 
The paper surveys the past work in solving these problems. The capability of the human visual system with respect to these problems is discussed, too. It is meant to serve as an ultimate goal and a guide for determining recommendations for development of an automatic facial expression analyzer. <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> Most automatic facial expression analysis systems try to analyze emotion categories. However, psychologists argue that there is no straightforward way to classify emotions from facial expressions. Instead, they propose FACS (facial action coding system), a de-facto standard for categorizing facial actions independent from emotional categories. We describe a system that recognizes asymmetric FACS action unit activities and intensities without the use of markers. Facial expression extraction is achieved by difference images that are projected into a sub-space using either PCA or ICA, followed by nearest neighbor classification. Experiments show that this holistic approach achieves a recognition performance comparable to marker-based facial expression analysis systems or human FACS experts for a single-subject database recorded under controlled conditions. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> This paper discusses our expert system called Integrated System for Facial Expression Recognition (ISFER), which performs recognition and emotional classification of human facial expression from a still full-face image. The system consists of two major parts. The first one is the ISFER Workbench, which forms a framework for hybrid facial feature detection. Multiple feature detection techniques are applied in parallel.
The redundant information is used to define unambiguous face geometry containing no missing or highly inaccurate data. The second part of the system is its inference engine called HERCULES, which converts low level face geometry into high level facial actions, and then this into highest level weighted emotion labels. <s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> This paper describes a vision-based method for recognizing the nonmanual information in Japanese Sign Language (JSL). This new modality information provides grammatical constraints useful for JSL word segmentation and interpretation. Our attention is focused on head motion, the most dominant non-manual information in JSL. We designed an interactive color-modeling scheme for robust face detection. Two video cameras are vertically arranged to take the frontal and profile image of the JSL user, and head motions are classified into eleven patterns. Moment-based features and statistical motion features are adopted to represent these motion patterns. Classification of the motion features is performed with a linear discriminant analysis method. Initial experimental results show that the method has a good recognition rate and can be realized in real-time. <s> BIB007 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> Non-manual signals (NMS) are grammatical elements in sign languages. They may convey information that reinforces or is additional to the hand signing. NMS are similar to facial expressions except that, unlike spontaneous emotions, NMS are deliberate gestures. This paper explores the use of Independent Component Analysis (ICA) and Gabor wavelet networks (GWNs) for recognising 3 upper face and 3 lower face expressions related to NMS. Independent component analysis and Gabor wavelet networks were compared as representations for these facial signals. Both representations provided good recognition performance. The method of using GWNs with 116 wavelets outperformed ICA (85.3% and 93.3% for upper and lower face respectively, compared to 78.7% and 92% for ICA). However, the GWN method is computationally more expensive. <s> BIB009 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> An automated system for detection of head movements is described. The goal is to label relevant head gestures in video of American Sign Language (ASL) communication. In the system, a 3D head tracker recovers head rotation and translation parameters from monocular video. Relevant head gestures are then detected by analyzing the length and frequency of the motion signal's peaks and valleys.
Each parameter is analyzed independently, due to the fact that a number of relevant head movements in ASL are associated with major changes around one rotational axis. No explicit training of the system is necessary. Currently, the system can detect "head shakes." In experimental evaluation, classification performance is compared against ground-truth labels obtained from ASL linguists. Initial results are promising, as the system matches the linguists' labels in a significant number of cases. <s> BIB010 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> Over the last decade, automatic facial expression analysis has become an active research area that finds potential applications in areas such as more engaging human-computer interfaces, talking heads, image retrieval and human emotion analysis. Facial expressions reflect not only emotions, but other mental activities, social interaction and physiological signals. In this survey we introduce the most prominent automatic facial expression analysis methods and systems presented in the literature. Facial motion and deformation extraction approaches as well as classification methods are discussed with respect to issues such as face normalization, facial expression dynamics and facial expression intensity, but also with regard to their robustness towards environmental changes. <s> BIB011
|
Broadly, the main elements of NMS in SL involve facial expressions, head and body pose, and movement. Often body and especially head movements co-occur with facial expressions (e.g., a question is asked by thrusting the head forward while simultaneously raising the eyebrows). The head could also tilt to the side or rotate left/right. This is further complicated by hand gestures being performed on or in front of the face/head region. Thus, tracking of the head is required while it is undergoing rigid motion, with possible out-of-plane rotation and occlusion by hands. Further, the face has to be distinguished from the hands. Recent surveys BIB011 , BIB004 show much research interest in automatic analysis of facial expressions. However, these works generally cannot be directly applied to facial expressions in NMS due to their limited robustness and inability to characterize the temporal evolution of expressions. Most facial expression recognition approaches constrain faces to be fairly stationary and frontal to the camera. On the other hand, works that consider head tracking in less constrained environments do not include facial expression recognition. Black and Yacoob's local parametric model BIB001 is a notable exception: they successfully tracked facial features under significant rigid head motion and out-of-plane rotation and recognized six different expressions of emotions in video sequences. Though facial expressions in NMS involve articulators that include the cheeks, tongue, nose and chin, most local feature-based approaches only consider the mouth, eyes and eyebrows (e.g., BIB001 ). Facial expression has often been analyzed on static images of the peak expression, thereby ignoring the dynamics, timing, and intensity of the expression. This is not a good fit for NMS where different facial expressions are performed sequentially, and sometimes repetitively, evolving over a period of time.
Thus, the timing of the expression in relation to the hand gestures produced, as well as the temporal evolution of the expression's intensity, need to be determined. There are very few works that measure the intensity of facial expressions or which model the dynamics of expressions (examples of exceptions are BIB001 , BIB005 ). In many works, facial expression recognition is limited to the six basic emotions as defined by Ekman (happiness, sadness, surprise, fear, anger, disgust) plus the neutral expression, which involve the face as a whole. This is too constrained for NMS where the upper and lower face expressions can be considered to be separate, parallel channels of information that carry different grammatical information or semantic meaning. In this respect, the more promising approaches use a mid-level representation of facial action either defined by the researchers themselves ( BIB001 ) or which follows an existing coding scheme (MPEG-4 or the Facial Action Coding System). The recognition results of the mid-level representation code could in turn be used to interpret NMS facial expressions, in a fashion similar to rule-based approaches which interpret recognized codes as emotion classes BIB001 , BIB006 . A few works that consider facial expression analysis BIB008 , BIB009 , BIB002 , BIB003 and head motion and pose analysis BIB010 , BIB007 in the context of SL are described in Appendix D (www.computer.org/publications/dlib). The body movements and postures involved in NMS generally consist of torso motion (without whole-body movement), for example, body leaning forwards/backwards or turning to the sides. So far, no work has specifically considered recognition of this type of body motion. Although there has been much work done in tracking and recognition of human activities that involve whole-body movements, e.g., walking or dancing (as surveyed in ), these approaches may have difficulty in dealing with the subtler body motions exhibited in NMS.
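The head-motion analysis cited above ( BIB010 ) detects head shakes by analyzing the length and frequency of the peaks and valleys of a tracked rotation signal, one rotational axis at a time. The following Python sketch illustrates that general idea on a per-frame yaw signal; the function name, thresholds, and frame rate are illustrative assumptions, not values from the cited work.

```python
import numpy as np

def detect_head_shake(yaw_deg, fps=30.0,
                      min_amplitude=5.0,   # degrees per swing; illustrative threshold
                      min_reversals=3,     # direction changes required; illustrative
                      max_period=1.0):     # max seconds allowed per reversal; illustrative
    """Decide whether a 1-D yaw-rotation signal (degrees per frame) is a head shake.

    Sketch of peak/valley analysis on one rotational axis: find points where the
    motion direction reverses, keep only reversals with a large enough swing,
    then require enough reversals occurring quickly enough.
    """
    yaw = np.asarray(yaw_deg, dtype=float)
    d = np.diff(yaw)
    # Indices where the frame-to-frame motion direction reverses (a peak or valley).
    reversals = [i for i in range(1, len(d)) if d[i - 1] * d[i] < 0]
    # Measure each swing from the previous extremum (or the start of the signal).
    extrema = [0] + reversals + [len(yaw) - 1]
    strong = [reversals[k] for k in range(len(reversals))
              if abs(yaw[extrema[k + 1]] - yaw[extrema[k]]) >= min_amplitude]
    if len(strong) < min_reversals:
        return False
    # The strong reversals must happen quickly enough to count as a shake.
    span_s = (strong[-1] - strong[0]) / fps
    return span_s <= max_period * (len(strong) - 1)
```

A 2 Hz sinusoidal yaw oscillation of 10 degrees amplitude is accepted, while a static or slowly drifting head is rejected; a real system would run the same analysis independently for each rotation axis, as the cited work does.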
|
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> This paper describes a method of real-time facial expression recognition which is based on automatic measurement of the facial features' dimension and the positional relationship between them. The method is composed of two parts, the facial feature extraction using matching techniques and the facial expression recognition using statistics of position and dimension of the features. The method is implemented in an experimental hardware system and the performance is evaluated. The extraction rates of the facial-area, the mouth and the eyes are about 100%, 96% and 90%, respectively, and the recognition rates of facial expression such as normal, angry, surprise, smile and sad expression are 54%, 89%, 86%, 53% and 71%, respectively, for a specific person. The whole processing speed is about 15 frames/second. Finally, we touch on some applications such as man-machine interface, automatic generation of facial graphic animation and sign language translation using facial expression recognition techniques. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> Sign Language is a rich and expressive means of communication used by profoundly deaf people as an alternative to speech. Computer recognition of sign language represents a demanding research objective analogous to recognising continuous speech, but has many more potential applications in areas such as human body motion tracking and analysis, surveillance, video telephony, and non-invasive interfacing with virtual reality applications. In this paper, we survey those aspects of human body motion which are relevant to sign language, and outline an overall system architecture for computer vision-based sign language recognition. 
We then discuss the initial stages of processing required, and show how recognition of static manual and facial gestures can be used to provide the low-level features from which an integrated multi-channel dynamic gesture recognition system can be constructed. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> A person stands in front of a large projection screen on which is shown a checked floor. They say, "Make a table," and a wooden table appears in the middle of the floor."On the table, place a vase," they gesture using a fist relative to palm of their other hand to show the relative location of the vase on the table. A vase appears at the correct location."Next to the table place a chair." A chair appears to the right of the table."Rotate it like this," while rotating their hand causes the chair to turn towards the table."View the scene from this direction," they say while pointing one hand towards the palm of the other. The scene rotates to match their hand orientation.In a matter of moments, a simple scene has been created using natural speech and gesture. The interface of the future? Not at all; Koons, Thorisson and Bolt demonstrated this work in 1992 [23]. Although research such as this has shown the value of combining speech and gesture at the interface, most computer graphics are still being developed with tools no more intuitive than a mouse and keyboard. This need not be the case. Current speech and gesture technologies make multimodal interfaces with combined voice and gesture input easily achievable. There are several commercial versions of continuous dictation software currently available, while tablets and pens are widely supported in graphics applications. However, having this capability doesn't mean that voice and gesture should be added to every modeling package in a haphazard manner. 
There are numerous issues that must be addressed in order to develop an intuitive interface that uses the strengths of both input modalities. In this article we describe motivations for adding voice and gesture to graphical applications, review previous work showing different ways these modalities may be used and outline some general interface guidelines. Finally, we give an overview of promising areas for future research. Our motivation for writing this is to spur developers to build compelling interfaces that will make speech and gesture as common on the desktop as the keyboard and mouse. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> We present a statistical approach to developing multimodal recognition systems and, in particular, to integrating the posterior probabilities of parallel input signals involved in the multimodal system. We first identify the primary factors that influence multimodal recognition performance by evaluating the multimodal recognition probabilities. We then develop two techniques, an estimate approach and a learning approach, which are designed to optimize accurate recognition during the multimodal integration process. We evaluate these methods using Quickset, a speech/gesture multimodal system, and report evaluation results based on an empirical corpus collected with Quickset. From an architectural perspective, the integration technique presented offers enhanced robustness. It also is premised on more realistic assumptions than previous multimodal systems using semantic fusion.
From a methodological standpoint, the evaluation techniques that we describe provide a valuable tool for evaluating multimodal systems. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> This paper describes a vision-based method for recognizing the nonmanual information in Japanese Sign Language (JSL). This new modality information provides grammatical constraints useful for JSL word segmentation and interpretation. Our attention is focused on head motion, the most dominant non-manual information in JSL. We designed an interactive color-modeling scheme for robust face detection. Two video cameras are vertically arranged to take the frontal and profile image of the JSL user, and head motions are classified into eleven patterns. Moment-based features and statistical motion features are adopted to represent these motion patterns. Classification of the motion features is performed with the linear discriminant analysis method. Initial experimental results show that the method has a good recognition rate and can be realized in real-time. <s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> The parallel multistream model is proposed for the integration of sign language recognition and lip motion. The different time scales existing in sign language and lip motion can be tackled well using this approach. Preliminary experimental results have shown that this approach is efficient for the integration of sign language recognition and lip motion. The promising results indicated that the parallel multistream model can be a good solution in the framework of multimodal data fusion. An approach to recognize sign language with scalability with the size of vocabulary and a fast approach to locate lip corners are also proposed in this paper.
<s> BIB007 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> Non-manual signals (NMS) are grammatical elements in sign languages. They may convey information that reinforces or is additional to the hand signing. NMS are similar to facial expressions except that, unlike spontaneous emotions, NMS are deliberate gestures. This paper explores the use of Independent Component Analysis (ICA) and Gabor wavelet networks (GWNs) for recognising 3 upper face and 3 lower face expressions related to NMS. Independent component analysis and Gabor wavelet networks were compared as representations for these facial signals. Both representations provided good recognition performance. The method of using GWNs with 116 wavelets outperformed ICA (85.3% and 93.3% for upper and lower face respectively, compared to 78.7% and 92% for ICA). However, the GWN method is computationally more expensive. <s> BIB008 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> An automated system for detection of head movements is described. The goal is to label relevant head gestures in video of American Sign Language (ASL) communication. In the system, a 3D head tracker recovers head rotation and translation parameters from monocular video. Relevant head gestures are then detected by analyzing the length and frequency of the motion signal's peaks and valleys. Each parameter is analyzed independently, due to the fact that a number of relevant head movements in ASL are associated with major changes around one rotational axis. No explicit training of the system is necessary. Currently, the system can detect "head shakes." In experimental evaluation, classification performance is compared against ground-truth labels obtained from ASL linguists.
Initial results are promising, as the system matches the linguists' labels in a significant number of cases. <s> BIB009
|
Results from the analysis of NMS need to be integrated with recognition results of the hand gestures in order to extract all the information expressed. Our search for works in automatic NMS analysis revealed none that capture the information from all the nonmanual cues of facial expression, head and body posture and movement. Some classify facial expression only BIB008 , BIB001 , BIB002 , while others classify head movement only BIB009 , BIB006 . Of these, there are only a couple of works which consider combining information extracted from nonmanual cues with results of gesture recognition. Ma et al. BIB007 modeled features extracted from lip motion and hand gestures with separate HMM channels using a modified version of Bourlard's multistream model BIB004 and resembling Vogler's Parallel HMM. Viterbi scores from each channel are combined at sign boundaries where synchronization occurs. The different time scales of hand gestures and lip motion were accounted for by having a different number of states for the same phrase/sign in each channel. In experiments where the lip motion expressed the same word (in spoken Chinese) as the gestured sign, 9 out of 10 phrases that were incorrectly recognized with hand gesture modeling alone were correctly recognized when lip motion was also modeled. There are several issues involved in integrating information from NMS with sign gesture recognition. In BIB007 , the assumption was that each phrase uttered by the lips coincides with a sign/phrase in the gesture. However, in general NMS may co-occur with one or more signs/phrases, and hence a method for dealing with the different time scales in such cases is required. Also, in BIB007 , the lip motion and hand gesturing convey identical information, while in general, NMS convey independent information, and the recognition results of NMS may not always serve to disambiguate results of hand gesture recognition.
In fact, NMS often independently convey information in multiple channels through upper and lower face expressions, and head and body movements. Multiple cameras may be required to capture the torso's movement and still obtain good resolution images of the face for facial expression analysis. While some of the schemes employed in general multimodal integration research might be useful for application to this domain, we note that most of these schemes involve at most two channels of information, one of which is generally speech/voice ( BIB003 , BIB005 ). It remains to be seen whether these can be applied to the multiple channels of information conveyed by NMS and hand gesturing in SL.
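The score-level fusion discussed above (combining per-channel Viterbi scores at sign boundaries, as in Ma et al. BIB007 ) can be sketched in a few lines of Python. The function name, channel labels, and uniform weighting below are illustrative assumptions; the cited system additionally allows a different number of HMM states per channel, which this sketch does not model.

```python
def fuse_channel_scores(channel_scores, weights=None):
    """Combine per-channel Viterbi log-scores over a set of candidate signs.

    channel_scores: dict mapping a channel name (e.g. 'hands', 'lips') to a
        dict of {candidate_sign: log_score} produced by that channel's HMMs
        for the segment between two sign boundaries.
    weights: optional dict of per-channel reliability weights (default 1.0).

    Returns the candidate sign with the highest weighted sum of log-scores,
    i.e. score-level fusion at the synchronization point. Illustrative
    sketch, not the exact formulation of the cited system.
    """
    weights = weights or {}
    # Only candidates scored by every channel can be fused.
    candidates = set.intersection(
        *(set(scores) for scores in channel_scores.values()))
    fused = {
        sign: sum(weights.get(ch, 1.0) * scores[sign]
                  for ch, scores in channel_scores.items())
        for sign in candidates
    }
    return max(fused, key=fused.get)
```

For example, if the hand-gesture channel slightly prefers one sign but the lip channel strongly prefers another, the fused decision can flip to the lip channel's choice, which is exactly the disambiguation effect reported for the lip-motion experiments above.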
|
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper describes a method of classifying single view deaf-and-mute sign language motion images. We suppose the sign language word is composed of a time sequence of units called cheremes. The chereme is described by handshape, movement, and location of the hand, which can be said to express the 3-D features of the sign language. First, a dictionary for recognizing the sign language is made based on the cheremes. Then, the macro 2-D features of the location of a hand and its movement are extracted from the red component of the input color image sequence. Further, the micro 2-D features of the shape of the hand are also extracted if necessary. The 3-D feature descriptions of the dictionary are converted into 2-D image features, and the input sign language image is classified according to the extracted features of the 2-D image. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> A gesture recognition method for Japanese sign language is presented. We have developed a posture recognition system using neural networks which could recognize a finger alphabet of 42 symbols. We then developed a gesture recognition system where each gesture specifies a word. Gesture recognition is more difficult than posture recognition because it has to handle dynamic processes. To deal with dynamic processes we use a recurrent neural network. Here, we describe a gesture recognition method which can recognize continuous gestures. We then discuss the results of our research. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> The design and evaluation of a two-stage neural network which can recognize isolated ASL signs is given. The input to this network is the hand shape and position data obtained from a DataGlove mounted with a Polhemus sensor.
The first level consists of four backpropagation neural networks which can recognize the sign language phonology, namely, the 36 hand shapes, 10 locations, 11 orientations, and 11 hand movements. The recognized phonemes from the beginning, middle, and end of the sign are fed to the second stage which recognizes the actual signs. Both backpropagation and Kohonen's self-organizing neural network were used to compare the performance and the expandability of the learned vocabulary. In the current work, six signers with differing hand sizes signed 14 signs which included hand shape, position, and motion fragile and triple robust signs. When a backpropagation network was used for the second stage, the results show that the network was able to recognize these signs with an overall accuracy of 86%. Further, the recognition results were linearly dependent on the size of the finger in relation to the metacarpophalangeal joint and the total length of the hand. When the second stage was a Kohonen's self-organizing network, the network could not only recognize the signs with 84% accuracy, but also expand its learned vocabulary through relabeling. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> A new pattern matching method, the partly-hidden Markov model, is proposed for gesture recognition. The hidden Markov model, which is widely used for time series pattern recognition, can deal with only piecewise stationary stochastic processes. We solved this problem by introducing the modified second order Markov model, in which the first state is hidden and the second one is observable. As shown by the results of six sign-language recognition tests, the error rate was improved by 73% compared with a normal HMM.
<s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper documents the recognition method of deciphering Japanese sign language(JSL) using projected images. The goal of the movement recognition is to foster communication between hearing impaired and people capable of normal speech. We uses a stereo camera for recording three-dimensional movements, a image processing board for tracking movements, and a personal computer for an image processor charting the recognition of JSL patterns. This system works by formalizing tile space area of the signers according to the characteristics of the human body, determining components such as location and movements, and then recognizing sign language patterns. The system is able to recognize JSL by determining the extent of similarities in the sign field, and does so even when vibrations in hand movements occur and when there are differences in body build. We obtained useful results from recognition experiments in 38 different JSL in two signers. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper is concerned with the video-based recognition of signs. Concentrating on the manual parameters of sign language, the system aims for the signer dependent recognition of 262 different signs taken from Sign Language of the Netherlands. For Hidden Markov Modelling a sign is considered a doubly stochastic process, represented by an unobservable state sequence. The observations emitted by the states are regarded as feature vectors, that are extracted from video frames. This work deals with three topics: Firstly the recognition of isolated signs, secondly the influence of variations of the feature vector on the recognition rate and thirdly an approach for the recognition of connected signs. 
The system achieves recognition rates up to 94% for isolated signs and 73% for a reduced vocabulary of connected signs. <s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper describes the development of a corpus or database of hand-arm pointing gestures, considered as a basic element for gestural communication. The structure of the corpus is defined for natural pointing movements carried out in different directions, heights and amplitudes. It is then extended to movement primitives habitually used in sign language communication. The corpus is based on movements recorded using an optoelectronic recording system that allows the 3D description of movement trajectories in space. The main technical characteristics of the capture and pretreatment system are presented, and perspectives are highlighted for recognition and generation purposes. <s> BIB007 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> The automatic recognition of sign language is an attractive prospect; the technology exists to make it possible, while the potential applications are exciting and worthwhile. To date the research emphasis has been on the capture and classification of the gestures of sign language and progress in that work is reported. However, it is suggested that there are some greater, broader research questions to be addressed before full sign language recognition is achieved. The main areas to be addressed are sign language representation (grammars) and facial expression recognition. <s> BIB008 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper presents a sign language recognition system which consists of three modules: model-based hand tracking, feature extraction, and gesture recognition using a 3D Hopfield neural network (HNN). 
The first one uses the Hausdorff distance measure to track shape-variant hand motion, the second one applies the scale and rotation-invariant Fourier descriptor to characterize hand figures, and the last one performs a graph matching between the input gesture model and the stored models by using a 3D modified HNN to recognize the gesture. Our system tests 15 different hand gestures. The experimental results show that our system can achieve a recognition rate above 91%, and the recognition process time is about 10 s. The major contribution in this paper is that we propose a 3D modified HNN for gesture recognition which is more reliable than the conventional methods. <s> BIB009 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> A large vocabulary sign language interpreter is presented with real-time continuous gesture recognition of sign language using a data glove. Sign language, which is usually known as a set of natural language with formal semantic definitions and syntactic rules, is a large set of hand gestures that are daily used to communicate with the hearing impaired. The most critical problem, end-point detection in a stream of gesture input, is first solved and then statistical analysis is done according to four parameters in a gesture: posture, position, orientation, and motion. The authors have implemented a prototype system with a lexicon of 250 vocabularies and collected 196 training sentences in Taiwanese Sign Language (TWL). This system uses hidden Markov models (HMMs) for 51 fundamental postures, 6 orientations, and 8 motion primitives. In a signer-dependent way, a sentence of gestures based on these vocabularies can be continuously recognized in real-time and the average recognition rate is 80.4%.
<s> BIB010 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures. <s> BIB011 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> The paper describes a real-time system which tracks the uncovered/unmarked hands of a person performing sign language. It extracts the face and hand regions using their skin colors, computes blobs and then tracks the location of each hand using a Kalman filter. The system has been tested for hand tracking using actual sign-language motion by native signers. The experimental results indicate that the system is capable of tracking hands even while they are overlapping the face. <s> BIB012 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION
<s> BIB013 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> The parallel multistream model is proposed for the integration of sign language recognition and lip motion. The different time scales existing in sign language and lip motion can be tackled well using this approach. Preliminary experimental results have shown that this approach is efficient for the integration of sign language recognition and lip motion. The promising results indicated that the parallel multistream model can be a good solution in the framework of multimodal data fusion. An approach to recognize sign language with scalability with the size of vocabulary and a fast approach to locate lip corners are also proposed in this paper. <s> BIB014 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> In this paper, we present a new approach to recognizing hand signs. In this approach, motion recognition (the hand movement) is tightly coupled with spatial recognition (hand shape). The system uses multiclass, multidimensional discriminant analysis to automatically select the most discriminating linear features for gesture classification. A recursive partition tree approximator is proposed to do classification. This approach combined with our previous work on hand segmentation forms a new framework which addresses the three key aspects of hand sign interpretation: hand shape, location, and movement. The framework has been tested to recognize 28 different hand signs. The experimental results show that the system achieved a 93.2% recognition rate for test sequences that had not been used in the training phase. It is shown that our approach provides performance better than that of nearest neighbor classification in the eigensubspace.
<s> BIB015 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Automatic gesture recognition systems generally require two separate processes: a motion sensing process where some motion features are extracted from the visual input; and a classification process where the features are recognised as gestures. We have developed the Hand Motion Understanding (HMU) system that uses the combination of a 3D model-based hand tracker for motion sensing and an adaptive fuzzy expert system for motion classification. The HMU system understands static and dynamic hand signs of the Australian Sign Language (Auslan). This paper presents the hand tracker that extracts 3D hand configuration data with 21 degrees-of-freedom (DOFs) from a 2D image sequence that is captured from a single viewpoint, with the aid of a colour-coded glove. Then the temporal sequence of 3D hand configurations detected by the tracker is recognised as a sign by an adaptive fuzzy expert system. The HMU system was evaluated with 22 static and dynamic signs. Before training the HMU system achieved 91% recognition, and after training it achieved over 95% recognition. <s> BIB016 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Gesture-based applications widely range from replacing the traditional mouse as a position device to virtual reality and communication with the deaf. The article presents a fuzzy rule based approach to spatio-temporal hand gesture recognition. This approach employs a powerful method based on hyperrectangular composite neural networks (HRCNNs) for selecting templates. Templates for each hand shape are represented in the form of crisp IF-THEN rules that are extracted from the values of synaptic weights of the corresponding trained HRCNNs.
Each crisp IF-THEN rule is then fuzzified by employing a special membership function in order to represent the degree to which a pattern is similar to the corresponding antecedent part. When an unknown gesture is to be classified, each sample of the unknown gesture is tested by each fuzzy rule. The accumulated similarity associated with all samples of the input is computed for each hand gesture in the vocabulary, and the unknown gesture is classified as the gesture yielding the highest accumulative similarity. Based on the method we can implement a small-sized dynamic hand gesture recognition system. Two databases which consisted of 90 spatio-temporal hand gestures are utilized for verifying its performance. An encouraging experimental result confirms the effectiveness of the proposed method. <s> BIB017 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Sign language is the language used by the deaf, which is a comparatively steadier expressive system composed of signs corresponding to postures and motions assisted by facial expression. The objective of sign language recognition research is to "see" the language of deaf. The integration of sign language recognition and sign language synthesis jointly comprise a "human-computer sign language interpreter", which facilitates the interaction between deaf and their surroundings. Considering the speed and performance of the recognition system, Cyberglove is selected as gesture input device in our sign language recognition system, Semi-Continuous Dynamic Gaussian Mixture Model (SCDGMM) is used as recognition technique, and a search scheme based on relative entropy is proposed and is applied to SCDGMM-based sign word recognition. Comparing with SCDGMM recognizer without searching scheme, the recognition time of SCDGMM recognizer with searching scheme reduces almost 15 times. 
<s> BIB018 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Since the human hand is highly articulated and deformable, hand posture recognition is a challenging example in the research on view-independent object recognition. Due to the difficulties of the model-based approach, the appearance-based learning approach is promising to handle large variation in visual inputs. However, the generalization of many proposed supervised learning methods to this problem often suffers from the insufficiency of labeled training data. This paper describes an approach to alleviate this difficulty by adding a large unlabeled training set. Combining supervised and unsupervised learning paradigms, a novel and powerful learning approach, the Discriminant-EM (D-EM) algorithm, is proposed in this paper to handle the case of a small labeled training set. Experiments show that D-EM outperforms many other learning methods. Based on this approach, we implement a gesture interface to recognize a set of predefined gesture commands, and it is also extended to hand detection. This algorithm can also apply to other object recognition tasks. <s> BIB019 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Human motion recognition has many important applications, such as improved human-computer interaction and surveillance. A big problem that plagues this research area is that human movements can be very complex. Managing this complexity is difficult. We turn to American sign language (ASL) recognition to identify general methods that reduce the complexity of human motion recognition. We present a framework for continuous 3D ASL recognition based on linguistic principles, especially the phonology of ASL. This framework is based on parallel hidden Markov models (HMMs), which are able to capture both the sequential and the simultaneous aspects of the language. 
Each HMM is based on a single phoneme of ASL. Because the phonemes are limited in number, as opposed to the virtually unlimited number of signs that can be composed from them, we expect this framework to scale well to larger applications. We then demonstrate the general applicability of this framework to other human motion recognition tasks by extending it to gait recognition. <s> BIB020 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper is concerned with the automatic recognition of German continuous sign language. For the most user-friendliness only one single color video camera is used for image recording. The statistical approach is based on the Bayes decision rule for minimum error rate. Following speech recognition system design, which are in general based on subunits, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs will be outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits called fenone. K-means algorithm is used for the definition of such fenones. The software prototype of the system is currently evaluated in experiments. <s> BIB021 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper introduces a model-based hand gesture recognition system, which consists of three phases: feature extraction, training, and recognition. In the feature extraction phase, a hybrid technique combines the spatial (edge) and the temporal (motion) information of each frame to extract the feature images. 
Then, in the training phase, we use the principal component analysis (PCA) to characterize spatial shape variations and the hidden Markov models (HMM) to describe the temporal shape variations. A modified Hausdorff distance measurement is also applied to measure the similarity between the feature images and the pre-stored PCA models. The similarity measures are referred to as the possible observations for each frame. Finally, in recognition phase, with the pre-trained PCA models and HMM, we can generate the observation patterns from the input sequences, and then apply the Viterbi algorithm to identify the gesture. In the experiments, we prove that our method can recognize 18 different continuous gestures effectively. <s> BIB022 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> The aim of this paper is to raise the ethical problems which appear when hearing computer scientists work on the Sign Languages (SL) used by the deaf communities, specially in the field of Sign Language recognition. On one hand, the problematic history of institutionalised SL must be known. On the other hand, the linguistic properties of SL must be learned by computer scientists before trying to design systems with the aim to automatically translate SL into oral or written language or vice-versa. The way oral language and SL function is so different that it seems impossible to work on that topic without a close collaboration with linguists specialised in SL and deaf people. <s> BIB023 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Research on recognition and generation of signed languages and the gestural component of spoken languages has been held back by the unavailability of large-scale linguistically annotated corpora of the kind that led to significant advances in the area of spoken language. 
A major obstacle has been the lack of computational tools to assist in efficient analysis and transcription of visual language data. Here we describe SignStream, a computer program that we have designed to facilitate transcription and linguistic analysis of visual language. Machine vision methods to assist linguists in detailed annotation of gestures of the head, face, hands, and body are being developed. We have been using SignStream to analyze data from native signers of American Sign Language (ASL) collected in our new video collection facility, equipped with multiple synchronized digital video cameras. The video data and associated linguistic annotations are being made publicly available in multiple formats. <s> BIB024 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> To describe non-manual signals (NMS's) of Japanese Sign Language (JSL), we have developed the notational system sIGNDEX. The notation describes both JSL words and NMS's. We specify characteristics of sIGNDEX in detail. We have also made a linguistic corpus that contains 100 JSL utterances. We show how sIGNDEX successfully describes not only manual signs but also NMS's that appear in the corpus. Using the results of the descriptions, we conducted statistical analyses of NMS's, which provide us with intriguing facts about frequencies and correlations of NMS's. <s> BIB025 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper deals with the automatic recognition of German signs. The statistical approach is based on the Bayes decision rule for minimum error rate. Following speech recognition system designs, which are in general based on phonemes, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs is outlined. The advantage of such a system will be a future reduction of necessary training material. 
Furthermore, a simplified enlargement of the existing vocabulary is expected, as new signs can be added to the vocabulary database without re-training the existing hidden Markov models (HMMs) for subunits. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits. In first experiences a recognition accuracy of 92,5% was achieved for 100 signs, which were previously trained. For 50 new signs an accuracy of 81% was achieved without retraining of subunit-HMMs. <s> BIB026 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Hitherto, the major challenge to sign language recognition is how to develop approaches that scale well with increasing vocabulary size. We present an approach to large vocabulary, continuous Chinese sign language (CSL) recognition that uses phonemes instead of whole signs as the basic units. Since the number of phonemes is limited, HMM-based training and recognition of the CSL signal becomes more tractable and has the potential to recognize enlarged vocabularies. Furthermore, the proposed method facilitates the CSL recognition when the finger-alphabet is blended with gestures. About 2400 phonemes are defined for CSL. One HMM is built for each phoneme, and then the signs are encoded based on these phonemes. A decoder that uses a tree-structured network is presented. Clustering of the Gaussians on the states, the language model and N-best-pass is used to improve the performance of the system. Experiments on a 5119 sign vocabulary are carried out, and the result is exciting. <s> BIB027 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> A new method to recognize continuous sign language based on hidden Markov model is proposed. According to the dependence of linguistic context, connections between elementary subwords are classified as strong connection and weak connection. 
The recognition of strong connection is accomplished with the aid of subword trees, which describe the connection of subwords in each sign language word. In weak connection, the main problem is how to extract the best matched subwords and find their end-points with little help of context information. The proposed method improves the summing process of the Viterbi decoding algorithm which is constrained in every individual model, and compares the end score at each frame to find the ending frame of a subword. Experimental results show an accuracy of 70% for continuous sign sentences that comprise no more than 4 subwords. <s> BIB028 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Principle Component Analysis (PCA) and Multiple Discriminant Analysis (MDA) have long been used for the appearance-based hand posture recognition. In this paper, we propose a novel PCA/MDA scheme for hand posture recognition. Unlike other PCA/MDA schemes, the PCA layer acts as a crude classification. Since posture alone cannot provide sufficient discriminating information, each input pattern will be given a likelihood of being in the nodes of PCA layers, instead of a strict division. Based on the Expectation-Maximization (EM) algorithm, we introduce three methods to estimate the parameters for this crude classification during training. The experiments on a 110-sign vocabulary show a significant improvement compared with the global PCA/MDA. <s> BIB029 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> We present an algorithm for extracting and classifying two-dimensional motion in an image sequence based on motion trajectories. First, a multiscale segmentation is performed to generate homogeneous regions in each frame. Regions between consecutive frames are then matched to obtain two-view correspondences. 
Affine transformations are computed from each pair of corresponding regions to define pixel matches. Pixels matches over consecutive image pairs are concatenated to obtain pixel-level motion trajectories across the image sequence. Motion patterns are learned from the extracted trajectories using a time-delay neural network. We apply the proposed method to recognize 40 hand gestures of American Sign Language. Experimental results show that motion patterns of hand gestures can be extracted and recognized accurately using motion trajectories. <s> BIB030 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Sign language is characterized by its interactivity and multimodality, which cause difficulties in data collection and annotation. To address these difficulties, we have developed a video-based Japanese sign language (JSL) corpus and a corpus tool for annotation and linguistic analysis. As the first step of linguistic annotation, we transcribed manual signs expressing lexical information as well as non-manual signs (NMSs) including head movements, facial actions, and posture that are used to express grammatical information. Our purpose is to extract grammatical rules from this corpus for the sign-language translation system underdevelopment. From this viewpoint, we will discuss methods for collecting elicited data, annotation required for grammatical analysis, as well as corpus tool required for annotation and grammatical analysis. As the result of annotating 2800 utterances, we confirmed that there are at least 50 kinds of NMSs in JSL, using head (seven kinds), jaw (six kinds), mouth (18 kinds), cheeks (one kind), eyebrows (four kinds), eyes (seven kinds), eye gaze (two kinds), bydy posture (five kinds). We use this corpus for designing and testing an algorithm and grammatical rules for the sign-language translation system underdevelopment. 
<s> BIB031 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> The aim of this paper is to specify some of the problems raised by the design of a gesture recognition system dedicated to Sign Language, and to propose suited solutions. The three topics considered here concern the simultaneity of information conveyed by manual signs, the possible temporal or spatial synchronicity between the two hands, and the different classes of signs that may be encountered in a Sign Language sentence. <s> BIB032 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Abstract In this paper, we introduce a hand gesture recognition system to recognize continuous gesture before stationary background. The system consists of four modules: a real time hand tracking and extraction, feature extraction, hidden Markov model (HMM) training, and gesture recognition. First, we apply a real-time hand tracking and extraction algorithm to trace the moving hand and extract the hand region, then we use the Fourier descriptor (FD) to characterize spatial features and the motion analysis to characterize the temporal features. We combine the spatial and temporal features of the input image sequence as our feature vector. After having extracted the feature vectors, we apply HMMs to recognize the input gesture. The gesture to be recognized is separately scored against different HMMs. The model with the highest score indicates the corresponding gesture. In the experiments, we have tested our system to recognize 20 different gestures, and the recognizing rate is above 90%. <s> BIB033 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> We build upon a constrained, lab-based Sign Languagerecognition system with the goal of making it a mobile assistivetechnology. 
We examine using multiple sensors for disambiguationof noisy data to improve recognition accuracy.Our experiment compares the results of training a smallgesture vocabulary using noisy vision data, accelerometerdata and both data sets combined. <s> BIB034 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This work discusses an approach for capturing and translating isolated gestures of American Sign Language into spoken and written words. The instrumented part of the system combines an AcceleGlove and a two-link arm skeleton. Gestures of the American Sign Language are broken down into unique sequences of phonemes called poses and movements, recognized by software modules trained and tested independently on volunteers with different hand sizes and signing ability. Recognition rates of independent modules reached up to 100% for 42 postures, orientations, 11 locations and 7 movements using linear classification. The overall sign recognizer was tested using a subset of the American Sign Language dictionary comprised by 30 one-handed signs, achieving 98% accuracy. The system proved to be scalable: when the lexicon was extended to 176 signs and tested without retraining, the accuracy was 95%. This represents an improvement over classification based on hidden Markov models (HMMs) and neural networks (NNs). <s> BIB035
|
In the Gesture Workshop of 1997, Edwards BIB008 identified two aspects of SL communication that had often been overlooked by researchers: facial expression and the use of space and spatial relationships in signing, especially with regard to classifier signs. In the ensuing period, although there has been some work to tackle these aspects, the focus of research continues to be elsewhere and hence progress has been limited. Among the facial expression recognition works surveyed, none were capable of recognizing and interpreting upper face and lower face expressions from video sequences, while simultaneously modeling the dynamics and intensity of expressions. A few works recognize head movements, particularly nods and shakes, but none interpret the body movements in NMS. Apart from BIB014 , which sought to improve sign gesture recognition results by combining them with lip reading, we are not aware of other work reporting results of integrating NMS and hand gestures. Works that interpret sign gestures whose form and manner of movement convey grammatical information mostly focused on spatial variations of the sign's movement. None of the works surveyed gave experimental results for interpretation of the mimetic classifier signs mentioned by Edwards BIB008 and Bossard et al. BIB032 . It is obvious from the discussion in Section 3.4.2 that this aspect of signing has not received attention. Current systems that only consider the citation form of signs would miss important information conveyed in natural signing, such as movement dynamics that convey temporal aspect and spatial variations that convey subject-object agreement. Worse still, since current systems do not account for spatial relationships between signs, some signs would be completely undecipherable, for example classifier signs that describe spatial relationships between objects, or signs that point to a location that had previously been established as a referent position. 
Noun-verb pairs like SEAT and SIT would be confused since the only difference between them is in the repetitive motion of the noun. Two issues that have received much attention are recognition of continuous signing in sentences (Section 3.4.1) and scaling to large sign vocabularies. To handle large vocabularies with limited training data, some researchers used the idea of sequential subunits ( BIB021 , BIB026 , BIB027 , BIB028 ), while others decomposed a sign gesture into its simultaneous components (Table 3) . Notably, Vogler did both: sign gestures were modeled as simultaneous, parallel channels of information, each of which was in turn modeled with sequential subunits. The largest vocabulary reported in experiments was 5,119 CSL signs in Wang et al. BIB027 . In contrast, many of the other works are limited in the vocabulary size they can handle because they use only a subset of the information necessary for recognizing a comprehensive vocabulary. For example, it is common for input data to be from one hand only ( , BIB033 , BIB015 , BIB029 , BIB035 , BIB016 , BIB009 , BIB022 , , BIB004 , BIB010 , BIB002 , BIB001 , BIB011 , BIB003 ). Matsuo et al. BIB005 and Yang et al. BIB030 used input from both hands but only measured position and motion data. A few of the works used only hand appearance features as input without any position or orientation data ( BIB017 , BIB018 , BIB029 , BIB016 , BIB022 ). Even though all these works reported good results for sign recognition (possibly arising from either choice of vocabulary or some inherent information redundancy in gesture components), the existence of minimal sign pairs means that recognition of a comprehensive sign vocabulary is not possible without input from all the gesture components. From Tables 2 and 3 , we see that vision-based approaches have tended to experiment with smaller vocabulary sizes as compared to direct-measure device approaches. 
The largest vocabulary size used was 262 in the recognition of isolated signs of the Netherlands SL BIB006 . This could be due to the difficulty in simultaneously extracting whole hand movement features and detailed hand appearance features from images. Most works that localize and track hand movement extract gross local features derived from the hand silhouette or contour. Thus, they may not be able to properly distinguish handshape and 3D hand orientation. Furthermore, handshape classification from multiple viewpoints is very difficult to achieve; Wu and Huang BIB019 are among the few to do so, although on a limited number (14) of handshapes. Many of the vision-based approaches achieved fairly good recognition results, but at the expense of very restrictive image capture environments; hence, robustness is a real problem. An interesting direction to overcome this limitation was taken in the wearable system of Brashear et al. BIB034 , where features from both vision and accelerometer data were used to classify signs. Signing was done in relatively unconstrained environments, i.e., while the signer was moving about in natural everyday settings. Continuous sentences constructed from a vocabulary of five signs were recognized with 90.5 percent accuracy, an improvement over using vision only data (52.4 percent) and accelerometer only data (65.9 percent). Low accuracy and precision in direct-measure devices can also affect recognition rate, a possibility in Kadous , as PowerGloves, which have coarse sensing, were used. At present, it is difficult to directly compare recognition results reported in the literature. Factors that could influence results include restrictions on vocabulary (to avoid minimal pairs or signs performed near the face), slower than normal signing speed, and unnatural signing to avoid occlusion. Unfortunately, this kind of experimental information is usually not reported. 
Another important issue is that very few systems have used data from native signers. Some exceptions are Imagawa et al. BIB012 and Tamura and Kawasaki BIB001 . Tanibata et al. used a professional interpreter. Braffort BIB023 made the point that the goal of recognizing natural signing requires close collaboration with native signers and SL linguists. Also, as the field matures, it is timely to tackle the problem of reproducibility by establishing standard databases. There are already some efforts in this direction. Neidle et al. BIB024 describe a corpus of native ASL signing that is being collected for the purpose of linguistic research as well as for aiding vision-based sign recognition research. Other efforts in this direction include BIB007 , BIB025 , BIB031 . We mentioned in the introduction that methods developed to solve problems in SL recognition can be applied to non-SL domains. An example of this is Nam and Wohn's work ( BIB013 ) on recognizing deictic, mimetic and pictographic gestures. Each gesture was broken down into attributes of handshape, hand orientation, and movement in a manner similar to decomposing sign gestures into their components. They further decomposed movement into sequential subunits of movement primitives and HMMs were employed to explicitly model connecting movements, similar to the approach in . In BIB020 , Vogler et al. applied the framework of decomposing movement into sequential subunits for the analysis of human gait. Three different gaits (walking on level terrain, up a slope, down a slope) were distinguished by analyzing all gaits as consisting of subunits (half-steps) and modeling the subunits with HMMs.
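Several of the recognizers discussed in this section score an observation sequence against a bank of per-sign (or per-subunit) HMMs and select the model with the highest Viterbi score. A minimal log-space sketch of that selection step is given below; the two toy two-state discrete-emission models and their parameters are purely illustrative assumptions, not any surveyed system's values.

```python
import numpy as np

def viterbi_log_score(log_pi, log_A, log_B, obs):
    """Best-path log-likelihood of a discrete observation sequence `obs`
    under an HMM given log initial, transition, and emission probabilities."""
    delta = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # For each current state, keep the best predecessor, then emit.
        delta = (delta[:, None] + log_A).max(axis=0) + log_B[:, o]
    return delta.max()

def classify_gesture(obs, models):
    """Pick the gesture whose HMM yields the highest Viterbi score."""
    return max(models, key=lambda name: viterbi_log_score(*models[name], obs))

# Two toy 2-state HMMs: "wave" is biased toward emitting symbol 0,
# "point" toward symbol 1 (all parameters are made up for illustration).
lp = np.log
models = {
    "wave": (lp([0.5, 0.5]), lp([[0.9, 0.1], [0.1, 0.9]]),
             lp([[0.9, 0.1], [0.9, 0.1]])),
    "point": (lp([0.5, 0.5]), lp([[0.9, 0.1], [0.1, 0.9]]),
              lp([[0.1, 0.9], [0.1, 0.9]])),
}
```

In a real system the discrete symbols would come from a vector quantizer or PCA-based observation model, and each sign (or subunit) would contribute one HMM to `models`.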
|
Survey paper on intrusion detection techniques <s> 4) FUZZY CLUSTERING FOR IDS: <s> In this paper, we introduce a novel technique, called F-APACS, for mining fuzzy association rules. Existing algorithms involve discretizing the domains of quantitative attributes into intervals so as to discover quantitative association rules. These intervals may not be concise and meaningful enough for human experts to easily obtain nontrivial knowledge from those rules discovered. Instead of using intervals, F-APACS employs linguistic terms to represent the revealed regularities and exceptions. The linguistic representation is especially useful when those rules discovered are presented to human experts for examination. The definition of linguistic terms is based on fuzzy set theory and hence we call the rules having these terms fuzzy association rules. The use of fuzzy techniques makes F-APACS resilient to noises such as inaccuracies in physical measurements of real-life entities and missing values in the databases. Furthermore, F-APACS employs adjusted difference analysis which has the advantage that it does not require any user-supplied thresholds which are often hard to determine. The fact that F-APACS is able to mine fuzzy association rules which utilize linguistic representation and that it uses an objective yet meaningful confidence measure to determine the interestingness of a rule makes it very effective at the discovery of rules from a real-life transactional database of a PBX system provided by a telecommunication corporation <s> BIB001 </s> Survey paper on intrusion detection techniques <s> 4) FUZZY CLUSTERING FOR IDS: <s> The Fuzzy Intrusion Recognition Engine (FIRE) is a network intrusion detection system that uses fuzzy systems to assess malicious activity against computer networks. The system uses an agent-based approach to separate monitoring tasks. Individual agents perform their own fuzzification of input data sources. 
All agents communicate with a fuzzy evaluation engine that combines the results of individual agents using fuzzy rules to produce alerts that are true to a degree. Several intrusion scenarios are presented along with the fuzzy systems for detecting the intrusions. The fuzzy systems are tested using data obtained from networks under simulated attacks. The results show that fuzzy systems can easily identify port scanning and denial of service attacks. The system can be effective at detecting some types of backdoor and Trojan horse attacks. <s> BIB002
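The FIRE pipeline described above, per-agent fuzzification followed by fuzzy-rule combination into an alert that is "true to a degree", can be sketched as follows. The port-scan rule, its input features, and the thresholds are illustrative assumptions, not FIRE's actual parameters.

```python
def ramp(x, lo, hi):
    """Piecewise-linear fuzzy membership: 0 at or below lo, 1 at or above hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def port_scan_alert(distinct_ports_per_min, unanswered_syns_per_min):
    # Each monitoring "agent" fuzzifies its own input source
    # (thresholds here are made-up illustrative values).
    ports_high = ramp(distinct_ports_per_min, 10, 100)
    syns_high = ramp(unanswered_syns_per_min, 5, 50)
    # Fuzzy rule: IF ports-high AND syns-high THEN port-scan alert,
    # with min() as the fuzzy AND; the alert is "true to a degree".
    return min(ports_high, syns_high)
```

An evaluation engine would aggregate many such rule outputs (e.g., with max as fuzzy OR) and raise alerts whose degree exceeds an operator-chosen level.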
|
The underlying premise of our intrusion detection model is to describe attacks as instances of an ontology, using a semantically rich language like DAML. This ontology captures information about attacks such as the system component affected, the consequences of the attack, the means of attack, and the location of the attacker. Such a target-centric ontology has been developed previously; hence our intrusion detection model consists of two phases. The initial phase uses data mining techniques to analyze data streams that capture process, system, and network states and to detect anomalous behavior, and the second, high-level phase reasons over data that is representative of the anomaly, defined as an instance of the ontology. One way to build the models from these data streams is to use fuzzy clustering, in which a dissimilarity matrix of the objects to be clustered serves as input. The objective function is based on selecting representative objects from the feature set in such a way that the total fuzzy dissimilarity within each cluster is minimized BIB002 BIB001 .
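The fuzzy clustering objective above minimizes the total fuzzy dissimilarity within each cluster. As a concrete sketch, the closely related fuzzy c-means algorithm is shown below, operating on feature vectors rather than a precomputed dissimilarity matrix; the update rules are the standard FCM ones, and the data and parameters are assumptions for illustration.

```python
import numpy as np

def fuzzy_c_means(data, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Fuzzy c-means sketch: u[k, i] is the degree to which point k belongs
    to cluster i; centers minimize the total membership-weighted dissimilarity."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), c))
    u /= u.sum(axis=1, keepdims=True)      # memberships sum to 1 per point
    for _ in range(max_iter):
        w = u ** m                          # fuzzified membership weights
        centers = (w.T @ data) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)               # guard against zero distances
        # Standard FCM update: u_ki = 1 / sum_j (d_ki / d_kj)^(2/(m-1))
        new_u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
        delta, u = np.abs(new_u - u).max(), new_u
        if delta < tol:
            break
    return u, centers
```

For anomaly detection, the membership degrees themselves are useful: a connection record with low membership in every "normal" cluster is a candidate anomaly.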
|
Survey paper on intrusion detection techniques <s> F. Intrusion Detection based on K-Means Clustering and OneR Classification [19] <s> The process of monitoring the events occurring in a computer system or network and analyzing them for sign of intrusions is known as intrusion detection system (IDS). This paper presents two hybrid approaches for modeling IDS. Decision trees (DT) and support vector machines (SVM) are combined as a hierarchical hybrid intelligent system model (DT-SVM) and an ensemble approach combining the base classifiers. The hybrid intrusion detection model combines the individual base classifiers and other hybrid machine learning paradigms to maximize detection accuracy and minimize computational complexity. Empirical results illustrate that the proposed hybrid systems provide more accurate intrusion detection systems. <s> BIB001 </s> Survey paper on intrusion detection techniques <s> F. Intrusion Detection based on K-Means Clustering and OneR Classification [19] <s> Intrusion detection is a necessary step to identify unusual access or attacks to secure internal networks. In general, intrusion detection can be approached by machine learning techniques. In literature, advanced techniques by hybrid learning or ensemble methods have been considered, and related work has shown that they are superior to the models using single machine learning techniques. This paper proposes a hybrid learning model based on the triangle area based nearest neighbors (TANN) in order to detect attacks more effectively. In TANN, the k-means clustering is firstly used to obtain cluster centers corresponding to the attack classes, respectively. Then, the triangle area by two cluster centers with one data from the given dataset is calculated and formed a new feature signature of the data. Finally, the k-NN classifier is used to classify similar attacks based on the new feature represented by triangle areas. 
By using KDD-Cup '99 as the simulation dataset, the experimental results show that TANN can effectively detect intrusion attacks and provide higher accuracy and detection rates, and the lower false alarm rate than three baseline models based on support vector machines, k-NN, and the hybrid centroid-based classification model by combining k-means and k-NN. <s> BIB002
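The TANN feature construction summarized above, where each pair of k-means cluster centers forms a triangle with the data point and the triangle areas become the new signature fed to k-NN, might be sketched as follows. The general-dimension area formula and the function names are ours, not the paper's.

```python
import numpy as np
from itertools import combinations

def triangle_area(p, a, b):
    """Area of the triangle formed by data point p and cluster centers a, b.
    The Gram-determinant form works in any feature dimension."""
    u, v = a - p, b - p
    g = np.dot(u, u) * np.dot(v, v) - np.dot(u, v) ** 2
    return 0.5 * np.sqrt(max(g, 0.0))

def tann_signature(x, centers):
    """TANN feature vector: one triangle area per pair of cluster centers,
    so k centers yield k*(k-1)/2 features for the subsequent k-NN step."""
    return np.array([triangle_area(x, centers[i], centers[j])
                     for i, j in combinations(range(len(centers)), 2)])
```

With one center per attack class (as in TANN), a sample lying near a center collapses the triangles through that center, so the signature encodes which class the sample resembles.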
|
The approach, KM+1R, combines k-means clustering with the OneR classification technique. The KDD Cup '99 set is used as a simulation dataset. The results show that the proposed approach achieves better accuracy and detection rates, particularly in reducing false alarms. Related work and research publications based on hybrid approaches have been widely explored, such as in BIB002 . The detection rate (DR), false positive (FP), false negative (FN), true positive (TP), false alarm (FA), and accuracy for each approach are also investigated. Each approach has distinctive strengths and weaknesses. Some approaches are strong in detection but suffer from high false alarm rates, and vice versa. For instance, in , the authors proposed a new three-level decision tree classification, which focuses on the detection rate. The authors of BIB001 model the IDS using a hierarchical hybrid intelligent system combining a decision tree and a support vector machine (DT-SVM). While DT-SVM produces a high detection rate, it lacks the ability to differentiate attacks from normal behavior. More recently, the approach suggested in BIB002 offers a high detection rate but comes with a high false alarm rate compared to others. In short, a number of hybrid techniques have been proposed in the intrusion detection field and related work, but there is still room to improve the accuracy and detection rates as well as the false alarm rate. The main goal of utilizing the K-Means clustering approach is to split and group the data into normal and attack instances. K-Means clustering partitions the input dataset into k clusters according to initial values, known as seed points, which serve as the clusters' centroids or cluster centers. The mean value of the numerical data contained within each cluster is called its centroid. The K-Means algorithm works as follows: 1. Select initial centers for the K clusters. Repeat steps 2 through 3 until the cluster membership stabilizes. 2.
Generate a new partition by assigning each data point to its closest cluster center. 3. Recompute the cluster centers as the centroids (mean values) of the new partition.
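The two alternating K-Means steps listed above can be sketched in plain Python (a toy example with two clusters, e.g. normal vs. attack instances; degenerate empty clusters are not handled):

```python
def assign(data, centers):
    """Step 2: assign each point to its closest cluster center (squared Euclidean)."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return [min(range(len(centers)), key=lambda k: sq_dist(x, centers[k])) for x in data]

def update(data, labels, k):
    """Step 3: recompute each center as the mean of its assigned members."""
    centers = []
    for c in range(k):
        members = [x for x, l in zip(data, labels) if l == c]
        centers.append(tuple(sum(v) / len(members) for v in zip(*members)))
    return centers

def kmeans(data, centers, max_iter=100):
    """Alternate steps 2 and 3 until the membership (and hence centers) stabilizes."""
    for _ in range(max_iter):
        labels = assign(data, centers)
        new_centers = update(data, labels, len(centers))
        if new_centers == centers:
            break
        centers = new_centers
    return labels, centers
```

In the KM+1R setting, the resulting cluster labels give the coarse normal/attack split that the OneR classifier then refines.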
|
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Neural Networks <s> We introduce a Deep Stochastic IOC RNN Encoder-decoder framework, DESIRE, for the task of future predictions of multiple interacting agents in dynamic scenes. DESIRE effectively predicts future locations of objects in multiple scenes by 1) accounting for the multi-modal nature of the future prediction (i.e., given the same context, the future may vary), 2) foreseeing the potential future outcomes and making a strategic prediction based on them, and 3) reasoning not only from the past motion history, but also from the scene context as well as the interactions among the agents. DESIRE achieves these in a single end-to-end trainable neural network model, while being computationally efficient. The model first obtains a diverse set of hypothetical future prediction samples employing a conditional variational auto-encoder, which are ranked and refined by the following RNN scoring-regression module. Samples are scored by accounting for accumulated future rewards, which enables better long-term strategic decisions similar to IOC frameworks. An RNN scene context fusion module jointly captures past motion histories, the semantic scene context and interactions among multiple agents. A feedback mechanism iterates over the ranking and refinement to further boost the prediction accuracy. We evaluate our model on two publicly available datasets: KITTI and Stanford Drone Dataset. Our experiments show that the proposed model significantly improves the prediction accuracy compared to other baseline methods. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Neural Networks <s> In this paper, we propose an efficient vehicle trajectory prediction framework based on a recurrent neural network.
Basically, the characteristics of a vehicle's trajectory are different from those of regular moving objects, since it is affected by various latent factors including road structure, traffic rules, and driver's intention. Previous state-of-the-art approaches use sophisticated vehicle behavior models describing these factors and derive complex trajectory prediction algorithms, which requires a system designer to conduct intensive model optimization for practical use. Our approach is data-driven and simple to use in that it learns complex behavior of the vehicles from a massive amount of trajectory data through a deep neural network model. The proposed trajectory prediction method employs the recurrent neural network called long short-term memory (LSTM) to analyze the temporal behavior and predict the future coordinates of the surrounding vehicles. The proposed scheme feeds the sequence of vehicles' coordinates obtained from sensor measurements to the LSTM and produces probabilistic information on the future location of the vehicles over an occupancy grid map. The experiments conducted using data collected from highway driving show that the proposed method can produce reasonably good estimates of future trajectories. <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Neural Networks <s> As part of a complete software stack for autonomous driving, NVIDIA has created a neural-network-based system, known as PilotNet, which outputs steering angles given images of the road ahead. PilotNet is trained using road images paired with the steering angles generated by a human driving a data-collection car. It derives the necessary domain knowledge by observing human drivers. This eliminates the need for human engineers to anticipate what is important in an image and foresee all the necessary rules for safe driving.
Road tests demonstrated that PilotNet can successfully perform lane keeping in a wide variety of driving conditions, regardless of whether lane markings are present or not. The goal of the work described here is to explain what PilotNet learns and how it makes its decisions. To this end we developed a method for determining which elements in the road image most influence PilotNet's steering decision. Results show that PilotNet indeed learns to recognize relevant objects on the road. In addition to learning the obvious features such as lane markings, edges of roads, and other cars, PilotNet learns more subtle features that would be hard to anticipate and program by engineers, for example, bushes lining the edge of the road and atypical vehicle classes. <s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Neural Networks <s> To safely and efficiently navigate through complex traffic scenarios, autonomous vehicles need to have the ability to predict the future motion of surrounding vehicles. Multiple interacting agents, the multi-modal nature of driver behavior, and the inherent uncertainty involved in the task make motion prediction of surrounding vehicles a challenging problem. In this paper, we present an LSTM model for interaction-aware motion prediction of surrounding vehicles on freeways. Our model assigns confidence values to maneuvers being performed by vehicles and outputs a multi-modal distribution over future motion based on them. We compare our approach with the prior art for vehicle motion prediction on the publicly available NGSIM US-101 and I-80 datasets. Our results show an improvement in terms of RMS values of prediction error. We also present an ablative analysis of the components of our proposed model and analyze the predictions made by the model in complex traffic scenarios.
<s> BIB004 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Neural Networks <s> Recent algorithmic improvements and hardware breakthroughs resulted in a number of success stories in the field of AI impacting our daily lives. However, despite its ubiquity AI is only just starting to make advances in what may arguably have the largest impact thus far, the nascent field of autonomous driving. In this work we discuss this important topic and address one of the crucial aspects of the emerging area, the problem of predicting the future state of an autonomous vehicle's surroundings, necessary for safe and efficient operations. We introduce a deep learning-based approach that takes into account the current state of traffic actors and produces rasterized representations of each actor's vicinity. The raster images are then used by deep convolutional models to infer future movement of actors while accounting for inherent uncertainty of the prediction task. Extensive experiments on real-world data strongly suggest benefits of the proposed approach. Moreover, following successful tests the system was deployed to a fleet of autonomous vehicles. <s> BIB005 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Neural Networks <s> Predicting trajectories of pedestrians is quintessential for autonomous robots which share the same environment with humans. In order to effectively and safely interact with humans, trajectory prediction needs to be both precise and computationally efficient. In this work, we propose a convolutional neural network (CNN) based human trajectory prediction approach. Unlike more recent LSTM-based models which attend sequentially to each frame, our model supports increased parallelism and effective temporal representation. The proposed compact CNN model is faster than the current approaches yet still yields competitive results.
<s> BIB006 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Neural Networks <s> Autonomous driving presents one of the largest problems that the robotics and artificial intelligence communities are facing at the moment, both in terms of difficulty and potential societal impact. Self-driving vehicles (SDVs) are expected to prevent road accidents and save millions of lives while improving the livelihood and life quality of many more. However, despite large interest and a number of industry players working in the autonomous domain, there still remains more to be done in order to develop a system capable of operating at a level comparable to the best human drivers. One reason for this is the high uncertainty of traffic behavior and the large number of situations that an SDV may encounter on the roads, making it very difficult to create a fully generalizable system. To ensure safe and efficient operations, an autonomous vehicle is required to account for this uncertainty and to anticipate a multitude of possible behaviors of traffic actors in its surroundings. We address this critical problem and present a method to predict multiple possible trajectories of actors while also estimating their probabilities. The method encodes each actor's surrounding context into a raster image, used as input by deep convolutional networks to automatically derive relevant features for the task. Following extensive offline evaluation and comparison to state-of-the-art baselines, the method was successfully tested on SDVs in closed-course tests. <s> BIB007
|
In terms of classifying objects from images, neural networks have seen a steady rise in popularity in recent years, particularly the more elaborate and complex convolutional and recurrent networks from the field of deep learning. Neural networks have the advantage of being able to learn important and robust features given training data that is relevant and in sufficient quantity. Considering that a significant percentage of automotive sensor data consists of images, convolutional neural networks (CNNs) are seeing widespread use in the related literature, for both classification and tracking problems. The advantage of CNNs over more conventional classifiers lies in the convolutional layers, where various filters and feature maps are obtained during training. CNNs are capable of learning object features by means of multiple complex operations and optimizations, and the appropriate choice of network parameters and architecture can ensure that these features contain the most useful correlations that are needed for the robust identification of the targeted objects. While this choice is most often an empirical process, a wide assortment of network configurations exists in the related literature that are aimed at solving classification and tracking problems, with high accuracies claimed by the authors. Where object identification is concerned, in some cases the output of the fully-connected component of the CNN is used, while in other situations the values of the hidden convolutional layers are exploited in conjunction with other filtering and refining methods. Many of the approaches presented in the literature that are based on neural networks use either recurrent neural networks (RNNs), which explicitly take into account a history composed of the past states of the actors, or simpler convolutional neural networks (CNNs).
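As a minimal reminder of the core operation behind the convolutional layers discussed above, a valid-mode 2-D cross-correlation (the building block of a CNN feature map; here single-channel, without padding, stride, bias, or learned weights) can be written as:

```python
def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel.

    Slides the kernel over every position where it fully overlaps the
    image and sums the elementwise products, producing one feature map.
    """
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[r + i][c + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for c in range(out_w)] for r in range(out_h)]

# A [-1, 1] kernel responds to vertical edges: the filtered map peaks
# where intensity jumps, which is the kind of feature a CNN learns.
edge_map = conv2d([[0, 0, 1], [0, 0, 1], [0, 0, 1]], [[-1, 1]])
```

In a trained CNN, many such kernels are learned jointly and stacked with nonlinearities, which is what allows the feature maps to capture increasingly abstract object properties.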
One of the most interesting systems, albeit quite complex, is DESIRE BIB001 , which has the goal of predicting the future locations of multiple interacting agents in dynamic (driving) scenes. It considers the multi-modal nature of the future prediction, i.e. given the same context, the future may vary. It can foresee potential future outcomes and make strategic predictions based on them, and it can reason not only from the past motion history, but also from the scene context as well as the interactions among the agents. DESIRE achieves these goals in a single end-to-end trainable neural network model, while being computationally efficient. Using a deep learning framework, DESIRE can simultaneously: generate diverse hypotheses to reflect a distribution over plausible futures, reason about the interactions between multiple dynamic objects and the scene context, and rank and refine hypotheses with consideration of long-term future rewards. The corresponding optimization problem tries to maximize the potential future reward of the prediction, using the following mechanisms (Figure 9): 1. Diverse sample generation: a conditional variational auto-encoder (CVAE) is used to learn a sampling model that, given observations of past trajectories, produces a diverse set of prediction hypotheses to capture the multimodality of the space of plausible futures. The CVAE introduces a latent variable to account for the ambiguity of the future, which is combined with an RNN that encodes the past trajectories, to generate hypotheses using another RNN. Essentially, a CVAE introduces stochastic latent variables z_i that are learned to encode a diverse set of predictions Y_t given input X_t, making it suitable for modeling one-to-many mappings; 2. IOC-based ranking and refinement: a ranking module determines the most likely hypotheses, while incorporating scene context and interactions.
Since an optimal policy is hard to determine when multiple agents make strategic interdependent choices, the ranking objective is formulated to account for potential future rewards, similar to inverse optimal control (IOC) or inverse reinforcement learning (IRL). This also ensures generalization to new situations further into the future, given limited training data. The module is trained in a multitask framework with a regression-based refinement of the predicted samples. In the testing phase, there are multiple iterations in order to obtain more accurate refinements of the future prediction. Predicting a distant future can be far more challenging than predicting a closer one. Therefore, an agent is trained to choose actions that maximize long-term rewards to achieve its goal. Instead of designing a reward function manually, IOC learns an unknown reward function. The RNN model assigns rewards to each prediction hypothesis and measures its goodness based on the accumulated long-term rewards; 3. Scene context fusion: this module aggregates the interactions between agents and the scene context encoded by a CNN. The fused embedding is channeled to the RNN scoring module and allows the rewards to be produced based on the contextual information. In , a method to predict trajectories of surrounding vehicles is proposed using a long short-term memory (LSTM) network, with the goal of taking into account the relationship between the ego car and surrounding vehicles. The LSTM is a type of recurrent neural network (RNN) capable of learning long-term dependencies. Generally, an RNN has a vanishing gradient problem. An LSTM is able to deal with this through a forget gate, designed to control the information flow between the memory cells in order to store the most relevant previous data. The proposed method considers the ego car and four surrounding vehicles.
It is assumed that drivers generally pay attention to the relative distance and speed with respect to the other cars when they intend to change a lane. Based on this assumption, the relative amounts between the target and the four surrounding vehicles are used as the input of the LSTM network. The feature vector x_t at time step t is defined by twelve features: lateral position of target vehicle, longitudinal position of target vehicle, lateral speed of target vehicle, longitudinal speed of target vehicle, relative distance between target and preceding vehicle, relative speed between target and preceding vehicle, relative distance between target and following vehicle, relative speed between target and following vehicle, relative distance between target and lead vehicle, relative speed between target and lead vehicle, relative distance between target and ego vehicle, and relative speed between target and ego vehicle. The input vector of the LSTM network is sequence data consisting of the x_t's for past time steps. The output is the feature vector at the next time step t + 1. A trajectory is predicted by iteratively using the output result of the network as the input vector for the subsequent time step. In BIB002 an efficient trajectory prediction framework is proposed, which is also based on an LSTM. This approach is data-driven and learns complex behaviors of the vehicles from a massive amount of trajectory data. The LSTM receives the coordinates and velocities of the surrounding vehicles as inputs and produces probabilistic information about the future location of the vehicles over an occupancy grid map (Figure 10 shows the architecture of this system). The experiments show that the proposed method has better prediction accuracy than Kalman filtering. The occupancy grid map is widely adopted for probabilistic localization and mapping. It reflects the uncertainty of the predicted trajectories.
In BIB002 , the occupancy grid map is constructed by partitioning the range under consideration into several grid cells. The grid size is determined such that a grid cell approximately covers a quarter of a lane, in order to capture the movement of a vehicle within its lane as well as the length of a vehicle (Figure 11 ). When predictions are needed for different time ranges (e.g., ∆ = 0.5s, 1s, 2s), the LSTM is trained independently for each time range. The LSTM produces the probability of occupancy for each grid cell. Let (i_x, i_y) be a two-dimensional index for the occupancy grid. Then the softmax layer in the i-th LSTM produces the probability P_o(i_x, i_y) for the grid element (i_x, i_y). Finally, the outputs of the n LSTMs are combined using . The probability of occupancy P_o(i_x, i_y) summarizes the prediction of the future trajectory for all n vehicles in the single map. Alternatively, the same LSTM architecture can be used to directly predict the coordinates of a vehicle as a regression task. Instead of using the softmax layer to compute probabilities, the system can produce two real coordinate values x and y. In BIB004 , another LSTM model is described for interaction-aware motion prediction. Confidence values are assigned to the maneuvers that are performed by vehicles. Based on them, a multi-modal distribution over future motions is computed. More specifically, the model assigns probabilities for different maneuver classes, and outputs maneuver-specific predictions for each maneuver class. The LSTM uses as input the track histories of the ego vehicle and its surrounding vehicles, and the lane structure of the freeway. It assigns confidence values to six maneuver classes and predicts a multi-modal distribution of the possibilities of future motion. Taking into account the time constraints of a real-time system, BIB005 uses simple feed-forward CNN architectures for the prediction task.
Instead of manually defining features that represent the context for each actor, the authors rasterize the scene for each actor into an RGB image. Then, they train the CNN using these rasterized images as inputs to predict the actors' trajectories, where the network automatically infers the relevant features. Optionally, the model can also take as input a current state of the actor represented as a vector containing velocity, acceleration, and heading change rate (position and heading are not required because they are implicitly included in the raster image), and concatenate the resulting vector with the flattened output of the base CNN. Finally, the combined features are passed through a fully connected layer. A similar approach is used in BIB007 , which presents a method to predict multiple possible trajectories of actors while also estimating their probabilities. It encodes each actor's surrounding context into a raster image, used as input by a deep convolutional network to automatically derive the relevant features for the task. Given the raster image and the state estimates of actors at a time step, the CNN is used to predict a multitude of possible future state sequences, as well as the probability of each sequence. As part of a complete software stack for autonomous driving, NVIDIA created a system based on a CNN, called PilotNet BIB003 , which outputs steering angles given images of the road ahead. This system is trained using road images paired with the steering angles generated by a human driving a car that collects data. The authors developed a method for determining which elements in the road image influence its steering decision the most. 
It seems that in addition to learning the obvious features such as lane markings, edges of roads and other cars, the system learns more subtle features that would be hard to anticipate and program by engineers, e.g., bushes lining the edge of the road and atypical vehicle classes, while ignoring structures in the camera images that are not relevant to driving. This capability is derived from data without the need of hand-crafted rules. In , the authors propose a learnable end-to-end model with a deep neural network that reasons about both high level behavior and long-term trajectories. Inspired by how humans perform this task, the network exploits motion and prior knowledge about the road topology in the form of maps containing semantic elements such as lanes, intersections and traffic lights. The so-called IntentNet is a fully-convolutional neural network that outputs three types of variables in a single forward pass corresponding to: detection scores for vehicle and background classes, high level action probabilities corresponding to discrete intentions, and bounding box regressions in the current and future time steps to represent the intended trajectory. This design enables the system to propagate uncertainty through the different components and is reported to be computationally efficient. A CNN is also used in BIB006 for an end-to-end trajectory prediction model which is competitive with more complicated state-of-the-art LSTM-based techniques which require more contextual information. Highly parallelizable convolutional layers are employed to handle temporal dependencies. The CNN is a simple sequence-to-sequence architecture. Trajectory histories are used as input and embedded to a fixed size through a fully-connected layer. The convolutional layers are stacked and used to enforce temporal consistency. 
Finally, the features from the final convolutional layer are concatenated and passed through a fully-connected layer to generate all predicted positions at once. The authors found that predicting one time step at a time leads to worse results than predicting all future times at once. A possible reason is that the error of the current prediction is propagated forward in time in a highly correlated fashion.
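Returning to the occupancy-grid output of BIB002 discussed earlier, the per-cell softmax and the combination of the n per-vehicle maps can be illustrated with a small sketch. The max-per-cell aggregation rule and the 1x2 toy grid are assumptions for illustration only, since the exact combination operator is elided in the text:

```python
import math

def softmax(scores):
    """Convert raw per-cell scores into probabilities (numerically stable)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def combined_occupancy(per_vehicle_scores, rows, cols):
    """Combine n per-vehicle score grids into one occupancy map.

    Each vehicle's scores are turned into a probability map via softmax;
    the maps are then aggregated per cell (here with max, an assumed rule)
    to summarize the predictions for all vehicles in a single map.
    """
    maps = [softmax(s) for s in per_vehicle_scores]
    return [[max(m[r * cols + c] for m in maps) for c in range(cols)]
            for r in range(rows)]

# Two vehicles, a 1x2 grid: the second vehicle's scores favor cell (0, 0).
grid = combined_occupancy([[0.0, 0.0], [1.0, 0.0]], 1, 2)
```

Each cell of the resulting map then expresses how likely it is that some vehicle occupies that portion of the road at the prediction horizon.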
|
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Learning Features from Convolutional Layers <s> Visual object tracking is a challenging computer vision problem with numerous real-world applications. This paper investigates the impact of convolutional features for the visual tracking problem. We propose to use activations from the convolutional layer of a CNN in discriminative correlation filter based tracking frameworks. These activations have several advantages compared to the standard deep features (fully connected layers). Firstly, they mitigate the need for task-specific fine-tuning. Secondly, they contain structural information crucial for the tracking problem. Lastly, these activations have low dimensionality. We perform comprehensive experiments on three benchmark datasets: OTB, ALOV300++ and the recently introduced VOT2015. Surprisingly, in contrast to image classification, our results suggest that activations from the first layer provide superior tracking performance compared to the deeper layers. Our results further show that the convolutional features provide improved results compared to standard hand-crafted features. Finally, results comparable to state-of-the-art trackers are obtained on all three benchmark datasets. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Learning Features from Convolutional Layers <s> Discriminative Correlation Filters (DCF) have demonstrated excellent performance for visual object tracking. The key to their success is the ability to efficiently exploit available negative data ... <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Learning Features from Convolutional Layers <s> We propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN).
Our algorithm pretrains a CNN using a large set of videos with tracking ground truths to obtain a generic target representation. Our network is composed of shared layers and multiple branches of domain-specific layers, where domains correspond to individual training sequences and each branch is responsible for binary classification to identify the target in each domain. We train each domain in the network iteratively to obtain generic target representations in the shared layers. When tracking a target in a new sequence, we construct a new network by combining the shared layers in the pretrained CNN with a new binary classification layer, which is updated online. Online tracking is performed by evaluating the candidate windows randomly sampled around the previous target state. The proposed algorithm demonstrates outstanding performance on existing tracking benchmarks. <s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Learning Features from Convolutional Layers <s> In this paper, we propose a novel online multi-object tracking (MOT) framework, which exploits features from multiple convolutional layers. In particular, we use the top layer to formulate a category-level classifier and use a lower layer to identify instances from one category, under the intuition that lower layers contain much more detail. To avoid the computational cost caused by online fine-tuning, we train our appearance model with an offline learning strategy using the historical appearance reserved for each object. We evaluate the proposed tracking framework on a popular MOT benchmark to demonstrate the effectiveness and the state-of-the-art performance of our tracker. <s> BIB004 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Learning Features from Convolutional Layers <s> Discriminative correlation filters (DCFs) have been shown to deliver superior performance in visual tracking.
They only need a small set of training samples from the initial frame to generate an appearance model. However, existing DCFs learn the filters separately from feature extraction, and update these filters using a moving average operation with an empirical weight. These DCF trackers hardly benefit from end-to-end training. In this paper, we propose the CREST algorithm to reformulate DCFs as a one-layer convolutional neural network. Our method integrates feature extraction, response map generation as well as model update into the neural networks for an end-to-end training. To reduce model degradation during online update, we apply residual learning to take appearance changes into account. Extensive experiments on the benchmark datasets demonstrate that our CREST tracker performs favorably against state-of-the-art trackers. <s> BIB005 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Learning Features from Convolutional Layers <s> In recent years, Discriminative Correlation Filter (DCF) based methods have significantly advanced the state-of-the-art in tracking. However, in the pursuit of ever increasing tracking performance, their characteristic speed and real-time capability have gradually faded. Further, the increasingly complex models, with massive number of trainable parameters, have introduced the risk of severe over-fitting. In this work, we tackle the key causes behind the problems of computational complexity and over-fitting, with the aim of simultaneously improving both speed and performance. We revisit the core DCF formulation and introduce: (i) a factorized convolution operator, which drastically reduces the number of parameters in the model, (ii) a compact generative model of the training sample distribution, that significantly reduces memory and time complexity, while providing better diversity of samples, (iii) a conservative model update strategy with improved robustness and reduced complexity.
We perform comprehensive experiments on four benchmarks: VOT2016, UAV123, OTB-2015, and TempleColor. When using expensive deep features, our tracker provides a 20-fold speedup and achieves a 13.0% relative gain in Expected Average Overlap compared to the top ranked method [12] in the VOT2016 challenge. Moreover, our fast variant, using hand-crafted features, operates at 60 Hz on a single CPU, while obtaining 65.0% AUC on OTB-2015. <s> BIB006 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Learning Features from Convolutional Layers <s> Convolutional neural networks (CNNs) have drawn increasing interest in visual tracking owing to their power in feature extraction. Most existing CNN-based trackers treat tracking as a classification problem. However, these trackers are sensitive to similar distractors because their CNN models mainly focus on inter-class classification. To address this problem, we use self-structure information of the object to distinguish it from distractors. Specifically, we utilize a recurrent neural network (RNN) to model object structure, and incorporate it into the CNN to improve its robustness to similar distractors. Considering that convolutional layers in different levels characterize the object from different perspectives, we use multiple RNNs to model object structure in different levels respectively. Extensive experiments on three benchmarks, OTB100, TC-128 and VOT2015, show that the proposed algorithm outperforms other methods. Code is released at www.dabi.temple.edu/hbling/code/SANet/SANet.html. <s> BIB007 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Learning Features from Convolutional Layers <s> The robustness of the visual trackers based on the correlation maps generated from convolutional neural networks can be substantially improved if these maps are employed in conjunction with a particle filter.
In this article, we present a particle filter that estimates the target size as well as the target position and that utilizes a new adaptive correlation filter to account for potential errors in the model generation. Thus, instead of generating one model which is highly dependent on the estimated target position and size, we generate a variable number of target models based on high likelihood particles, which increases in challenging situations and decreases in less complex scenarios. Experimental results on the Visual Tracker Benchmark v1.0 demonstrate that our proposed framework significantly outperforms state-of-the-art methods. <s> BIB008 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Learning Features from Convolutional Layers <s> In this paper we present a new approach for efficient regression based object tracking which we refer to as Deep-LK. Our approach is closely related to the Generic Object Tracking Using Regression Networks (GOTURN) framework of Held et al. We make the following contributions. First, we demonstrate that there is a theoretical relationship between siamese regression networks like GOTURN and the classical Inverse-Compositional Lucas & Kanade (IC-LK) algorithm. Further, we demonstrate that unlike GOTURN IC-LK adapts its regressor to the appearance of the currently tracked frame. We argue that this missing property in GOTURN can be attributed to its poor performance on unseen objects and/or viewpoints. Second, we propose a novel framework for object tracking - which we refer to as Deep-LK - that is inspired by the IC-LK framework. Finally, we show impressive results demonstrating that Deep-LK substantially outperforms GOTURN. Additionally, we demonstrate comparable tracking performance to current state of the art deep-trackers whilst being an order of magnitude (i.e. 100 FPS) computationally efficient. 
<s> BIB009 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Learning Features from Convolutional Layers <s> We present a system for keyframe-based dense camera tracking and depth map estimation that is entirely learned. For tracking, we estimate small pose increments between the current camera image and a synthetic viewpoint. This significantly simplifies the learning problem and alleviates the dataset bias for camera motions. Further, we show that generating a large number of pose hypotheses leads to more accurate predictions. For mapping, we accumulate information in a cost volume centered at the current depth estimate. The mapping network then combines the cost volume and the keyframe image to update the depth prediction, thereby effectively making use of depth measurements and image-based priors. Our approach yields state-of-the-art results with few images and is robust with respect to noisy camera poses. We demonstrate that the performance of our 6 DOF tracking competes with RGB-D tracking algorithms. We compare favorably against strong classic and deep learning powered dense depth algorithms. <s> BIB010 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Learning Features from Convolutional Layers <s> In recent years, regression trackers have drawn increasing attention in the visual-object tracking community due to their favorable performance and easy implementation. The tracker algorithms directly learn mapping from dense samples around the target object to Gaussian-like soft labels. However, in many real applications, when applied to test data, the extreme imbalanced distribution of training samples usually hinders the robustness and accuracy of regression trackers. In this paper, we propose a novel effective distractor-aware loss function to balance this issue by highlighting the significant domain and by severely penalizing the pure background. 
In addition, we introduce a full differentiable hierarchy-normalized concatenation connection to exploit abstractions across multiple convolutional layers. Extensive experiments were conducted on five challenging benchmark-tracking datasets, that is, OTB-13, OTB-15, TC-128, UAV-123, and VOT17. The experimental results are promising and show that the proposed tracker performs much better than nearly all the compared state-of-the-art approaches. <s> BIB011
|
Many results from the related literature systematically demonstrate that convolutional features are more useful for tracking than other explicitly-computed ones (Haar, FHOG, color labeling etc.). An example in this sense is BIB004 , which handles MOT using combinations of values from convolutional layers located at multiple levels. The method is based on the notion that lower-level layers account for a larger portion of the input image and therefore contain more details from the identified objects, making them useful, for instance, for handling occlusion. Conversely, top-level layers are more representative of semantics and are useful in distinguishing objects from the background. The proposed CNN architecture uses dual fully-connected components, for higher and lower-level features, which handle instance-level and category-level classification, respectively (Figure 1). The proper identification of objects, particularly where occlusion events occur, involves the generation of appearance models of the tracked objects, which can result from the appropriate processing of the features learned within a CNN. On a similar note, it has been noted that the output of the fully-connected component of a CNN is not suitable for handling infrared images. Attempts to directly transfer CNNs pretrained with traditional images for use with infrared sensor data are unsuccessful, since only the information from the convolutional layers seems to be useful for this purpose. Furthermore, the layer data itself requires some level of adaptation to the specifics of infrared images. Typically, infrared data offers much less spatial information than visual images, and is much better suited, for example, to depth sensing for gathering distances to objects, albeit at a significantly lower resolution compared to regular image acquisition. 
As such, convolutional layers from infrared images are used in conjunction with correlation filters to generate a set of weak trackers that provide response maps for the targets' locations. The weak trackers are then combined into ensembles which form stronger response maps with a much greater tracking accuracy. The response map of an image is, in general terms, an intensity image in which higher intensities indicate a change or a desired feature/shape/structure in the initial image, when the latter is exposed to an operator or correlation filter of some kind. By matching or fusing responses from multiple images within a video sequence, one can identify similar objects (i.e. the same pedestrian) across the sequence and subsequently construct their trajectories. The potential of correlation filters is also exploitable for regular images. They can boost the information extracted from the activations of CNN layers: for instance, in BIB001 the authors find that by applying the appropriate filters to information drawn from shallow CNN layers, a level of robustness similar to using deeper layers, or a combination of multiple layers, can be achieved. In BIB008 , the authors also note the added robustness obtainable by post-filtering convolutional layers. By using particle and correlation filters, basic geometric and spatial features can be deduced for the tracked objects, which, together with a means of adaptively generating variable models, can handle both simple and complex scenes. An alternative approach can be found in BIB005 , where discriminative correlation filters are used to generate an appearance model from a small number of samples. The overall approach is similar, involving feature extraction, post-processing and the generation of response maps for carrying out better model updates within the neural network. 
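As an illustration of the mechanics involved, a minimal single-channel discriminative correlation filter can be learned in closed form in the Fourier domain and then produce exactly this kind of response map. This is a generic sketch, not the formulation of any cited tracker; the Gaussian label width and regularization weight are illustrative choices:

```python
import numpy as np

def train_dcf(patch, sigma=2.0, lam=1e-2):
    """Learn a correlation filter whose response to `patch` is a
    Gaussian peak centered on the target (single feature channel)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Desired Gaussian-shaped response, peaking at the patch center.
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    G, F = np.fft.fft2(g), np.fft.fft2(patch)
    # Closed-form ridge-regression solution in the frequency domain.
    return G * np.conj(F) / (F * np.conj(F) + lam)

def response_map(filt, patch):
    """Correlate the learned filter with a (new) patch."""
    return np.real(np.fft.ifft2(filt * np.fft.fft2(patch)))

rng = np.random.default_rng(0)
frame = rng.standard_normal((32, 32))       # stand-in for one feature channel
resp = response_map(train_dcf(frame), frame)
peak = np.unravel_index(np.argmax(resp), resp.shape)
print(peak)  # (16, 16): the response peaks at the target center
```

Multi-channel DCF trackers repeat this per feature channel of a convolutional layer and sum the per-channel responses; the peak of the combined map gives the estimated target displacement. Trackers such as the one discussed above learn such filters jointly with the features rather than in this fixed closed form.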
Contrary to other similar results, the correlation filters used throughout the system are learned within a one-layer CNN, which can eventually be used to make predictions based on the response maps. Furthermore, residual learning is employed in order to avoid model degradation, instead of the much more frequently-used method of stacking multiple layers. Other tracking methods learn a similar kind of mapping from samples in the vicinity of the target object using deep regression BIB009 , BIB011 , or by estimating and learning depth information BIB010 . The authors of BIB002 note that correlation filters have limitations imposed by the feature map resolution and propose a novel solution where features are learned in a continuous domain, using an appropriate interpolation model. This allows for the more effective, resolution-independent compositing of multiple feature maps, resulting in superior classification results. Methods based on discriminative correlation filters are notoriously prone to excessive complexity and overfitting, and various means are available for optimizing the more traditional methods. The most noteworthy in this sense is BIB006 , which employs efficient convolution operators, a training sample distribution scheme and an optimal update strategy in an attempt to boost performance and reduce the number of parameters. A promising result which demonstrates significant robustness and accuracy is BIB003 , which uses a CNN whose first set of layers is shared, as in a standard CNN; at some point, however, the layers branch into multiple domain-specific ones. This approach has the benefit of splitting the tracking problem into subproblems which are solved separately in their respective layer sets. Each domain has its own training sequences and can be customized to address a specific issue (such as distinguishing a target with specific shape parameters from the background). A similar concept, i.e. 
a network with components distinctly trained for a specific problem, can be found in BIB007 . In this case, multiple recurrent layers are used to model different structural properties of the tracked objects, which are incorporated into a parent CNN with the same purpose of improving accuracy and robustness. The RNN layers generate what the authors refer to as "structurally-aware feature maps" which, when combined with pooled versions of their non-structurally aware counterparts, significantly improve the classification results.
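As a closing note on model updates in this subsection: the classical DCF update that the residual learning of BIB005 and the conservative strategy of BIB006 seek to improve upon is, per the BIB005 abstract, "a moving average operation with an empirical weight" — a one-line interpolation between the previous model and the newly estimated one (the learning rate below is an illustrative value):

```python
def ema_update(model, new_model, lr=0.02):
    """Classical DCF model update: linear moving average with an
    empirical weight. The old model dominates, so appearance changes
    leak in slowly -- but so do accumulated errors (model drift)."""
    return [(1 - lr) * m + lr * n for m, n in zip(model, new_model)]

model = [1.0, 0.0]                      # initial appearance model
for _ in range(10):                     # ten frames of a changed appearance
    model = ema_update(model, [0.0, 1.0])
print(model)  # still dominated by the initial appearance
```

Because every frame is blended in with the same small weight, occasional bad detections slowly contaminate the model — the drift that residual and conservative update schemes are designed to contain.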
|
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> High-Level Features, Occlusion Handling and Feature Fusion <s> In recent years, several methods have been developed to utilize hierarchical features learned from a deep convolutional neural network (CNN) for visual tracking. However, as features from a certain CNN layer characterize an object of interest from only one aspect or one level, the performance of such trackers trained with features from one layer (usually the second to last layer) can be further improved. In this paper, we propose a novel CNN based tracking framework, which takes full advantage of features from different CNN layers and uses an adaptive Hedge method to hedge several CNN based trackers into a single stronger one. Extensive experiments on a benchmark dataset of 100 challenging image sequences demonstrate the effectiveness of the proposed algorithm compared to several state-of-theart trackers. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> High-Level Features, Occlusion Handling and Feature Fusion <s> Simple Online and Realtime Tracking (SORT) is a pragmatic approach to multiple object tracking with a focus on simple, effective algorithms. In this paper, we integrate appearance information to improve the performance of SORT. Due to this extension we are able to track objects through longer periods of occlusions, effectively reducing the number of identity switches. In spirit of the original framework we place much of the computational complexity into an offline pre-training stage where we learn a deep association metric on a large-scale person re-identification dataset. During online application, we establish measurement-to-track associations using nearest neighbor queries in visual appearance space. 
Experimental evaluation shows that our extensions reduce the number of identity switches by 45%, achieving overall competitive performance at high frame rates. <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> High-Level Features, Occlusion Handling and Feature Fusion <s> In this paper, we propose a CNN-based framework for online MOT. This framework utilizes the merits of single object trackers in adapting appearance models and searching for target in the next frame. Simply applying single object tracker for MOT will encounter the problem in computational efficiency and drifted results caused by occlusion. Our framework achieves computational efficiency by sharing features and using ROI-Pooling to obtain individual features for each target. Some online learned target-specific CNN layers are used for adapting the appearance model for each target. In the framework, we introduce spatial-temporal attention mechanism (STAM) to handle the drift caused by occlusion and interaction among targets. The visibility map of the target is learned and used for inferring the spatial attention map. The spatial attention map is then applied to weight the features. Besides, the occlusion status can be estimated from the visibility map, which controls the online updating process via weighted loss on training samples with different occlusion statuses in different frames. It can be considered as temporal attention mechanism. The proposed algorithm achieves 34.3% and 46.0% in MOTA on challenging MOT15 and MOT16 benchmark dataset respectively. <s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> High-Level Features, Occlusion Handling and Feature Fusion <s> Recently deep neural networks have been widely employed to deal with the visual tracking problem. 
In this work, we present a new deep architecture which incorporates the temporal and spatial information to boost the tracking performance. Our deep architecture contains three networks, a Feature Net, a Temporal Net, and a Spatial Net. The Feature Net extracts general feature representations of the target. With these feature representations, the Temporal Net encodes the trajectory of the target and directly learns temporal correspondences to estimate the object state from a global perspective. Based on the learning results of the Temporal Net, the Spatial Net further refines the object tracking state using local spatial object information. Extensive experiments on four of the largest tracking benchmarks, including VOT2014, VOT2016, OTB50, and OTB100, demonstrate competing performance of the proposed tracker over a number of state-of-the-art algorithms. <s> BIB004 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> High-Level Features, Occlusion Handling and Feature Fusion <s> Most of the existing tracking methods based on CNN(convolutional neural networks) are too slow for real-time application despite the excellent tracking precision compared with the traditional ones. In this paper, a fast dynamic visual tracking algorithm combining CNN based MDNet(Multi-Domain Network) and RoIAlign was developed. The major problem of MDNet also lies in the time efficiency. Considering the computational complexity of MDNet is mainly caused by the large amount of convolution operations and fine-tuning of the network during tracking, a RoIPool layer which could conduct the convolution over the whole image instead of each RoI is added to accelerate the convolution and a new strategy of fine-tuning the fully-connected layers is used to accelerate the update. With RoIPool employed, the computation speed has been increased but the tracking precision has dropped simultaneously. 
RoIPool could lose some positioning precision because it can not handle locations represented by floating numbers. So RoIAlign, instead of RoIPool, which can process floating numbers of locations by bilinear interpolation has been added to the network. The results show the target localization precision has been improved and it hardly increases the computational cost. These strategies can accelerate the processing and make it 7x faster than MDNet with very low impact on precision and it can run at around 7 fps. The proposed algorithm has been evaluated on two benchmarks: OTB100 and VOT2016, on which high precision and speed have been obtained. The influence of the network structure and training data are also discussed with experiments. <s> BIB005 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> High-Level Features, Occlusion Handling and Feature Fusion <s> Multi-People Tracking in an open-world setting requires a special effort in precise detection. Moreover, temporal continuity in the detection phase gains more importance when scene cluttering introduces the challenging problems of occluded targets. For the purpose, we propose a deep network architecture that jointly extracts people body parts and associates them across short temporal spans. Our model explicitly deals with occluded body parts, by hallucinating plausible solutions of not visible joints. We propose a new end-to-end architecture composed by four branches (visible heatmaps, occluded heatmaps, part affinity fields and temporal affinity fields) fed by a time linker feature extractor. To overcome the lack of surveillance data with tracking, body part and occlusion annotations we created the vastest Computer Graphics dataset for people tracking in urban scenarios by exploiting a photorealistic videogame. It is up to now the vastest dataset (about 500.000 frames, almost 10 million body poses) of human body parts for people tracking in urban scenarios. 
Our architecture trained on virtual data exhibits good generalization capabilities also on public real tracking benchmarks, when image resolution and sharpness are high enough, producing reliable tracklets useful for further batch data association or re-id modules. <s> BIB006 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> High-Level Features, Occlusion Handling and Feature Fusion <s> In the field of generic object tracking numerous attempts have been made to exploit deep features. Despite all expectations, deep trackers are yet to reach an outstanding level of performance compared to methods solely based on handcrafted features. In this paper, we investigate this key issue and propose an approach to unlock the true potential of deep features for tracking. We systematically study the characteristics of both deep and shallow features, and their relation to tracking accuracy and robustness. We identify the limited data and low spatial resolution as the main challenges, and propose strategies to counter these issues when integrating deep features for tracking. Furthermore, we propose a novel adaptive fusion approach that leverages the complementary properties of deep and shallow features to improve both robustness and accuracy. Extensive experiments are performed on four challenging datasets. On VOT2017, our approach significantly outperforms the top performing tracker from the challenge with a relative gain of 17% in EAO. <s> BIB007 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> High-Level Features, Occlusion Handling and Feature Fusion <s> Multi-Target Multi-Camera Tracking (MTMCT) tracks many people through video taken from several cameras. Person Re-Identification (Re-ID) retrieves from a gallery images of people similar to a person query image. We learn good features for both MTMCT and Re-ID with a convolutional neural network. 
Our contributions include an adaptive weighted triplet loss for training and a new technique for hard-identity mining. Our method outperforms the state of the art both on the DukeMTMC benchmarks for tracking, and on the Market-1501 and DukeMTMC-ReID benchmarks for Re-ID. We examine the correlation between good Re-ID and good MTMCT scores, and perform ablation studies to elucidate the contributions of the main components of our system. Code is available1. <s> BIB008 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> High-Level Features, Occlusion Handling and Feature Fusion <s> Autonomous driving presents one of the largest problems that the robotics and artificial intelligence communities are facing at the moment, both in terms of difficulty and potential societal impact. Self-driving vehicles (SDVs) are expected to prevent road accidents and save millions of lives while improving the livelihood and life quality of many more. However, despite large interest and a number of industry players working in the autonomous domain, there still remains more to be done in order to develop a system capable of operating at a level comparable to best human drivers. One reason for this is high uncertainty of traffic behavior and large number of situations that an SDV may encounter on the roads, making it very difficult to create a fully generalizable system. To ensure safe and efficient operations, an autonomous vehicle is required to account for this uncertainty and to anticipate a multitude of possible behaviors of traffic actors in its surrounding. We address this critical problem and present a method to predict multiple possible trajectories of actors while also estimating their probabilities. The method encodes each actor’s surrounding context into a raster image, used as input by deep convolutional networks to automatically derive relevant features for the task. 
Following extensive offline evaluation and comparison to state-of-the-art baselines, the method was successfully tested on SDVs in closed-course tests. <s> BIB009 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> High-Level Features, Occlusion Handling and Feature Fusion <s> Recent progresses in model-free single object tracking (SOT) algorithms have largely inspired applying SOT to multi-object tracking (MOT) to improve the robustness as well as relieving dependency on external detector. However, SOT algorithms are generally designed for distinguishing a target from its environment, and hence meet problems when a target is spatially mixed with similar objects as observed frequently in MOT. To address this issue, in this paper we propose an instance-aware tracker to integrate SOT techniques for MOT by encoding awareness both within and between target models. In particular, we construct each target model by fusing information for distinguishing target both from background and other instances (tracking targets). To conserve uniqueness of all target models, our instance-aware tracker considers response maps from all target models and assigns spatial locations exclusively to optimize the overall accuracy. Another contribution we make is a dynamic model refreshing strategy learned by a convolutional neural network. This strategy helps to eliminate initialization noise as well as to adapt to variation of target size and appearance. To show the effectiveness of the proposed approach, it is evaluated on the popular MOT15 and MOT16 challenge benchmarks. On both benchmarks, our approach achieves the best overall performances in comparison with published results. <s> BIB010
|
Appearance models offer high-level features which are also used to account for occlusion in much simpler and more efficient systems, such as in BIB002 , where computed appearance descriptors form an appearance space. With properly-determined metrics, observations having a similar appearance are identified using a nearest-neighbor-based approach. Switching from image space to an appearance space seems to substantially account for occlusions, reducing their negative impact at a negligible cost in terms of performance. A possible alternative to appearance-based classification is the use of template-based metrics. Such an approach uses a reference region of interest (ROI) drawn from one or multiple frames and attempts to match it in subsequent frames using an appropriately-constructed metric. Template-based methods often work for partial detections, thereby accounting for occlusion and/or noise, considering that the template need not be perfectly or completely matched for a successful detection to occur. One example of a template-based method involves three CNNs: one for template generation, one dedicated to region searching and one for handling background areas.

Figure 2: A CNN-based model that uses ROI-pooling and shared features for target classification BIB003

The method is somewhat similar to what could be achieved by a generative adversarial network (GAN), since the "searcher" network attempts to fit multiple subimages within the positive detections provided by the template component while simultaneously attempting to maximize the distance to the negative background component. The candidate subimages generated by the three components are fed through a loss function which is designed to favor candidates which are closer to template regions than to background ones. While, performance-wise, such an approach is claimed to provide impressive framerates, care should be taken when using template or reference-based methods. 
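A minimal sketch of the matching step underlying such template-based methods is an exhaustive sum-of-squared-differences search for a reference ROI; the tiny integer patches below stand in for real image data:

```python
def ssd(a, b):
    """Sum of squared differences between two equal-size patches."""
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def crop(img, top, left, h, w):
    return [row[left:left + w] for row in img[top:top + h]]

def match_template(img, tmpl):
    """Slide the template over the image; return the best top-left corner."""
    th, tw = len(tmpl), len(tmpl[0])
    best, best_pos = float("inf"), None
    for top in range(len(img) - th + 1):
        for left in range(len(img[0]) - tw + 1):
            d = ssd(crop(img, top, left, th, tw), tmpl)
            if d < best:
                best, best_pos = d, (top, left)
    return best_pos

img = [[0, 0, 0, 0],
       [0, 5, 6, 0],
       [0, 7, 8, 0],
       [0, 0, 0, 0]]
tmpl = [[5, 6],
        [7, 8]]
print(match_template(img, tmpl))  # (1, 1)
```

Note that a uniform brightness change between the frame and the template inflates the distance even at the true location, which is one reason raw template metrics degrade when lighting conditions change, as discussed next.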
These methods are generally suited to situations where there is no significant variation in the overall tone of the frames. They have a much higher failure rate when, for instance, the lighting conditions change during tracking, such as when the tracked object moves from a brightly-lit to a shaded area. An improvement on the use of appearance and shared tracking information is provided by BIB003 in the form of a CNN-based single-object tracker which generates and adapts the appearance models for multi-frame detection (Figure 2). The use of pooling layers and shared features accounts for drift effects caused by occlusion and inter-object dependency, as part of a spatial and temporal attention mechanism which is responsible for dynamically discriminating between training candidates based on the level of occlusion. As such, training samples are weighted based on their occlusion status, which optimizes the training process both in terms of the resulting classification accuracy and in terms of performance. Generally speaking, pooling operations have two important effects: on the one hand, the region of the input summarized by each cell of the feature map increases, since a pooled feature map contains information from a larger area of the originating image; on the other hand, the reduced size of a pooled map means fewer computational resources are required to process it, which positively impacts performance. The major downside of pooling is that spatial positioning is further diluted with each additional layer. Multiple related papers involve so-called "ROI pooling", which commonly refers to a pooling operation being applied to the bounding box of an identified object, in the hope that the reduced representation will gain robustness to noise and to variations of the object's geometry across multiple frames. ROI pooling is successfully used by BIB005 to improve the performance of their CNN-based classifier. 
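The two pooling effects just described are visible even in a bare 2×2 max-pool, sketched here in pure Python on a toy feature map:

```python
def max_pool_2x2(fmap):
    """2x2 max pooling with stride 2: each output cell summarizes a
    2x2 input region, so the map shrinks 4x and exact positions blur."""
    h, w = len(fmap), len(fmap[0])
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

fmap = [[0, 1, 0, 0],
        [2, 0, 0, 0],
        [0, 0, 0, 3],
        [0, 0, 4, 0]]
print(max_pool_2x2(fmap))  # [[2, 0], [0, 4]]
```

Each output cell covers a 2×2 input region, so the map is four times smaller and cheaper to process; at the same time, the exact offsets of the two activation peaks inside their blocks are lost — the spatial dilution noted above.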
The authors observe that positioning cues are adversely affected by pooling, to which a potential solution is to reposition the misaligned ROIs via bilinear interpolation. This reinterpretation of pooling is referred to as "ROI align". The gain in performance is significant, and the authors demonstrate that the positioning of the ROIs is stabilized. Tracking stabilization is fundamental in automotive applications, where effects such as jittering, camera shaking and spatial/temporal noise commonly occur. In terms of ensuring ROI stability and accuracy, occlusion plays an important role. Some authors handle this topic extensively, such as BIB006 , which proposes a deep neural network for tracking occluded body parts by processing features extracted from a VGG19 network. Some authors use different interpretations of the feature concept, adapted to the specifics of autonomous driving. BIB009 create custom feature maps by encoding various properties of the detections (bounding boxes, positions, velocities, accelerations etc.) in raster images. These images are sent through a CNN which generates raster features that the authors demonstrate to provide more reliable correlations and more accurate trajectories than features derived directly from raw data. The idea of tracking robustness and stability can sometimes be addressed using image and object fusion. The related methods are referred to as "instance-aware", meaning that a targeted object is matched across the image space and across multiple frames by fusing identified objects with similar characteristics. BIB010 proposes a fusion-based method that uses single-object tracking to identify multiple candidate instances and subsequently builds target models for potential objects by fusing information from detections and background cues. The models are updated using a CNN, which ensures robustness to noise, scaling and minor variations of the targets' appearance. 
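Returning to the ROI-align step described earlier in this passage: its core is bilinear interpolation at fractional coordinates instead of snapping to integer cells. The routine below is a minimal sketch of the sampling step only, not the full operator, which additionally averages several such samples per output bin:

```python
def bilinear(fmap, y, x):
    """Sample a feature map at a floating-point location by blending
    the four surrounding cells with bilinear weights."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(fmap) - 1)
    x1 = min(x0 + 1, len(fmap[0]) - 1)
    dy, dx = y - y0, x - x0
    return (fmap[y0][x0] * (1 - dy) * (1 - dx) +
            fmap[y0][x1] * (1 - dy) * dx +
            fmap[y1][x0] * dy * (1 - dx) +
            fmap[y1][x1] * dy * dx)

fmap = [[0.0, 4.0],
        [8.0, 12.0]]
print(bilinear(fmap, 0.5, 0.5))  # 6.0: exact blend of the four cells
print(bilinear(fmap, 0.0, 0.0))  # 0.0: integer locations are unchanged
```

An ROI boundary at, say, x = 3.3 is now blended proportionally between the neighboring cells rather than snapped to cell 3, which is what stabilizes the positioning.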
As with many other related approaches, an online implementation offloads most of the processing to an external server, leaving the vehicle's embedded device to carry out only minor and frequently-needed tasks. Since quick reactions of the system are crucial for proper and safe vehicle operation, performance and a rapid response of the underlying software are essential, which is why the online approach is popular in this field. Also in the context of ensuring robustness and stability, some authors apply fusion techniques to information extracted from CNN layers. It has previously been mentioned that important correlations can be drawn from deep and shallow layers, which can be exploited together for identifying robust features in the data. This principle is used, for instance, in BIB007 , where, in order to ensure robustness and performance, various features extracted from layers in different parts of a CNN are fused to form stronger characteristics which are less affected by noise, spatial variation and perturbations in the acquired images. The identified relationships between CNN layers are exploited in order to account for the loss of spatial information which occurs in deeper layers. The method is claimed to improve accuracy over the state of the art of the time, which is consistent with the idea of ensuring robustness and low failure rates. Deeper features are more consistent and allow for stronger classification, while shallow features compensate for the detrimental effects of filtering and pooling, where relative positioning information may be lost. This allows deep features to be better integrated into the spatial context of the images. On a similar note, in BIB001 features from multiple layers, which individually constitute weak trackers, are combined into a stronger one by means of a hedging algorithm. 
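A generic Hedge-style combination — not the specific adaptive variant of BIB001 — maintains one weight per weak tracker and updates it multiplicatively from that tracker's per-frame loss; the loss values and learning rate below are illustrative:

```python
import math

def hedge_step(weights, losses, eta=1.0):
    """Multiplicative-weights update: trackers with larger loss on the
    current frame lose influence on the next one."""
    new = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(new)
    return [w / total for w in new]

weights = [1 / 3] * 3                 # three weak trackers, equal start
for losses in [[0.9, 0.1, 0.5],       # per-frame losses, e.g. distance of
               [0.8, 0.2, 0.4]]:      # each weak peak from the final peak
    weights = hedge_step(weights, losses)
print(weights)  # the second (lowest-loss) tracker now dominates
```

The ensemble's response map (or target estimate) is then the weight-normalized combination of the weak trackers' outputs, so consistently accurate layers gradually take over.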
The practice of combining multiple weak methods into a more effective one has significant potential and is based on the principle that each individual weak component contains some piece of meaningful information about the tracked object, alongside useless data mostly found in the form of noise. By appropriately combining the contributions of each weak component, a stronger one can be generated. As such, methods that exploit compound classifiers typically show robustness to variations in illumination, affine transforms, camera shaking etc. The downside of such methods comes from the need to compute multiple groups of weak features, which penalizes real-time response, while the fusion algorithm adds overhead of its own. Alternative approaches exist which mitigate this to some extent, such as the use of multiple sensors which directly provide data, as opposed to relying on multiple features computed from the same camera or pair of cameras. An example in this direction is provided in BIB008 , where an image gallery from a multi-camera system is fed into a CNN in an attempt to solve multi-target multi-camera tracking and target re-identification problems. For correct and consistent re-identification, an observation in a specific image is matched against several ones from other cameras using correlations as part of a similarity metric. Such correlations among images from multiple cameras are learned during training and subsequently clustered to provide a unified agreement between them. Eventually, after a training process that exploits a custom triplet loss function, features are obtained which are further used in the identification process. In terms of performance, the method boasts substantial accuracy considering the multi-camera setup. 
The idea of composing robust features from a multi-faceted architecture is further exploited in works such as BIB004 , where a triple-net setup is used to generate features that account for appearance, spatial cues and temporal consistency.
|
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> In this paper, we study a discriminatively trained deep convolutional network for the task of visual tracking. Our tracker utilizes both motion and appearance features that are extracted from a pre-trained dual stream deep convolution network. We show that the features extracted from our dual-stream network can provide rich information about the target and this leads to competitive performance against state of the art tracking methods on a visual tracking benchmark. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> Recent approaches for high accuracy detection and tracking of object categories in video consist of complex multistage solutions that become more cumbersome each year. In this paper we propose a ConvNet architecture that jointly performs detection and tracking, solving the task in a simple and effective way. Our contributions are threefold: (i) we set up a ConvNet architecture for simultaneous detection and tracking, using a multi-task objective for frame-based object detection and across-frame track regression; (ii) we introduce correlation features that represent object co-occurrences across time to aid the ConvNet during tracking; and (iii) we link the frame level detections based on our across-frame tracklets to produce high accuracy detections at the video level. Our ConvNet architecture for spatiotemporal object detection is evaluated on the large-scale ImageNet VID dataset where it achieves state-of-the-art results. Our approach provides better single model performance than the winning method of the last ImageNet challenge while being conceptually much simpler. Finally, we show that by increasing the temporal stride we can dramatically increase the tracker speed. 
<s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> Convolutional Neural Network (CNN) based methods have shown significant performance gains in the problem of visual tracking in recent years. Due to many uncertain changes of objects online, such as abrupt motion, background clutter and large deformation, the visual tracking is still a challenging task. We propose a novel algorithm, namely Deep Location-Specific Tracking, which decomposes the tracking problem into a localization task and a classification task, and trains an individual network for each task. The localization network exploits the information in the current frame and provides a specific location to improve the probability of successful tracking, while the classification network finds the target among many examples generated around the target location in the previous frame, as well as the one estimated from the localization network in the current frame. CNN based trackers often have massive number of trainable parameters, and are prone to over-fitting to some particular object states, leading to less precision or tracking drift. We address this problem by learning a classification network based on 1 × 1 convolution and global average pooling. Extensive experimental results on popular benchmark datasets show that the proposed tracker achieves competitive results without using additional tracking videos for fine-tuning. The code is available at https://github.com/ZjjConan/DLST <s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> We propose Quadruplet Convolutional Neural Networks (Quad-CNN) for multi-object tracking, which learn to associate object detections across frames using quadruplet losses. The proposed networks consider target appearances together with their temporal adjacencies for data association. 
Unlike conventional ranking losses, the quadruplet loss enforces an additional constraint that makes temporally adjacent detections more closely located than the ones with large temporal gaps. We also employ a multi-task loss to jointly learn object association and bounding box regression for better localization. The whole network is trained end-to-end. For tracking, the target association is performed by minimax label propagation using the metric learned from the proposed network. We evaluate performance of our multi-object tracking algorithm on public MOT Challenge datasets, and achieve outstanding results. <s> BIB004 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> Data association problems are an important component of many computer vision applications, with multi-object tracking being one of the most prominent examples. A typical approach to data association involves finding a graph matching or network flow that minimizes a sum of pairwise association costs, which are often either hand-crafted or learned as linear functions of fixed features. In this work, we demonstrate that it is possible to learn features for network-flow-based data association via backpropagation, by expressing the optimum of a smoothed network flow problem as a differentiable function of the pairwise association costs. We apply this approach to multi-object tracking with a network flow formulation. Our experiments demonstrate that we are able to successfully learn all cost functions for the association problem in an end-to-end fashion, which outperform hand-crafted costs in all settings. The integration and combination of various sources of inputs becomes easy and the cost functions can be learned entirely from data, alleviating tedious hand-designing of costs. 
<s> BIB005 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> In this paper, we propose a CNN-based framework for online MOT. This framework utilizes the merits of single object trackers in adapting appearance models and searching for target in the next frame. Simply applying single object tracker for MOT will encounter the problem in computational efficiency and drifted results caused by occlusion. Our framework achieves computational efficiency by sharing features and using ROI-Pooling to obtain individual features for each target. Some online learned target-specific CNN layers are used for adapting the appearance model for each target. In the framework, we introduce spatial-temporal attention mechanism (STAM) to handle the drift caused by occlusion and interaction among targets. The visibility map of the target is learned and used for inferring the spatial attention map. The spatial attention map is then applied to weight the features. Besides, the occlusion status can be estimated from the visibility map, which controls the online updating process via weighted loss on training samples with different occlusion statuses in different frames. It can be considered as temporal attention mechanism. The proposed algorithm achieves 34.3% and 46.0% in MOTA on challenging MOT15 and MOT16 benchmark dataset respectively. <s> BIB006 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> In this paper, we propose the methods to handle temporal errors during multi-object tracking. Temporal error occurs when objects are occluded or noisy detections appear near the object. In those situations, tracking may fail and various errors like drift or ID-switching occur. It is hard to overcome temporal errors only by using motion and shape information. 
So, we propose the historical appearance matching method and joint-input siamese network which was trained by 2-step process. It can prevent tracking failures although objects are temporally occluded or last matching information is unreliable. We also provide useful technique to remove noisy detections effectively according to scene condition. Tracking performance, especially identity consistency, is highly improved by attaching our methods. <s> BIB007 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> Multiple Object Tracking (MOT) plays an important role in solving many fundamental problems in video analysis and computer vision. Most MOT methods employ two steps: Object Detection and Data Association. The first step detects objects of interest in every frame of a video, and the second establishes correspondence between the detected objects in different frames to obtain their tracks. Object detection has made tremendous progress in the last few years due to deep learning. However, data association for tracking still relies on hand crafted constraints such as appearance, motion, spatial proximity, grouping etc. to compute affinities between the objects in different frames. In this paper, we harness the power of deep learning for data association in tracking by jointly modeling object appearances and their affinities between different frames in an end-to-end fashion. The proposed Deep Affinity Network (DAN) learns compact, yet comprehensive features of pre-detected objects at several levels of abstraction, and performs exhaustive pairing permutations of those features in any two frames to infer object affinities. DAN also accounts for multiple objects appearing and disappearing between video frames. We exploit the resulting efficient affinity computations to associate objects in the current frame deep into the previous frames for reliable on-line tracking. 
Our technique is evaluated on popular multiple object tracking challenges MOT15, MOT17 and UA-DETRAC. Comprehensive benchmarking under twelve evaluation metrics demonstrates that our approach is among the best performing techniques on the leader board for these challenges. The open source implementation of our work is available at https://github.com/shijieS/SST.git. <s> BIB008 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> This paper proposes a novel model, named Continuity-Discrimination Convolutional Neural Network (CD-CNN), for visual object tracking. Existing state-of-the-art tracking methods do not deal with temporal relationship in video sequences, which leads to imperfect feature representations. To address this problem, CD-CNN models temporal appearance continuity based on the idea of temporal slowness. Mathematically, we prove that, by introducing temporal appearance continuity into tracking, the upper bound of target appearance representation error can be sufficiently small with high probability. Further, in order to alleviate inaccurate target localization and drifting, we propose a novel notion, object-centroid, to characterize not only objectness but also the relative position of the target within a given patch. Both temporal appearance continuity and object-centroid are jointly learned during offline training and then transferred for online tracking. We evaluate our tracker through extensive experiments on two challenging benchmarks and show its competitive tracking performance compared with state-of-the-art trackers. <s> BIB009 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> Visual attention, derived from cognitive neuroscience, facilitates human perception on the most pertinent subset of the sensory data. 
Recently, significant efforts have been made to exploit attention schemes to advance computer vision systems. For visual tracking, it is often challenging to track target objects undergoing large appearance changes. Attention maps facilitate visual tracking by selectively paying attention to temporal robust features. Existing tracking-by-detection approaches mainly use additional attention modules to generate feature weights as the classifiers are not equipped with such mechanisms. In this paper, we propose a reciprocative learning algorithm to exploit visual attention for training deep classifiers. The proposed algorithm consists of feed-forward and backward operations to generate attention maps, which serve as regularization terms coupled with the original classification loss function for training. The deep classifier learns to attend to the regions of target objects robust to appearance changes. Extensive experiments on large-scale benchmark datasets show that the proposed attentive tracking method performs favorably against the state-of-the-art approaches. <s> BIB010 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> In this paper, we propose a unified Multi-Object Tracking (MOT) framework learning to make full use of long term and short term cues for handling complex cases in MOT scenes. Besides, for better association, we propose switcher-aware classification (SAC), which takes the potential identity-switch causer (switcher) into consideration. Specifically, the proposed framework includes a Single Object Tracking (SOT) sub-net to capture short term cues, a re-identification (ReID) sub-net to extract long term cues and a switcher-aware classifier to make matching decisions using extracted features from the main target and the switcher. 
Short term cues help to find false negatives, while long term cues avoid critical mistakes when occlusion happens, and the SAC learns to combine multiple cues in an effective way and improves robustness. The method is evaluated on the challenging MOT benchmarks and achieves the state-of-the-art results. <s> BIB011 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> In this paper, we propose an online Multi-Object Tracking (MOT) approach which integrates the merits of single object tracking and data association methods in a unified framework to handle noisy detections and frequent interactions between targets. Specifically, for applying single object tracking in MOT, we introduce a cost-sensitive tracking loss based on the state-of-the-art visual tracker, which encourages the model to focus on hard negative distractors during online learning. For data association, we propose Dual Matching Attention Networks (DMAN) with both spatial and temporal attention mechanisms. The spatial attention module generates dual attention maps which enable the network to focus on the matching patterns of the input image pair, while the temporal attention module adaptively allocates different levels of attention to different samples in the tracklet to suppress noisy observations. Experimental results on the MOT benchmark datasets show that the proposed algorithm performs favorably against both online and offline trackers in terms of identity-preserving metrics. <s> BIB012
|
One of the most significant challenges for autonomous driving is accounting for temporal coherence in tracking. Since most if not all automotive scenarios involve video and motion across multiple frames, handling image sequence data and accounting for temporal consistency are key factors in ensuring successful predictions, accuracy and the reliability of the systems involved. Essentially, temporal tracking is a compound problem: on the one hand, it involves tracking objects in single images considering all the problems induced by noise, geometry and the lack of spatial information; on the other hand, it requires making sure that tracking is consistent across multiple frames, that is, assigning correct IDs to the same objects in a continuous video sequence. This presents many challenges, for instance when objects become occluded in some frames and are exposed in others. In other cases, the tracked objects undergo affine transformations across frames, of which rotation and shearing are notoriously difficult to handle. Additionally, the objects may change shape due to noise, aliasing and other acquisition-related artifacts that may be present in the images, since video is rarely if ever acquired at "high enough" resolution and is in many cases stored in a lossy compressed format. As such, the challenge is to identify features that are robust enough to support proper classification and to ensure temporal consistency considering all the pitfalls associated with processing video data. This often involves a "focus and context" approach, where key targets are identified in images not only by the features that they exhibit in a particular image, but also by ensuring that feature extraction accounts for the context in which the tracked object finds itself. In other words, processing a key frame in a video sequence, which provides the focus, should account for the context information built up from previous frames.
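The ID-assignment part of the problem has a simple classical baseline that is useful to keep in mind when reading the deep approaches below: greedily matching detections to existing tracks by bounding-box overlap (IoU) between consecutive frames. The boxes and threshold here are invented for illustration.

```python
# Toy sketch of greedy IoU-based data association between the tracks of
# the previous frame and the detections of the current frame.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def greedy_match(prev_tracks, detections, thresh=0.3):
    """Give each detection the ID of the best-overlapping previous track."""
    assignments, used = {}, set()
    pairs = sorted(((iou(box, det), tid, di)
                    for tid, box in prev_tracks.items()
                    for di, det in enumerate(detections)), reverse=True)
    for score, tid, di in pairs:
        if score < thresh or tid in used or di in assignments:
            continue
        assignments[di] = tid
        used.add(tid)
    return assignments

tracks = {1: (0, 0, 10, 10), 2: (20, 20, 30, 30)}
dets = [(21, 19, 31, 29), (1, 1, 11, 11)]   # slightly shifted boxes
ids = greedy_match(tracks, dets)            # detection index -> track ID
```

Such purely geometric association breaks down exactly in the situations listed above (occlusion, deformation, appearance change), which is what motivates the learned feature-based methods discussed next.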
Where supervised algorithms are concerned, one popular approach is to integrate recurrent components into the classifier, which inherently account for the context provided by a set of elements from a sequence. Recurrent neural networks (RNN) and, more specifically, long short-term memory (LSTM) layers are frequently present in the related literature where temporal data is concerned. When training and exploiting RNN layers to classify sequences, the results from one frame carry over to the computations that take place for subsequent frames. As such, when processing the current frame, the resulting detections also account for what was found in previous frames. For automotive applications, one advantage of neural networks is that they can be trained off-site, while the resulting model can be ported to the embedded device in the vehicle, where predictions and tracking can occur at usable speeds. While training a recurrent network or multiple collaborating networks can take a long time, forward-propagating new data is fast, making these algorithms a realistic choice for real-time tracking. However, LSTMs are neither a "magic" solution nor the de facto method for handling sequence data, since many authors have successfully achieved high-accuracy results using only CNNs. Additionally, many authors have found it helpful to use dual neural networks in conjunction, where one network processes spatial information while the other handles temporal consistency and motion. Other methods employ siamese networks, i.e. identical classifiers trained differently, which identify different features using similar processing. One example of a dual-stream network is found in , where appearance and motion are handled by a combination of CNNs which work together within a unified framework. The motion component uses spotlight filtering over feature maps which result from subtracting features drawn from dual CNNs, and generates a space-invariant feature map using pooling and fusion operations.
The other component handles appearance by filtering and fusing features from a different arrangement of convolutional layers. Data from ROIs in the acquired images is passed on to both components, and motion responses from one component are correlated with appearance responses from the other. Both components produce feature maps which are composed together to form space- and motion-invariant characteristics to be further used for target identification. Another concept which consistently appears in the related literature is "historical matching", where attempts are made to carry over part of the characteristics of tracked objects across multiple frames by building an affinity model from shape, appearance, positional and motion cues. This is achieved in BIB007 using dual CNNs with multistep training, which handle appearance matching using various filtering operations and linearly composing the resulting features across multiple timestamps. The notion of determining and preserving affinity is also exploited in BIB008 , where data consisting of frame pairs several timestamps apart are fed into dual VGG networks. The resulting features are permuted and incorporated into association matrices which are further used to compute object affinities. This approach has the benefit of partially accounting for occlusion using only a limited number of frames, since the affinity of an object which is partially occluded in one frame may be preserved if it appears fully in the pair frame. Ensuring the continuity of high-level features such as appearance models is not a trivial task, and multiple solutions exist. For example, BIB009 uses a CNN modified with a discriminative component intended to correct for temporal errors that may accumulate in the appearance of tracked objects across multiple frames.
Figure 3: A dual CNN detector that extracts and correlates features from frame pairs BIB002
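A frame-pair affinity matrix of the kind described above can be sketched with cosine similarity between per-object feature vectors. The feature vectors and object names below are invented; a deep affinity network would learn such features rather than take them as given.

```python
import math

# Illustrative affinity matrix between objects detected in two frames,
# using cosine similarity of (made-up) feature vectors.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

frame_a = {"car": [1.0, 0.1, 0.0], "bike": [0.0, 1.0, 0.2]}
frame_b = {"det0": [0.1, 0.9, 0.3], "det1": [0.9, 0.2, 0.1]}

affinity = {(i, j): cosine(fa, fb)
            for i, fa in frame_a.items()
            for j, fb in frame_b.items()}

# The highest-affinity pairing recovers the identity across the pair.
best_for_car = max(frame_b, key=lambda j: affinity[("car", j)])
```

If "car" were partially occluded in one of the two frames, its feature vector would degrade but could still dominate the affinity row, which is the occlusion-robustness argument made for the frame-pair approach.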
Discriminative network behavior is also exploited in BIB001 , where selectively trained dual networks are used to generate and correlate appearance with a motion stream. Also, decomposing the tracking problem into localization and motion using multiple component networks is a frequently-encountered solution, further exploited in works such as BIB003 , BIB002 . As such, using two networks that work in tandem is a popular approach and seems to provide accurate results throughout the available literature ( Figure 3 ). Some authors take this concept further by employing several such networks BIB004 , each of which contributes features exhibiting specific and limited correlations, which, when joined together, form a complete appearance model of the tracked objects. Other approaches map network components to flow graphs, the traversal of which enables optimal cost-function and feature learning BIB005 . It is worth noting that the more complicated the architecture of the classifier, the more elaborate the training process and the poorer the performance. A careful balance should therefore be reached between the complexity of the classifier, the completeness of the resulting features and the amount of processing and training data needed to produce high-accuracy results at a computational cost consistent with the needs of automotive applications. In BIB011 , the idea of object matching from frame pairs is further explored using a three-component setup: a siamese network configuration handles single-object tracking and generates short-term cues in the form of tracklet images, while a modified version of GoogLeNet generates re-identification features from multiple tracklets. The third component is based on the idea that there may be a large overlap in the previously-computed features, which are consequently treated as switcher candidates.
As a result, a switcher-aware logic handles the situation where IDs of different objects may be interchanged during frame sequences, mainly as a result of partial occlusion. It is worth mentioning that the tendency in ensuring accurate tracking is to come up with inventive features which express increasingly abstract concepts. It has been demonstrated throughout the related literature that, in general, the more abstract the feature, the more reliable it is long term. Therefore, a lot of effort is directed toward identifying object features that are not necessarily direct indicators of shape, position and/or geometry, but are rather higher-level, more abstract representations of how the object fits within the overall context of the acquired video sequence. One example of such a concept is the previously-mentioned "affinity"; another is "attention", where some authors propose neural-network-based solutions for estimating attention and generating attention maps. BIB006 computes attention features which are spatially and temporally sound using an arrangement of ROI identification and pooling operations. BIB012 uses attention cues to handle the inherent noise from conventional detection methods, as well as to compensate for frequent interactions and overlaps among tracked targets. A two-component system handles noise and occlusion and produces spatial attention maps by matching similar regions from pair frames, while temporal coherence is achieved by weighting observations across the trajectory differently, thereby assigning them different levels of attention; this generates filtering criteria used to account for similar observations while eliminating dissimilar ones.
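The temporal-attention weighting just described can be sketched as a softmax over similarity scores between past observations and the current target appearance. The similarity scores, features and temperature below are invented; in the cited work these quantities are produced by learned networks.

```python
import math

# Toy sketch of temporal attention over a tracklet: observations that
# resemble the current appearance get high weight, suppressing noisy or
# occluded samples. All numbers are illustrative.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Similarity of each past observation to the current target appearance;
# the third frame was partially occluded, hence its low score.
similarities = [0.9, 0.8, 0.1, 0.85]
attention = softmax([5 * s for s in similarities])  # temperature sharpens

# Attention-weighted appearance template (1-D toy features);
# 30.0 is the corrupted observation from the occluded frame.
observations = [10.0, 10.5, 30.0, 10.2]
template = sum(a * o for a, o in zip(attention, observations))
```

The occluded sample receives negligible attention, so the resulting template stays close to the consistent observations instead of drifting toward the outlier.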
Another noteworthy contribution is BIB010 , where attention maps are generated using reciprocative learning: the input frame is sent back and forth through several convolutional layers, such that the forward propagation phase generates classification scores, while the back-propagation produces attention maps from the gradients of the previously-obtained scores. The computed maps are further used as regularization terms within a classifier. The advantage of this approach is its simplicity compared to other similar ones. The authors claim that their method for generating attention features ensures long-term robustness, which is advantageous considering that other methods that use frame pairs and no recurrent components do not seem to work as well for very long sequences.
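The gradient-derived attention idea can be illustrated on a toy scorer: the sensitivity of the classification score to each input location serves as the attention map. Here a hand-written linear "classifier" and finite-difference gradients stand in for a trained CNN and back-propagation; everything is an assumption for illustration.

```python
# Toy sketch of gradient-based attention: |d score / d input_i| per
# location, approximated by finite differences on a linear scorer.

def score(inputs, weights):
    return sum(x * w for x, w in zip(inputs, weights))

def attention_map(inputs, weights, eps=1e-4):
    """Finite-difference sensitivity of the score to each input."""
    base = score(inputs, weights)
    grads = []
    for i in range(len(inputs)):
        bumped = list(inputs)
        bumped[i] += eps
        grads.append(abs(score(bumped, weights) - base) / eps)
    return grads

pixels = [0.2, 0.9, 0.4, 0.1]
weights = [0.1, 2.0, 0.5, 0.05]   # the scorer "attends" to index 1
attn = attention_map(pixels, weights)
focus = attn.index(max(attn))     # location with the strongest attention
```

For a linear scorer the sensitivities simply recover the weight magnitudes; in a deep classifier the same back-propagated sensitivities vary per input and can serve as regularization terms, as described above.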
|
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> LSTM-Based Methods <s> This paper presents to the best of our knowledge the first end-to-end object tracking approach which directly maps from raw sensor input to object tracks in sensor space without requiring any feature engineering or system identification in the form of plant or sensor models. Specifically, our system accepts a stream of raw sensor data at one end and, in real-time, produces an estimate of the entire environment state at the output including even occluded objects. We achieve this by framing the problem as a deep learning task and exploit sequence models in the form of recurrent neural networks to learn a mapping from sensor measurements to object tracks. In particular, we propose a learning method based on a form of input dropout which allows learning in an unsupervised manner, only based on raw, occluded sensor data without access to ground-truth annotations. We demonstrate our approach using a synthetic dataset designed to mimic the task of tracking objects in 2D laser data -- as commonly encountered in robotics applications -- and show that it learns to track many dynamic objects despite occlusions and the presence of sensor noise. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> LSTM-Based Methods <s> The majority of existing solutions to the Multi-Target Tracking (MTT) problem do not combine cues over a long period of time in a coherent fashion. In this paper, we present an online method that encodes long-term temporal dependencies across multiple cues. One key challenge of tracking methods is to accurately track occluded targets or those which share similar appearance properties with surrounding objects. To address this challenge, we present a structure of Recurrent Neural Networks (RNN) that jointly reasons on multiple cues over a temporal window. 
Our method allows to correct data association errors and recover observations from occluded states. We demonstrate the robustness of our data-driven approach by tracking multiple targets using their appearance, motion, and even interactions. Our method outperforms previous works on multiple publicly available datasets including the challenging MOT benchmark. <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> LSTM-Based Methods <s> In this paper, we propose an efficient vehicle trajectory prediction framework based on recurrent neural network. Basically, the characteristic of the vehicle's trajectory is different from that of regular moving objects since it is affected by various latent factors including road structure, traffic rules, and driver's intention. Previous state of the art approaches use sophisticated vehicle behavior model describing these factors and derive the complex trajectory prediction algorithm, which requires a system designer to conduct intensive model optimization for practical use. Our approach is data-driven and simple to use in that it learns complex behavior of the vehicles from the massive amount of trajectory data through deep neural network model. The proposed trajectory prediction method employs the recurrent neural network called long short-term memory (LSTM) to analyze the temporal behavior and predict the future coordinate of the surrounding vehicles. The proposed scheme feeds the sequence of vehicles' coordinates obtained from sensor measurements to the LSTM and produces the probabilistic information on the future location of the vehicles over occupancy grid map. The experiments conducted using the data collected from highway driving show that the proposed method can produce reasonably good estimate of future trajectory. 
<s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> LSTM-Based Methods <s> To safely and efficiently navigate through complex traffic scenarios, autonomous vehicles need to have the ability to predict the future motion of surrounding vehicles. Multiple interacting agents, the multi-modal nature of driver behavior, and the inherent uncertainty involved in the task make motion prediction of surrounding vehicles a challenging problem. In this paper, we present an LSTM model for interaction aware motion prediction of surrounding vehicles on freeways. Our model assigns confidence values to maneuvers being performed by vehicles and outputs a multi-modal distribution over future motion based on them. We compare our approach with the prior art for vehicle motion prediction on the publicly available NGSIM US-101 and I-80 datasets. Our results show an improvement in terms of RMS values of prediction error. We also present an ablative analysis of the components of our proposed model and analyze the predictions made by the model in complex traffic scenarios. <s> BIB004 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> LSTM-Based Methods <s> Multi-Object Tracking (MOT) is a challenging task in the complex scene such as surveillance and autonomous driving. In this paper, we propose a novel tracklet processing method to cleave and re-connect tracklets on crowd or long-term occlusion by Siamese Bi-Gated Recurrent Unit (GRU). The tracklet generation utilizes object features extracted by CNN and RNN to create the high-confidence tracklet candidates in sparse scenario. Due to mis-tracking in the generation process, the tracklets from different objects are split into several sub-tracklets by a bidirectional GRU. After that, a Siamese GRU based tracklet re-connection method is applied to link the sub-tracklets which belong to the same object to form a whole trajectory. 
In addition, we extract the tracklet images from existing MOT datasets and propose a novel dataset to train our networks. The proposed dataset contains more than 95160 pedestrian images. It has 793 different persons in it. On average, there are 120 images for each person with positions and sizes. Experimental results demonstrate the advantages of our model over the state-of-the-art methods on MOT16. <s> BIB005 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> LSTM-Based Methods <s> In recent deep online and near-online multi-object tracking approaches, a difficulty has been to incorporate long-term appearance models to efficiently score object tracks under severe occlusion and multiple missing detections. In this paper, we propose a novel recurrent network model, the Bilinear LSTM, in order to improve the learning of long-term appearance models via a recurrent network. Based on intuitions drawn from recursive least squares, Bilinear LSTM stores building blocks of a linear predictor in its memory, which is then coupled with the input in a multiplicative manner, instead of the additive coupling in conventional LSTM approaches. Such coupling resembles an online learned classifier/regressor at each time step, which we have found to improve performances in using LSTM for appearance modeling. We also propose novel data augmentation approaches to efficiently train recurrent models that score object tracks on both appearance and motion. We train an LSTM that can score object tracks based on both appearance and motion and utilize it in a multiple hypothesis tracking framework. In experiments, we show that with our novel LSTM model, we achieved state-of-the-art performance on near-online multiple object tracking on the MOT 2016 and MOT 2017 benchmarks. 
<s> BIB006 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> LSTM-Based Methods <s> This paper presents a novel approach for tracking static and dynamic objects for an autonomous vehicle operating in complex urban environments. Whereas traditional approaches for tracking often feature numerous hand-engineered stages, this method is learned end-to-end and can directly predict a fully unoccluded occupancy grid from raw laser input. We employ a recurrent neural network to capture the state and evolution of the environment, and train the model in an entirely unsupervised manner. In doing so, our use case compares to model-free, multi-object tracking although we do not explicitly perform the underlying data-association process. Further, we demonstrate that the underlying representation learned for the tracking task can be leveraged via inductive transfer to train an object detector in a data efficient manner. We motivate a number of architectural features and show the positive contribution of dilated convolutions, dynamic and static memory units to the task of tracking and classifying complex... <s> BIB007
|
Generally, methods based on non-recurrent, CNN-only approaches are best suited to handle short scenes where quick reactions are required in a brief situation that can be captured in a limited number of frames. Various literature studies show that LSTM-based methods have more potential to ensure the proper handling of long-term dependencies while avoiding mathematical pitfalls such as network parameters shrinking to extremely small values because of repeated multiplications of small gradients (the "vanishing gradient" problem), which in practice manifests as a mis-trained network resulting in drift effects and false positives. Handling long-term dependencies means having to deal with occlusions to a greater extent than in shorter-term scenarios. Most approaches combine various classifiers which handle spatial and shape-based classification with LSTM components which account for temporal coherence. An early example of an RNN implementation uses an LSTM-based classifier to track objects in time, across multiple frames (Figure 4). The authors demonstrate that an LSTM-based approach is better suited to removing and reinserting candidate observations to account for objects that leave/reenter the visible area of the scene. This provides a solution to the track initiation and termination problem based on data associations found in features obtained from the LSTM layers. This concept is exploited further by BIB002, where various cues are determined to assess long-term dependencies using a dual LSTM network. One LSTM component tracks motion, while the other handles interactions, and the two are combined to compute similarity scores between frames. The results show that applying recurrent components to lengthy sequences produces more reliable results than other methods which are based on frame pairs. 
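To make the gating idea concrete, the single-step update of an LSTM cell can be sketched in a few lines. The following is a didactic, scalar-state sketch with hand-picked illustrative weights (not the architecture of any of the cited trackers); the strongly positive forget-gate bias is what lets the cell state carry information across many steps:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell(x, h_prev, c_prev, w):
    """One LSTM step for scalar input/state.

    w holds per-gate (input weight, recurrent weight, bias) triples.
    The forget gate f decides how much of the old cell state survives,
    the input gate i scales the new candidate g, and the output gate o
    exposes a squashed view of the cell state as the hidden state.
    """
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])
    c = f * c_prev + i * g          # gated mixture of old state and new candidate
    h = o * math.tanh(c)            # exposed hidden state
    return h, c

# Illustrative weights: a large forget-gate bias keeps f close to 1,
# so the cell state decays only slowly once the input disappears.
weights = {"f": (0.0, 0.0, 4.0),   # f = sigmoid(4), i.e. "remember almost everything"
           "i": (0.0, 0.0, 4.0),
           "g": (1.0, 0.0, 0.0),   # candidate depends only on the current input
           "o": (0.0, 0.0, 4.0)}

h, c = 0.0, 0.0
for x in [1.0, 0.0, 0.0, 0.0, 0.0]:   # a single early observation, then silence
    h, c = lstm_cell(x, h, c, weights)

# The cell state still carries a clear trace of the first input
# after four empty steps.
print(round(c, 3))
```

With a forget gate close to 1, the trace of the first observation fades only slowly; this multiplicative gating, rather than the undiscriminating state pass-through of a plain RNN, is what makes long-term dependencies tractable.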
Some implementations using LSTM focus on tracking-while-driving problems, which pose additional challenges compared to most established benchmarks which use static cameras. As an alternative to most related approaches which attempt to create models of vehicle behavior, BIB003 circumvent the need for vehicle modeling by directly inputting sensor measurements into an LSTM network to predict future vehicle positions and to analyze temporal behavior. A more elaborate attempt is BIB004, where instead of raw sensor data, the authors establish several maneuver classes and feed maneuver sequences to LSTM layers in order to generate probabilities for the occurrence of future maneuver instances. Eventually, multiple such maneuvers can be used to construct the trajectory and/or anticipate the intentions of the vehicles. Furthermore, increasing the length of the sequence increases accuracy and stability over time, up to a certain limit where the network saturates and no longer improves. A solution to this problem would be to split the features into multiple sub-features, followed by reconnecting them to form more coherent long-term trajectories. This is achieved in BIB005, where a combined CNN- and RNN-based feature extractor generates tracklets over lengthy sequences. The tracklets are split on frames which contain occlusions, while a recombination mechanism based on gated recurrent units (GRUs) recombines the tracklet pieces according to their similarities, followed by the reconstruction of the complete trajectory using polynomial curve fitting. Some authors make further modifications to LSTM layers to produce classifiers that generate abstract high-level features such as those found in appearance models. A good example in this sense is BIB006, where LSTM layers are modified to perform multiplication operations and use customized gating schemes between the recurrent hidden state and the derived features. 
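The split-and-reconnect strategy can be illustrated with a toy, non-learned sketch. Here the tracklet tuples, the cosine-similarity criterion and the threshold are our own simplifications, standing in for the learned Siamese-GRU scorer of the cited work: sub-tracklets are greedily chained whenever their appearance descriptors are similar enough and their time spans do not overlap.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def link_tracklets(tracklets, threshold=0.9):
    """Greedily chain sub-tracklets into trajectories.

    Each tracklet is (start_frame, end_frame, feature_vector).
    A real system would score pairs with a learned similarity;
    here cosine similarity on fixed descriptors stands in.
    """
    tracklets = sorted(tracklets, key=lambda t: t[0])
    chains, used = [], [False] * len(tracklets)
    for i, t in enumerate(tracklets):
        if used[i]:
            continue
        chain, used[i] = [t], True
        while True:
            last = chain[-1]
            best, best_sim = None, threshold
            for j, cand in enumerate(tracklets):
                if used[j] or cand[0] <= last[1]:
                    continue  # already linked, or overlapping in time
                sim = cosine(last[2], cand[2])
                if sim > best_sim:
                    best, best_sim = j, sim
            if best is None:
                break
            used[best] = True
            chain.append(tracklets[best])
        chains.append(chain)
    return chains

# Two identities whose tracks were each split by an occlusion around frame 10.
tracklets = [(0, 9,  [1.0, 0.1]), (12, 20, [0.98, 0.12]),   # person A
             (0, 8,  [0.1, 1.0]), (11, 19, [0.11, 0.97])]   # person B
chains = link_tracklets(tracklets)
print(len(chains))  # two recovered trajectories
```

A full pipeline in the spirit of the cited work would then fit a polynomial to the positions of each recovered chain to obtain a smooth complete trajectory.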
Figure 4: An LSTM-based architecture used for temporal prediction.

The newly-obtained LSTM layers are better at producing appearance-related features than conventional LSTMs, which excel at motion prediction. Where trajectory estimation is concerned, LSTM-based methods exploit the gating that takes place in the recurrent layers, as opposed to regular RNNs, which pass candidate features into the next recurrent iteration without discriminating between them. The filters inherently present in gated LSTMs have the potential to eliminate unwanted feature candidates which, in actual use cases, may represent unwanted trajectory paths, while maintaining candidates which will eventually lead to correctly-estimated motion cues. Furthermore, LSTMs demonstrate an inherent capability to predict trajectories that are interrupted by occlusion events or by reduced acquisition capabilities. This idea is exploited in order to find solutions to the problem of estimating the layout of a full environment from limited sensor data, a concept referred to in the related literature as "seeing beyond seeing" BIB001 . Given a set of sensors with limited capability, the idea is to perform end-to-end tracking using raw sensor data without the need to explicitly identify high-level features or to have a preexisting detailed model of the environment. In this sense, recurrent architectures have the potential to predict and reconstruct occluded parts of a particular scene from incomplete or partial raw sensor output. The network is trained with partial data and it is updated through a mapping mechanism that makes associations with an unoccluded scene. Subsequently, the recurrent layers make their own internal associations and become capable of filling in the missing gaps that the sensors have been unable to acquire. 
Specifically, given a hidden state of the world which is not directly captured by any sensor, an RNN is trained using sequences of partial observations in an attempt to update its belief concerning the hidden parts of the world. The resulting information is used to "unocclude" the scene, which was initially only partially perceived through limited sensor data. Upon training, the network is capable of defining its own interpretation of the hidden state of the scene. The previously-mentioned result is elaborated upon by a group which includes the same authors BIB007 . A similar approach previously applied in basic robot guidance is extended for use in assisted driving. In this case, more complex information can be inferred from raw sensor input, in the form of occupancy maps, which together with a deep network-based architecture allow for predicting the probabilities of obstacle presence even in occluded portions within the field of view.
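Although the cited systems learn their occupancy beliefs end-to-end, the underlying idea of recursively updating a belief over partially observed cells can be illustrated with the classical log-odds occupancy-grid update, here extended with a simple decay toward the prior for unobserved cells. This is a hand-built analogue for illustration, not the learned recurrent model:

```python
import math

def logodds(p):
    return math.log(p / (1.0 - p))

def prob(l):
    return 1.0 - 1.0 / (1.0 + math.exp(l))

L_OCC, L_FREE, DECAY = logodds(0.7), logodds(0.3), 0.9

def update(belief, observation):
    """One recursive belief update over a 1-D occupancy grid.

    belief: list of log-odds per cell (0.0 == prior, p = 0.5).
    observation: list of 1 (hit), 0 (free) or None (occluded / not sensed).
    Observed cells accumulate evidence; unseen cells decay toward the
    prior, so the grid keeps a fading memory of earlier perceptions.
    """
    out = []
    for l, z in zip(belief, observation):
        if z is None:
            out.append(l * DECAY)            # no measurement: forget slowly
        else:
            out.append(l + (L_OCC if z else L_FREE))
    return out

belief = [0.0] * 5
belief = update(belief, [1, 1, 0, None, None])           # cells 3-4 occluded
belief = update(belief, [None, None, None, None, None])  # full occlusion
print([round(prob(l), 2) for l in belief])  # → [0.68, 0.68, 0.32, 0.5, 0.5]
```

After a fully occluded frame, the grid still "remembers" which cells were occupied or free, which is the behavior the recurrent networks above learn directly from data rather than from a hand-specified update rule.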
|
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Discussion <s> We propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN). Our algorithm pretrains a CNN using a large set of videos with tracking groundtruths to obtain a generic target representation. Our network is composed of shared layers and multiple branches of domain-specific layers, where domains correspond to individual training sequences and each branch is responsible for binary classification to identify target in each domain. We train each domain in the network iteratively to obtain generic target representations in the shared layers. When tracking a target in a new sequence, we construct a new network by combining the shared layers in the pretrained CNN with a new binary classification layer, which is updated online. Online tracking is performed by evaluating the candidate windows randomly sampled around the previous target state. The proposed algorithm illustrates outstanding performance in existing tracking benchmarks. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Discussion <s> Discriminative Correlation Filters (DCF) have demonstrated excellent performance for visual object tracking. The key to their success is the ability to efficiently exploit available negative data b ... <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Discussion <s> In recent years, Discriminative Correlation Filter (DCF) based methods have significantly advanced the state-of-the-art in tracking. However, in the pursuit of ever increasing tracking performance, their characteristic speed and real-time capability have gradually faded. Further, the increasingly complex models, with massive number of trainable parameters, have introduced the risk of severe over-fitting. 
In this work, we tackle the key causes behind the problems of computational complexity and over-fitting, with the aim of simultaneously improving both speed and performance. We revisit the core DCF formulation and introduce: (i) a factorized convolution operator, which drastically reduces the number of parameters in the model, (ii) a compact generative model of the training sample distribution, that significantly reduces memory and time complexity, while providing better diversity of samples, (iii) a conservative model update strategy with improved robustness and reduced complexity. We perform comprehensive experiments on four benchmarks: VOT2016, UAV123, OTB-2015, and TempleColor. When using expensive deep features, our tracker provides a 20-fold speedup and achieves a 13.0% relative gain in Expected Average Overlap compared to the top ranked method [12] in the VOT2016 challenge. Moreover, our fast variant, using hand-crafted features, operates at 60 Hz on a single CPU, while obtaining 65.0% AUC on OTB-2015. <s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Discussion <s> Convolutional neural network (CNN) has drawn increasing interest in visual tracking owing to its powerfulness in feature extraction. Most existing CNN-based trackers treat tracking as a classification problem. However, these trackers are sensitive to similar distractors because their CNN models mainly focus on inter-class classification. To address this problem, we use self-structure information of object to distinguish it from distractors. Specifically, we utilize recurrent neural network (RNN) to model object structure, and incorporate it into CNN to improve its robustness to similar distractors. Considering that convolutional layers in different levels characterize the object from different perspectives, we use multiple RNNs to model object structure in different levels respectively. 
Extensive experiments on three benchmarks, OTB100, TC-128 and VOT2015, show that the proposed algorithm outperforms other methods. Code is released at www.dabi.temple.edu/hbling/code/SANet/SANet.html. <s> BIB004 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Discussion <s> Recent algorithmic improvements and hardware breakthroughs resulted in a number of success stories in the field of AI impacting our daily lives. However, despite its ubiquity AI is only just starting to make advances in what may arguably have the largest impact thus far, the nascent field of autonomous driving. In this work we discuss this important topic and address one of crucial aspects of the emerging area, the problem of predicting future state of autonomous vehicle's surrounding necessary for safe and efficient operations. We introduce a deep learning-based approach that takes into account current state of traffic actors and produces rasterized representations of each actor's vicinity. The raster images are then used by deep convolutional models to infer future movement of actors while accounting for inherent uncertainty of the prediction task. Extensive experiments on real-world data strongly suggest benefits of the proposed approach. Moreover, following successful tests the system was deployed to a fleet of autonomous vehicles. <s> BIB005 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Discussion <s> In this review, we provide an overview of emerging trends and challenges in the field of intelligent and autonomous, or self-driving, vehicles. Recent advances in the field of perception, planning, and decision-making for autonomous vehicles have led to great improvements in functional capabilities, with several prototypes already driving on our roads and streets. Yet challenges remain regarding guaranteed performance and safety under all driving circumstances. 
For instance, planning methods that provide safe and system-compliant performance in complex, cluttered environments while modeling the uncertain interaction with other traffic participants are required. Furthermore, new paradigms, such as interactive planning and end-to-end learning, open up questions regarding safety and reliability that need to be addressed. In this survey, we emphasize recent approaches for integrated perception and planning and for behavior-aware planning, many of which rely on machine learning. This raises the question of ver... <s> BIB006 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Discussion <s> Predicting trajectories of pedestrians is quintessential for autonomous robots which share the same environment with humans. In order to effectively and safely interact with humans, trajectory prediction needs to be both precise and computationally efficient. In this work, we propose a convolutional neural network (CNN) based human trajectory prediction approach. Unlike more recent LSTM-based models which attend sequentially to each frame, our model supports increased parallelism and effective temporal representation. The proposed compact CNN model is faster than the current approaches yet still yields competitive results. <s> BIB007 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Discussion <s> Autonomous driving is a challenging multiagent domain which requires optimizing complex, mixed cooperative-competitive interactions. Learning to predict contingent distributions over other vehicles' trajectories simplifies the problem, allowing approximate solutions by trajectory optimization with dynamic constraints. We take a model-based approach to prediction, in order to make use of structured prior knowledge of vehicle kinematics, and the assumption that other drivers plan trajectories to minimize an unknown cost function. 
We introduce a novel inverse optimal control (IOC) algorithm to learn other vehicles' cost functions in an energy-based generative model. Langevin Sampling, a Monte Carlo based sampling algorithm, is used to directly sample the control sequence. Our algorithm provides greater flexibility than standard IOC methods, and can learn higher-level, non-Markovian cost functions defined over entire trajectories. We extend weighted feature-based cost functions with neural networks to obtain NN-augmented cost functions, which combine the advantages of both model-based and model-free learning. Results show that model-based IOC can achieve state-of-the-art vehicle trajectory prediction accuracy, and naturally take scene information into account. <s> BIB008
|
Most of the results from the available literature focus on generating abstract, high-level features of the observations found in the processed images since, generally, the more abstract the feature, the more robust it should be to transformations, noise, drift and other undesired artifacts and effects. Most authors rely on an arrangement of CNNs where each component has a distinct role in the system, such as learning appearance models, geometric and spatial patterns, or learning temporal dependencies. It is worth noting that a strictly CNN-based method needs substantial tweaking and careful parameter adjustment before it can accomplish the complex task of consistent detection in space and across multiple frames. A system made up of multiple networks, each with its own purpose, is also difficult to properly train, requiring large amounts of data and carrying a greater risk of overfitting. However, complex, customized CNN solutions still seem to provide the best accuracies within the current state-of-the-art. Most such results also use frame pairs, or only a few elements from the video sequence, thereby making them unreliable for long-term tracking. LSTM-based architectures seem to show more promising results for ensuring long-term temporal coherence, since this is what they were designed for, while also being simpler to implement and train. For the purposes of autonomous driving, an LSTM-based method shows promise, considering that training should happen offline and that a heavily-optimized solution is needed to achieve a real-time response. Designing such a system also requires a fair amount of trial and error, since currently there is no well-established manner to predict which network architecture is suited to a particular purpose. There are also very few solutions based on reinforcement learning for object tracking, especially considering that reinforcement learning has gained substantial momentum in automotive decision-making problems. 
Other less popular but promising solutions, such as GAN-based predictors, may be worthy of further study and experimentation. One particularly promising direction for automotive tracking is represented by solutions that make use of limited sensor data and are able to efficiently predict the surrounding environment without requiring a full representation or reconstruction of the scene. These approaches circumvent the need for lengthy video sequences, heavy image processing and the computation of complicated object features, while being especially designed to handle occlusion and objects outside of the immediate field of view. As such, where automotive tracking is concerned, the available results from the state-of-the-art seem to suggest that an effective solution would make use of partial data while being able to handle temporal correlations across lengthy sequences using an LSTM component. As of yet, solutions based on deep neural networks show the most promise, since they offer the most robust features while being natively designed to solve focus-and-context problems in video sequences. In this sense, the results which seem most promising for the complex tracking problems described in this section are BIB001 , BIB002 , BIB003 and BIB004 . Rule-based approaches to vehicle interaction are rather inflexible; they require a great effort to engineer and validate, and they usually generalize poorly to new scenarios BIB008 . Learning-based approaches are promising because of the complexity of driving interactions and the need for generalization. However, learning-based systems require a large amount of data to cover the space of interactive behaviors. Because they capture the generative structure of vehicle trajectories, model-based methods can potentially learn more, from less data, than model-free methods. However, good cost functions are challenging to learn, and simple, hand-crafted representations may not generalize well across tasks and contexts. 
In general, model-based methods can be less flexible and may underperform model-free methods in the limit of infinite data. Model-free methods take a data-driven approach, aiming to learn predictive distributions over trajectories directly from data. These approaches are more flexible and require less knowledge engineering in terms of the types of vehicles, maneuvers and scenarios, but the amount of data they require may be prohibitive BIB008 . Manually engineered models often impose unrealistic assumptions not supported by the data, e.g. that traffic always follows lanes, which motivated the use of learned models as an alternative. A large class of learned models are maneuver-based models, e.g. using hidden Markov models, which are object-centric approaches that predict the discrete actions of each object independently. Often, the independence assumption does not hold, which is mitigated by the use of Bayesian networks, which are computationally more expensive and not feasible for real-time tasks BIB005 . Gaussian Process regression can also be used to address the motion prediction problem. It has desirable properties, such as the ability to quantify uncertainty, but it is limited when modeling complex actor-environment interactions BIB005 . Although it is possible to do multi-step prediction with a Kalman filter, it cannot be extended far into the future with reasonable accuracy. A multi-step prediction done solely by a Kalman filter was found to be accurate up until 10-15 timesteps, after which the predictions diverged and the full 40-timestep prediction ended up being worse than constant velocity inference . This emphasizes the advantages of data-driven approaches, as it is possible to observe an almost infinite number of variables which may all affect the driver, whereas the Kalman filter relies solely on the physical movement of the vehicle. 
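The multi-step behavior described above is easy to reproduce with a minimal one-dimensional constant-velocity Kalman filter (a generic textbook sketch, not the cited experiment): once measurements stop, prediction reduces to repeatedly applying the motion model, so the position extrapolates linearly while its variance, i.e. the uncertainty, grows with every step.

```python
def kf_1d_cv(zs, dt=1.0, q=0.01, r=0.5, n_ahead=5):
    """Constant-velocity Kalman filter on scalar position measurements zs,
    followed by n_ahead open-loop prediction steps (no measurements).

    State x = [position, velocity]; transition F = [[1, dt], [0, 1]].
    Returns the predicted positions and the growing position variances.
    """
    x = [zs[0], 0.0]                      # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance

    def predict(x, P):
        x = [x[0] + dt * x[1], x[1]]
        # P <- F P F^T + Q  (Q = q * I for simplicity)
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + q
        return x, [[p00, p01], [p10, p11]]

    for z in zs[1:]:                      # filtering phase
        x, P = predict(x, P)
        s = P[0][0] + r                   # innovation variance (H = [1, 0])
        k0, k1 = P[0][0] / s, P[1][0] / s # Kalman gain
        y = z - x[0]                      # innovation
        x = [x[0] + k0 * y, x[1] + k1 * y]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]

    preds, variances = [], []
    for _ in range(n_ahead):              # open-loop multi-step prediction
        x, P = predict(x, P)
        preds.append(x[0])
        variances.append(P[0][0])
    return preds, variances

preds, variances = kf_1d_cv([0.0, 1.0, 2.1, 2.9, 4.0])
# Position variance increases monotonically with the prediction horizon,
# and the open-loop predictions are a pure linear extrapolation.
print(all(b > a for a, b in zip(variances, variances[1:])))
```

The sketch makes the limitation explicit: without new measurements, the filter can only extrapolate the physical motion model, so any behavioral change by the driver goes unmodeled, which is exactly where data-driven predictors have the advantage.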
The data may also be part of the problem, because the network learns what is present in the data and hopefully generalizes well, but there may always be situations where humans do not behave according to previous observations. This is one drawback of using neural networks. However, it seems that the advantages of using a data-driven approach outweigh the disadvantages. Because of the time constraints of real-time systems, some authors use simpler feed-forward CNN architectures for prediction BIB005 . In general, deep CNNs, as robust, flexible, high-capacity function approximators, are able to model the complex relationship between sensory input and reward structure very well. Additionally, due to the convolutional operators, they are able to capture spatial correlations in the data BIB006 . Some authors BIB007 state that CNNs are superior to LSTMs for temporal modeling, since trajectories are continuous in nature, do not have a complicated "state", and have high spatial and temporal correlations which can be exploited by computationally efficient convolution operations. Another approach is to learn policies from expert demonstrations by estimating the expert's cost function with inverse reinforcement learning and then extracting a policy from that cost function BIB006 . However, this is often inefficient for real-time applications BIB005 . Finally, it should be mentioned that in this section we have addressed the trajectory prediction problem. A related but distinct problem is trajectory planning, i.e. finding an optimal path from the current location to a given goal location. Its aim is to produce smooth trajectories with small changes in curvature, so as to minimize both the lateral and the longitudinal acceleration of the ego vehicle. For this purpose, several methods are reported in the literature, e.g. using cubic spline interpolation, trigonometric spline interpolation, Bézier curves, or clothoids, i.e. curves with a complex mathematical definition which have a linear relation between the curvature and the arc length and allow smooth transitions from a straight line to a circle arc or vice versa.
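To illustrate one of the interpolation families mentioned above, a cubic Bézier segment already produces a smooth path between two waypoints, with the inner control points fixing the start and end headings. The waypoints and control-point positions below are arbitrary illustrative values for a lane-change maneuver, not taken from any cited work:

```python
def cubic_bezier(p0, p1, p2, p3, n=11):
    """Sample a cubic Bezier curve B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1
    + 3(1-t) t^2 p2 + t^3 p3 at n evenly spaced parameter values.

    p0/p3 are the endpoints; p1/p2 shape the curve and determine the
    start/end tangents, which is how heading constraints are imposed.
    """
    pts = []
    for k in range(n):
        t = k / (n - 1)
        b0 = (1 - t) ** 3
        b1 = 3 * (1 - t) ** 2 * t
        b2 = 3 * (1 - t) * t ** 2
        b3 = t ** 3
        pts.append((b0 * p0[0] + b1 * p1[0] + b2 * p2[0] + b3 * p3[0],
                    b0 * p0[1] + b1 * p1[1] + b2 * p2[1] + b3 * p3[1]))
    return pts

# Lane change: start heading along +x, end heading along +x, 3.5 m lateral offset.
start, goal = (0.0, 0.0), (30.0, 3.5)
path = cubic_bezier(start, (10.0, 0.0), (20.0, 3.5), goal)
print(path[0], path[-1])   # endpoints are interpolated exactly
```

Clothoids improve on this by making curvature vary linearly with arc length, but they have no closed-form Cartesian expression and must be evaluated numerically, which is why spline and Bézier formulations remain popular for real-time planners.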
|
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> We introduce a computationally efficient algorithm for multi-object tracking by detection that addresses four main challenges: appearance similarity among targets, missing data due to targets being out of the field of view or occluded behind other objects, crossing trajectories, and camera motion. The proposed method uses motion dynamics as a cue to distinguish targets with similar appearance, minimize target mis-identification and recover missing data. Computational efficiency is achieved by using a Generalized Linear Assignment (GLA) coupled with efficient procedures to recover missing data and estimate the complexity of the underlying dynamics. The proposed approach works with tracklets of arbitrary length and does not assume a dynamical model a priori, yet it captures the overall motion dynamics of the targets. Experiments using challenging videos show that this framework can handle complex target motions, non-stationary cameras and long occlusions, on scenarios where appearance cues are not available or poor. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Many recent advances in multiple target tracking aim at finding a (nearly) optimal set of trajectories within a temporal window. To handle the large space of possible trajectory hypotheses, it is typically reduced to a finite set by some form of data-driven or regular discretization. In this work, we propose an alternative formulation of multitarget tracking as minimization of a continuous energy. Contrary to recent approaches, we focus on designing an energy that corresponds to a more complete representation of the problem, rather than one that is amenable to global optimization. 
Besides the image evidence, the energy function takes into account physical constraints, such as target dynamics, mutual exclusion, and track persistence. In addition, partial image evidence is handled with explicit occlusion reasoning, and different targets are disambiguated with an appearance model. To nevertheless find strong local minima of the proposed nonconvex energy, we construct a suitable optimization scheme that alternates between continuous conjugate gradient descent and discrete transdimensional jump moves. These moves, which are executed such that they always reduce the energy, allow the search to escape weak minima and explore a much larger portion of the search space of varying dimensionality. We demonstrate the validity of our approach with an extensive quantitative evaluation on several public data sets. <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Online multi-object tracking aims at producing complete tracks of multiple objects using the information accumulated up to the present moment. It still remains a difficult problem in complex scenes, because of frequent occlusion by clutter or other objects, similar appearances of different objects, and other factors. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first propose the tracklet confidence using the detectability and continuity of a tracklet, and formulate a multi-object tracking problem based on the tracklet confidence. The multi-object tracking problem is then solved by associating tracklets in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive associations. 
Here, for reliable association between tracklets and detections, we also propose a novel online learning method using an incremental linear discriminant analysis for discriminating the appearances of objects. By exploiting the proposed learning method, tracklet association can be successfully achieved even under severe occlusion. Experiments with challenging public datasets show distinct performance improvement over other batch and online tracking methods. <s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> This paper presents a monocular and purely vision based pedestrian trajectory tracking and prediction framework with integrated map-based hazard inference. In Advanced Driver Assistance systems research, a lot of effort has been put into pedestrian detection over the last decade, and several pedestrian detection systems are indeed showing impressive results. Considerably less effort has been put into processing the detections further. We present a tracking system for pedestrians, which based on detection bounding boxes tracks pedestrians and is able to predict their positions in the near future. The tracking system is combined with a module which, based on the car's GPS position acquires a map and uses the road information in the map to know where the car can drive. Then the system warns the driver about pedestrians at risk, by combining the information about hazardous areas for pedestrians with a probabilistic position prediction for all observed pedestrians. <s> BIB004 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Online Multi-Object Tracking (MOT) has wide applications in time-critical video analysis scenarios, such as robot navigation and autonomous driving. 
In tracking-by-detection, a major challenge of online MOT is how to robustly associate noisy object detections on a new video frame with previously tracked objects. In this work, we formulate the online MOT problem as decision making in Markov Decision Processes (MDPs), where the lifetime of an object is modeled with a MDP. Learning a similarity function for data association is equivalent to learning a policy for the MDP, and the policy learning is approached in a reinforcement learning fashion which benefits from both advantages of offline-learning and online-learning for data association. Moreover, our framework can naturally handle the birth/death and appearance/disappearance of targets by treating them as state transitions in the MDP while leveraging existing online single object tracking methods. We conduct experiments on the MOT Benchmark [24] to verify the effectiveness of our method. <s> BIB005 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Online Multiple Target Tracking (MTT) is often addressed within the tracking-by-detection paradigm. Detections are previously extracted independently in each frame and then objects trajectories are built by maximizing specifically designed coherence functions. Nevertheless, ambiguities arise in presence of occlusions or detection errors. In this paper we claim that the ambiguities in tracking could be solved by a selective use of the features, by working with more reliable features if possible and exploiting a deeper representation of the target only if necessary. To this end, we propose an online divide and conquer tracker for static camera scenes, which partitions the assignment problem in local subproblems and solves them by selectively choosing and combining the best features. 
The complete framework is cast as a structural learning task that unifies these phases and learns tracker parameters from examples. Experiments on two different datasets highlights a significant improvement of tracking performances (MOTA +10%) over the state of the art. <s> BIB006 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> This paper revisits the classical multiple hypotheses tracking (MHT) algorithm in a tracking-by-detection framework. The success of MHT largely depends on the ability to maintain a small list of potential hypotheses, which can be facilitated with the accurate object detectors that are currently available. We demonstrate that a classical MHT implementation from the 90's can come surprisingly close to the performance of state-of-the-art methods on standard benchmark datasets. In order to further utilize the strength of MHT in exploiting higher-order information, we introduce a method for training online appearance models for each track hypothesis. We show that appearance models can be learned efficiently via a regularized least squares framework, requiring only a few extra operations for each hypothesis branch. We obtain state-of-the-art results on popular tracking-by-detection datasets such as PETS and the recent MOT challenge. <s> BIB007 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> This paper explores a pragmatic approach to multiple object tracking where the main focus is to associate objects efficiently for online and realtime applications. To this end, detection quality is identified as a key factor influencing tracking performance, where changing the detector can improve tracking by up to 18.9%. 
Despite only using a rudimentary combination of familiar techniques such as the Kalman Filter and Hungarian algorithm for the tracking components, this approach achieves an accuracy comparable to state-of-the-art online trackers. Furthermore, due to the simplicity of our tracking method, the tracker updates at a rate of 260 Hz which is over 20x faster than other state-of-the-art trackers. <s> BIB008 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> This paper proposes an alternative formulation to the pure pursuit path tracking algorithm for autonomous driving. The current approach has tendencies to cut corners, and therefore results in poor path tracking accuracy. The proposed method considers not only the relative position of the pursued point, but also the orientation of the path at that point. A steering control law is designed in accordance with the kinematic equations of motion of the vehicle. The effectiveness of the algorithm is then tested by implementing it on an autonomous golf cart, driving in a pedestrian environment. The experimental result shows that the new algorithm reduces the root mean square (RMS) cross track error for the same given pre-programmed path by up to 46 percent, while having virtually no extra computational cost, and still maintaining the chatter free property of the original pure pursuit controller. 
<s> BIB009 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> To help accelerate progress in multi-target, multi-camera tracking systems, we present (i) a new pair of precision-recall measures of performance that treats errors of all types uniformly and emphasizes correct identification over sources of error; (ii) the largest fully-annotated and calibrated data set to date with more than 2 million frames of 1080 p, 60 fps video taken by 8 cameras observing more than 2,700 identities over 85 min; and (iii) a reference software system as a comparison baseline. We show that (i) our measures properly account for bottom-line identity match performance in the multi-camera setting; (ii) our data set poses realistic challenges to current trackers; and (iii) the performance of our system is comparable to the state of the art. <s> BIB010 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Many state-of-the-art approaches to multi-object tracking rely on detecting them in each frame independently, grouping detections into short but reliable trajectory segments, and then further grouping them into full trajectories. This grouping typically relies on imposing local smoothness constraints but almost never on enforcing more global ones on the trajectories.,,In this paper, we propose a non-Markovian approach to imposing global consistency by using behavioral patterns to guide the tracking algorithm. When used in conjunction with state-of-the-art tracking algorithms, this further increases their already good performance on multiple challenging datasets. We show significant improvements both in supervised settings where ground truth is available and behavioral patterns can be learned from it, and in completely unsupervised settings. 
<s> BIB011 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Tracking-by-detection is a common approach to multi-object tracking. With ever increasing performances of object detectors, the basis for a tracker becomes much more reliable. In combination with commonly higher frame rates, this poses a shift in the challenges for a successful tracker. That shift enables the deployment of much simpler tracking algorithms which can compete with more sophisticated approaches at a fraction of the computational cost. We present such an algorithm and show with thorough experiments its potential using a wide range of object detectors. The proposed method can easily run at 100K fps while outperforming the state-of-the-art on the DETRAC vehicle tracking dataset. <s> BIB012 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Reliable prediction of surround vehicle motion is a critical requirement for path planning for autonomous vehicles. In this paper, we propose a unified framework for surround vehicle maneuver classification and motion prediction that exploits multiple cues, namely, the estimated motion of vehicles, an understanding of typical motion patterns of freeway traffic and intervehicle interaction. We report our results in terms of maneuver classification accuracy and mean and median absolute error of predicted trajectories against the ground truth for real traffic data collected using vehicle mounted sensors on freeways. An ablative analysis is performed to analyze the relative importance of each cue for trajectory prediction. Additionally, an analysis of execution time for the components of the framework is presented. Finally, we present multiple case studies analyzing the outputs of our model for complex traffic scenarios. 
<s> BIB013 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> This paper introduces geometry and object shape and pose costs for multi-object tracking in urban driving scenarios. Using images from a monocular camera alone, we devise pairwise costs for object tracks, based on several 3D cues such as object pose, shape, and motion. The proposed costs are agnostic to the data association method and can be incorporated into any optimization framework to output the pairwise data associations. These costs are easy to implement, can be computed in real-time, and complement each other to account for possible errors in a tracking-by-detection framework. We perform an extensive analysis of the designed costs and empirically demonstrate consistent improvement over the state-of-the-art under varying conditions that employ a range of object detectors, exhibit a variety in camera and object motions, and, more importantly, are not reliant on the choice of the association framework. We also show that, by using the simplest of associations frameworks (two-frame Hungarian assignment), we surpass the state-of-the-art in multi-object-tracking on road scenes. More qualitative and quantitative results can be found at the following URL: this https URL <s> BIB014 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Radar sensor has been an integral part of safety critical applications in automotive industry owing to its weather and lighting independence. The advances in radar hardware technology have made it possible to reliably detect objects using radar. Highly accurate radar sensors are able to give multiple radar detections per object. 
This work presents a postprocessing architecture, which is used to cluster and track multiple detections from one object in practical multiple object scenarios. Furthermore, the framework is tested and validated with various driving maneuvers and results are evaluated. <s> BIB015 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Urban-oriented autonomous vehicles require a reliable perception technology to tackle the high amount of uncertainties. The recently introduced compact 3D LIDAR sensor offers a surround spatial information that can be exploited to enhance the vehicle perception. We present a real-time integrated framework of multi-target object detection and tracking using 3D LIDAR geared toward urban use. Our approach combines sensor occlusion-aware detection method with computationally efficient heuristics rule-based filtering and adaptive probabilistic tracking to handle uncertainties arising from sensing limitation of 3D LIDAR and complexity of the target object movement. The evaluation results using real-world pre-recorded 3D LIDAR data and comparison with state-of-the-art works shows that our framework is capable of achieving promising tracking performance in the urban situation. <s> BIB016 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> The problem of tracking multiple objects in a video sequence poses several challenging tasks. For tracking-by-detection, these include object re-identification, motion prediction and dealing with occlusions. We present a tracker (without bells and whistles) that accomplishes tracking without specifically targeting any of these tasks, in particular, we perform no training or optimization on tracking data. 
To this end, we exploit the bounding box regression of an object detector to predict the position of an object in the next frame, thereby converting a detector into a Tracktor. We demonstrate the potential of Tracktor and provide a new state-of-the-art on three multi-object tracking benchmarks by extending it with a straightforward re-identification and camera motion compensation. We then perform an analysis on the performance and failure cases of several state-of-the-art tracking methods in comparison to our Tracktor. Surprisingly, none of the dedicated tracking methods are considerably better in dealing with complex tracking scenarios, namely, small and occluded objects or missing detections. However, our approach tackles most of the easy tracking scenarios. Therefore, we motivate our approach as a new tracking paradigm and point out promising future research directions. Overall, Tracktor yields superior tracking performance than any current tracking method and our analysis exposes remaining and unsolved tracking challenges to inspire future research directions. <s> BIB017 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Online multi-object tracking (MOT) is extremely important for high-level spatial reasoning and path planning for autonomous and highly-automated vehicles. In this paper, we present a modular framework for tracking multiple objects (vehicles), capable of accepting object proposals from different sensor modalities (vision and range) and a variable number of sensors, to produce continuous object tracks. This work is a generalization of the MDP framework for MOT proposed by Xiang et al. , with some key extensions - First, we track objects across multiple cameras and across different sensor modalities. This is done by fusing object proposals across sensors accurately and efficiently. 
Second, the objects of interest (targets) are tracked directly in the real world . This is a departure from traditional techniques where objects are simply tracked in the image plane. Doing so allows the tracks to be readily used by an autonomous agent for navigation and related tasks. To verify the effectiveness of our approach, we test it on real world highway data collected from a heavily sensorized testbed capable of capturing full-surround information. We demonstrate that our framework is well-suited to track objects through entire maneuvers around the ego-vehicle, some of which take more than a few minutes to complete. We also leverage the modularity of our approach by comparing the effects of including/excluding different sensors, changing the total number of sensors, and the quality of object proposals on the final tracking result. <s> BIB018
|
The Kalman filter is a popular method with many applications in navigation and control, particularly for predicting the future path of an object and associating multiple objects with their trajectories, while demonstrating significant robustness to noise. Generally, Kalman-based methods are used for simpler tracking, particularly in online scenarios where the tracker only accesses a limited number of frames at a time, possibly only the current and previous ones. An example of the use of the Kalman filter is BIB008 , where a combination of the aforementioned filter and the Munkres algorithm as the min-cost estimator is used in a simple setup focusing on performance. The method requires designing a dynamic model of the tracked objects' motion and is much more sensitive to the type of detector employed than other approaches; however, once such parameters are well established, the simplicity of the algorithms allows for significant real-time performance. Similar methods are frequently used in simple scenarios where a limited number of frames are available and the detections are accurate. In such situations, the simplicity of the implementations allows for quick response times even on low-spec embedded client devices. In the same spirit of providing an easy, straightforward method that works well for simple scenarios, BIB017 provide an approach based on bounding-box regression. Given multiple object bounding boxes in a sequence of frames, the authors develop a regressor which allows the prediction of bounding box positions in subsequent frames. This comes with some limitations: specifically, it requires that targets move only slightly from frame to frame, and is therefore reliable only in scenarios where the frame rate is high enough and relatively stable. Furthermore, a reliable detector is a must in such situations, and crowded scenes with frequent occlusion events are not handled properly.
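As an illustration of the filtering side of such trackers, the following is a minimal constant-velocity Kalman filter for a bounding-box center, in the spirit of SORT-style methods such as BIB008 . The state layout, noise covariances and unit time step are illustrative assumptions, not the cited implementation.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for a 2-D box center.
# State = [x, y, vx, vy], measurement = [x, y]. All matrix values
# below are illustrative assumptions.
class KalmanCV:
    def __init__(self, x, y, dt=1.0):
        self.x = np.array([x, y, 0.0, 0.0])            # state estimate
        self.P = np.eye(4) * 10.0                      # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)       # motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)       # measurement model
        self.Q = np.eye(4) * 0.01                      # process noise
        self.R = np.eye(2) * 1.0                       # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                              # predicted center

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

In a full tracker, `predict()` would be called once per frame to propose each track's next position, the proposals would be associated with detections (e.g. by the Munkres algorithm), and `update()` would then fold the matched detection back into each track's state.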
As with the previous approach, this is well suited for easy cases where robust image acquisition is available and performance and implementation simplicity are a priority. Unfortunately, noisy images are fairly common in automotive scenarios where, for efficiency and cost reasons, a compromise may be made in terms of the quality and performance of the cameras and sensors. It is often desirable that the software be robust to noise so as to minimize the hardware costs. In BIB004 , tracking is done by a particle filter for each track. The authors use the Munkres assignment algorithm between bounding boxes in the current input image and the previous bounding box for each track. A cost matrix is populated with the cost for associating a bounding box with any given previous bounding box: the Euclidean distance between the box centers plus the size change of the box, as a bounding box is expected to be roughly the same size in two consecutive frames. Since boxes move and change size in bigger increments when the actors are close to the camera, the cost is weighted by the inverse of the box size. This approach is simple, but the assignment algorithm has an O(n³) complexity, which is probably too high for real-time tracking. Various attempts exist for improving noise robustness while maintaining performance, for example in BIB005 . In this case, the lifetime of tracked objects is modeled using a Markov Decision Process (MDP). The policy of the MDP is determined using reinforcement learning, whose objective is to learn a similarity function for associating tracked objects. The positions and lifetimes of the objects are modeled using transitions between MDP states. BIB018 also use MDPs in a more generalized scheme, involving multiple sensors and cameras and fusing the results from multiple MDP formulations.
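The center-distance-plus-size-change cost described above can be sketched as follows, with SciPy's `linear_sum_assignment` standing in for the Munkres algorithm. The `(cx, cy, w, h)` box format and the exact inverse-size weighting are assumptions for illustration, since the cited work does not fully specify them here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def association_cost(prev_boxes, cur_boxes):
    """Pairwise cost: center distance plus size change, weighted by the
    inverse of the previous box size so that near-camera boxes, which
    move in bigger increments, are penalised less.
    Boxes are (cx, cy, w, h); the exact weighting is an assumption."""
    C = np.zeros((len(prev_boxes), len(cur_boxes)))
    for i, (px, py, pw, ph) in enumerate(prev_boxes):
        for j, (cx, cy, cw, ch) in enumerate(cur_boxes):
            dist = np.hypot(cx - px, cy - py)          # center distance
            dsize = abs(cw * ch - pw * ph)             # size change
            C[i, j] = (dist + dsize) / (pw * ph)       # inverse-size weight
    return C

prev = [(10, 10, 20, 40), (100, 50, 10, 20)]           # tracked boxes
cur = [(102, 52, 10, 20), (12, 11, 21, 41)]            # new detections
rows, cols = linear_sum_assignment(association_cost(prev, cur))  # O(n^3)
```

Here the optimal assignment correctly matches each previous box to its slightly shifted counterpart, even though the detections arrive in a different order.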
Note that Markov models can be limiting when it comes to automotive tracking, since a typical scene with multiple interacting targets does not exhibit the Markov property, in which the current state depends only on the previous one. In this regard, the related literature features multiple attempts to improve reliability. BIB013 propose an elaborate pipeline featuring multi-view tracking, ground plane projection, maneuver recognition and trajectory prediction using an assortment of approaches which include Hidden Markov Models and variational Gaussian mixture models. Such efforts show that improving on traditional algorithms often involves sequencing together multiple different methods, each with its own role. As such, there is the risk that the overall resulting approach may be too fragmented and too cumbersome to implement, interpret and improve properly. Works such as BIB011 attempt to circumvent such limitations by proposing alternatives to tried-and-tested Markov models, in this case in the form of a system which determines behavioral patterns in an effort to ensure global consistency for tracking results. There are multiple ways to exploit behavior in order to guide the tracking process, for instance by learning and minimizing/maximizing an energy function that associates behavioral patterns to potential trajectory candidates. This concept is also exemplified by BIB002 , who propose a method based on minimizing a continuous energy function aimed at handling the very large space of potential trajectory solutions, considering that a limited, discrete set of behavior patterns imposes limitations on the energy function. While such a limitation offers better guarantees that a global optimum will eventually be reached, it may not allow a complete representation of the system.
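A minimal sketch of such a track-lifetime MDP, assuming a four-state layout (active, tracked, lost, inactive) similar to the one in BIB005 ; the hand-written transition rules and thresholds below stand in for the learned policy, which in the cited work is obtained via reinforcement learning.

```python
# Track-lifetime MDP sketch: each target cycles through
# Active -> Tracked <-> Lost -> Inactive, and data association reduces
# to the action chosen in the Lost state. Rules/thresholds are assumptions.
ACTIVE, TRACKED, LOST, INACTIVE = "active", "tracked", "lost", "inactive"

class TrackMDP:
    def __init__(self, max_lost=3):
        self.state, self.lost_frames, self.max_lost = ACTIVE, 0, max_lost

    def step(self, detected, similarity=0.0, sim_thresh=0.5):
        if self.state == ACTIVE:                    # birth decision
            self.state = TRACKED if detected else INACTIVE
        elif self.state == TRACKED:
            if not detected:                        # target disappeared
                self.state, self.lost_frames = LOST, 1
        elif self.state == LOST:                    # policy: re-associate?
            if detected and similarity >= sim_thresh:
                self.state, self.lost_frames = TRACKED, 0
            else:
                self.lost_frames += 1
                if self.lost_frames > self.max_lost:
                    self.state = INACTIVE           # death of the target
        return self.state
```

Births, deaths and occlusion-driven appearance/disappearance all become ordinary state transitions, which is what makes the MDP formulation attractive for online tracking.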
An alternative approach which is also designed to handle occlusions is BIB006 , where the divide-and-conquer paradigm is used to partition the solution space into smaller subsets, thereby optimizing the search for the optimal variant. The authors note that while detections and their respective trajectories can be extracted rather efficiently from crowded scenes, ambiguities induced by occlusion events may cause significant detection errors. The proposed solution involves subdividing the object assignment problem into subproblems, followed by a selective combination of the best features found within the subdivisions ( Figure 5 ). The number and types of the features are variable, which lends the approach some flexibility. One particular downside is that once the scene changes, the problem itself also changes and the subdivisions need to be recomputed, making this method unsuitable for scenes acquired from moving cameras. A similar problem is posed in BIB003 , where it is also noted that complex scenes pose tracking difficulties due to occlusion events and similarities among different objects. This issue is handled by subdividing object trajectories into multiple tracklets and subsequently determining a confidence level for each such tracklet, based on its detectability and continuity. Actual trajectories are then formed from tracklets connected based on their confidence values. One advantage of this method in terms of performance is that tracklets can be added to already-determined trajectories in real time as they become available, without requiring complex processing or additional associations. Additionally, linear discriminant analysis is used to differentiate objects based on appearance criteria. The concept of appearance is more extensively exploited by BIB001 , who use motion dynamics to distinguish between targets with similar features.
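The tracklet-confidence idea can be illustrated with a toy scoring function. The particular combination of detectability (fraction of frames with a supporting detection) and a gap penalty for continuity below is an assumption, not the published formula.

```python
# Tracklet confidence sketch: score a tracklet by detectability and
# continuity. The exact formula and the beta parameter are assumptions.
def tracklet_confidence(frames_present, beta=0.2):
    """frames_present: sorted frame indices where the tracklet was detected."""
    if len(frames_present) < 2:
        return 0.0                                      # too short to trust
    span = frames_present[-1] - frames_present[0] + 1
    detectability = len(frames_present) / span          # in (0, 1]
    gaps = sum(b - a - 1 for a, b in zip(frames_present, frames_present[1:]))
    continuity = 1.0 / (1.0 + beta * gaps)              # 1.0 when gap-free
    return detectability * continuity

# A gap-free tracklet scores higher than a fragmented one of equal length.
solid = tracklet_confidence([0, 1, 2, 3, 4])
holed = tracklet_confidence([0, 2, 4, 6, 8])
```

High-confidence tracklets would then anchor the trajectories, with lower-confidence ones attached to them as the sequence progresses.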
They approach the problem by determining a dynamics-based similarity between tracklets using generalized linear assignment. As such, targets are identified using motion cues, which are complementary to more well-established appearance models. While demonstrating adequate performance and accuracy, it is worth mentioning that motion-based features are sensitive to camera movement and are considerably more difficult to use in automotive situations, where motion assessment metrics that work well for static cameras may be less reliable when the cameras are in motion and image jittering and shaking occur.
Figure 5: An example of a divide-and-conquer approach which creates associations between detections BIB006 .
The idea of generating appearance models using traditional means is exemplified in BIB007 , who use a combination of appearance models learned using a regularized least squares framework and a system for generating potential solution candidates in the form of a set of track hypotheses for each successful detection. The hypotheses are arranged in trees, each of which is scored and selected according to the best fit in terms of providing usable trajectories. An alternative to constructing an elaborate appearance model is proposed by BIB014 , who directly involve the shape and geometry of the detections within the tracking process, therefore using shape-based cost functions instead of ones based on pixel clusters. Furthermore, results focusing on tracking-while-driving problems may opt for a vehicle behavior model, or a kinematic model, as opposed to one based on appearance criteria. Examples of such approaches are BIB009 , BIB015 , where the authors build models of vehicle behavior from parameters such as steering angles, headings, offset distances and relative positions. Note that kinematic and motion models are generally more suited to situations where the input consists of data from radar, LiDAR or GPS, as opposed to image sequences.
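As an example of such a kinematic model, the following is a minimal kinematic bicycle step driven by speed and steering angle (cf. BIB009 , BIB015 ); the parameter names, wheelbase value and forward-Euler integration are assumptions for illustration.

```python
import math

# Minimal kinematic bicycle model: advance the vehicle pose one time
# step from speed and steering angle. Wheelbase/dt values are assumptions.
def bicycle_step(x, y, heading, v, steer, wheelbase=2.7, dt=0.1):
    """Return the pose (x, y, heading) after one integration step."""
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    heading += v / wheelbase * math.tan(steer) * dt    # yaw rate
    return x, y, heading

# Driving straight for 10 steps at 10 m/s covers 10 m along x.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = bicycle_step(*pose, v=10.0, steer=0.0)
```

Such a model predicts where a tracked vehicle can physically be in the next frame, which is the role it plays inside the cited behavior-based trackers and path-tracking controllers.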
In particular, attempting to reconstruct visual information from LiDAR point clouds is not a trivial task and may involve elaborate reconstruction, segmentation and registration preprocessing before a suitable detection and tracking pipeline can be designed BIB016 . Another class of results from the related literature follows a different paradigm. Instead of employing complex energy minimization functions and/or statistical modeling, other authors opt for a simpler, faster approach that works with a limited amount of information drawn from the video frames. The motivation is that in some cases the scenarios may be simple enough that a straightforward method which alleviates the need for extended processing may prove just as effective as more complex and elaborate counterparts. An example in this direction is BIB012 , whose method is based on scoring detections by determining overlaps between their bounding boxes across multiple consecutive frames. A scoring system is then developed based on these overlaps and, depending on the resulting scores, trajectories are formed from sets of successive overlaps of the same bounding boxes. Such a method does not directly handle crowded scenes, occlusions or fast-moving objects whose positions are far apart in consecutive frames; however, it may present a suitable compromise in scenarios where performance is critical and the embedded hardware may not allow for more complex processing. An additional important consideration for this type of problem is how the tracking method is evaluated. Most authors use a common, established set of benchmarks which, while having a certain degree of generality, cannot cover every situation that a vehicle might be found in. As such, some authors, such as BIB010 , devote their work to developing performance and evaluation metrics and data sets which cover a wide range of potential problems that may arise in MOT scenarios.
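The overlap-scoring idea can be sketched as a greedy IOU-based linker in the spirit of BIB012 ; the box format, the greedy matching order and the `sigma_iou` threshold are illustrative assumptions rather than the cited implementation.

```python
# IOU-tracker-style linker sketch: a detection extends a track when its
# bounding box overlaps the track's last box above a threshold.
# Boxes are (x1, y1, x2, y2); sigma_iou is an assumed parameter.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def link_frame(tracks, detections, sigma_iou=0.5):
    """Greedily extend each track with its best-overlapping detection."""
    free = list(detections)
    for tr in tracks:
        if not free:
            break
        best = max(free, key=lambda d: iou(tr[-1], d))
        if iou(tr[-1], best) >= sigma_iou:
            tr.append(best)
            free.remove(best)
    tracks.extend([d] for d in free)   # unmatched detections start new tracks
    return tracks

tracks = link_frame([], [(0, 0, 10, 10)])                        # frame 1
tracks = link_frame(tracks, [(1, 0, 11, 10), (50, 50, 60, 60)])  # frame 2
```

Because each frame requires only pairwise overlap tests against the last box of each track, this kind of linker can run at very high frame rates, which is exactly the trade-off the cited work advocates.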
As such, the choice of tracking method is as much a consequence of the diversity of situations and events the method claims to cover as of the evaluation performed by the authors. For example, as was the case for NN-based methods, most evaluations are done for scenes with static cameras, which are only partly relevant for automotive applications. The advantage of the methods presented thus far lies in the fact that they generally outperform their counterparts in terms of the required processing power and computational resources, which is a plus for vehicle-based tracking where the client device is usually a low-power solution. Furthermore, some methods can be extended rather easily as needed, for instance by incorporating additional features or criteria when assembling trajectories from individual detections, by finding an optimizer that ensures additional robustness, or, as is already the case with some of the previously mentioned papers, by incorporating a light-weight supervised classifier in order to boost detection and tracking accuracy.
|
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Based on Graphs and Flow Models <s> We analyze the computational problem of multi-object tracking in video sequences. We formulate the problem using a cost function that requires estimating the number of tracks, as well as their birth and death states. We show that the global solution can be obtained with a greedy algorithm that sequentially instantiates tracks using shortest path computations on a flow network. Greedy algorithms allow one to embed pre-processing steps, such as nonmax suppression, within the tracking algorithm. Furthermore, we give a near-optimal algorithm based on dynamic programming which runs in time linear in the number of objects and linear in the sequence length. Our algorithms are fast, simple, and scalable, allowing us to process dense input data. This results in state-of-the-art performance. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Based on Graphs and Flow Models <s> Data association is an essential component of any human tracking system. The majority of current methods, such as bipartite matching, incorporate a limited-temporal-locality of the sequence into the data association problem, which makes them inherently prone to IDswitches and difficulties caused by long-term occlusion, cluttered background, and crowded scenes.We propose an approach to data association which incorporates both motion and appearance in a global manner. Unlike limited-temporal-locality methods which incorporate a few frames into the data association problem, we incorporate the whole temporal span and solve the data association problem for one object at a time, while implicitly incorporating the rest of the objects. In order to achieve this, we utilize Generalized Minimum Clique Graphs to solve the optimization problem of our data association method. 
Our proposed method yields a better formulated approach to data association which is supported by our superior results. Experiments show the proposed method makes significant improvements in tracking in the diverse sequences of Town Center [1], TUD-crossing [2], TUD-Stadtmitte [2], PETS2009 [3], and a new sequence called Parking Lot compared to the state of the art methods. <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Based on Graphs and Flow Models <s> We cast the problem of tracking several people as a graph partitioning problem that takes the form of an NP-hard binary integer program. We propose a tractable, approximate, online solution through the combination of a multi-stage cascade and a sliding temporal window. Our experiments demonstrate significant accuracy improvement over the state of the art and real-time post-detection performance. <s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Based on Graphs and Flow Models <s> Multi-target tracking is an interesting but challenging task in computer vision field. Most previous data association based methods merely consider the relationships (e.g. appearance and motion pattern similarities) between detections in local limited temporal domain, leading to their difficulties in handling long-term occlusion and distinguishing the spatially close targets with similar appearance in crowded scenes. In this paper, a novel data association approach based on undirected hierarchical relation hypergraph is proposed, which formulates the tracking task as a hierarchical dense neighborhoods searching problem on the dynamically constructed undirected affinity graph. The relationships between different detections across the spatiotemporal domain are considered in a high-order way, which makes the tracker robust to the spatially close targets with similar appearance. 
Meanwhile, the hierarchical design of the optimization process fuels our tracker to long-term occlusion with more robustness. Extensive experiments on various challenging datasets (i.e. PETS2009 dataset, ParkingLot), including both low and high density sequences, demonstrate that the proposed method performs favorably against the state-of-the-art methods. <s> BIB004 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Based on Graphs and Flow Models <s> Multi-object tracking has been recently approached with the min-cost network flow optimization techniques. Such methods simultaneously resolve multiple object tracks in a video and enable modeling of dependencies among tracks. Min-cost network flow methods also fit well within the “tracking-by-detection” paradigm where object trajectories are obtained by connecting per-frame outputs of an object detector. Object detectors, however, often fail due to occlusions and clutter in the video. To cope with such situations, we propose to add pairwise costs to the min-cost network flow framework. While integer solutions to such a problem become NP-hard, we design a convex relaxation solution with an efficient rounding heuristic which empirically gives certificates of small suboptimality. We evaluate two particular types of pairwise costs and demonstrate improvements over recent tracking methods in real-world video sequences. <s> BIB005 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Based on Graphs and Flow Models <s> The past decade has witnessed significant progress in object detection and tracking in videos. In this paper, we present a collaborative model between a pre-trained object detector and a number of single-object online trackers within the particle filtering framework. 
For each frame, we construct an association between detections and trackers, and treat each detected image region as a key sample, for online update, if it is associated to a tracker. We present a motion model that incorporates the associated detections with object dynamics. Furthermore, we propose an effective sample selection scheme to update the appearance model of each tracker. We use discriminative and generative appearance models for the likelihood function and data association, respectively. Experimental results show that the proposed scheme generally outperforms state-of-the-art methods. <s> BIB006 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Based on Graphs and Flow Models <s> The majority of Multi-Object Tracking (MOT) algorithms based on the tracking-by-detection scheme do not use higher order dependencies among objects or tracklets, which makes them less effective in handling complex scenarios. In this work, we present a new near-online MOT algorithm based on non-uniform hypergraph, which can model different degrees of dependencies among tracklets in a unified objective. The nodes in the hypergraph correspond to the tracklets and the hyperedges with different degrees encode various kinds of dependencies among them. Specifically, instead of setting the weights of hyperedges with different degrees empirically, they are learned automatically using the structural support vector machine algorithm (SSVM). Several experiments are carried out on various challenging datasets (i.e., PETS09, ParkingLot sequence, SubwayFace, and MOT16 benchmark), to demonstrate that our method achieves favorable performance against the state-of-the-art MOT methods. <s> BIB007
|
A significant number of results from the related literature present the tracking solution as a graph search problem or otherwise model the tracking scene using a dependency graph or flow model. There are multiple advantages to such an approach: graph-based models suit the multi-tracking problem well since the tracking scene, like a graph, is formed from inter-related entities, each with a distinct set of parameter values. The relationships that can be determined among tracked objects or a set of trajectory candidates can be modeled using edges with associated costs. Graph theory is well understood, and graph traversal and search algorithms are widely documented, with implementations readily available on most platforms. Likewise, flow models can be seen as an alternative interpretation of graphs, with node dependencies modeled through operators and dependency functions, forming an interconnected system. Unlike a traditional graph, data in a flow model progresses in an established direction: it starts from initial components where acquired data is handled as input, traverses intermediate nodes where it is processed in some manner, and ends up at terminal nodes where the results are obtained and exploited. Like graphs, flow models allow for loops, which implement refinement techniques and in-depth processing via multiple local iterations. Most methods which exploit graphs and flow models attempt to solve the tracking problem using a minimum-path or minimum-cost approach. An example in this sense is BIB005 , where multi-object tracking is modeled using a network flow model subjected to min-cost optimization. Each path through the flow model represents a potential trajectory, formed by concatenating individual detections from each frame. Occlusion events are modeled as multiple potential directions arising from the occlusion node, and the proposed solution handles the resulting ambiguities by incorporating pairwise costs into the flow network.
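As a toy illustration of the min-cost path idea behind such formulations, the sketch below links per-frame detections with squared-displacement transition costs and extracts node-disjoint trajectories greedily, cheapest first. The cost function, the detections and the greedy extraction are illustrative assumptions, not the exact algorithm of BIB005 (which adds pairwise costs and solves a convex relaxation):

```python
import math

def link_cost(d1, d2):
    """Illustrative transition cost between detections in consecutive
    frames (squared displacement stands in for motion/appearance terms)."""
    return (d1[0] - d2[0]) ** 2 + (d1[1] - d2[1]) ** 2

def cheapest_path(frames, used):
    """Dynamic programming over the detection graph: one node per detection,
    edges only between consecutive frames. Returns (cost, per-frame indices)
    of the cheapest full-length trajectory avoiding already-used detections,
    or None if no such trajectory exists."""
    INF = math.inf
    best = [[(0.0 if (0, i) not in used else INF, None)
             for i in range(len(frames[0]))]]
    for t in range(1, len(frames)):
        row = []
        for j, det in enumerate(frames[t]):
            if (t, j) in used:
                row.append((INF, None))
                continue
            row.append(min((best[t - 1][i][0] + link_cost(prev, det), i)
                           for i, prev in enumerate(frames[t - 1])))
        best.append(row)
    cost, i = min((c, i) for i, (c, _) in enumerate(best[-1]))
    if cost == INF:
        return None
    path = []
    for t in range(len(frames) - 1, -1, -1):   # backtrack via backpointers
        path.append(i)
        i = best[t][i][1]
    return cost, path[::-1]

def greedy_tracks(frames, k):
    """Extract up to k node-disjoint trajectories, cheapest first."""
    used, tracks = set(), []
    for _ in range(k):
        result = cheapest_path(frames, used)
        if result is None:
            break
        _, path = result
        tracks.append(path)
        used.update(enumerate(path))   # claim (frame, detection) pairs
    return tracks

# Two well-separated objects observed over three frames.
frames = [[(0, 0), (5, 5)], [(1, 0), (6, 5)], [(2, 1), (7, 5)]]
print(greedy_tracks(frames, 2))   # [[1, 1, 1], [0, 0, 0]]
```

In a full min-cost flow formulation the paths would be optimized jointly rather than greedily, but the graph structure — detections as nodes, inter-frame links as weighted edges — is the same.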
A more straightforward solution is presented by , who solve multi-tracking using dynamic programming and formulate the scenario as a linear program. They subsequently handle the large number of resulting variables and constraints using k-shortest paths. One advantage of this method seems to be that it allows for reliable tracking from only four overlapping low-resolution, low-fps video streams, which is in line with the cost-effectiveness required by automotive applications. Another related solution is BIB001 , where a cost function is developed by estimating the number of potential trajectories as well as their origins and end frames. The scenario is then handled as a shortest-path problem in a graph, which the authors solve using a greedy algorithm. This approach has the advantage that it uses well-established methods, therefore affording some level of simplicity to understanding and implementing the algorithms. In BIB003 , a similar graph-based solution divides the problem into multiple subproblems by exploring several graph partitioning mechanisms and uses greedy search based on Adaptive Label Iterative Conditional Modes. Partitioning allows for successful disassociation of object identities in circumstances where said identities might be confused with one another. Also, methods based on solution space partitioning have the advantage of being highly scalable, therefore allowing fine-tuning of their parameters in order to achieve a trade-off between accuracy and performance. Multiple extensions of the graph-based formulation exist in the related literature, for instance when multiple other criteria are incorporated into the search method. BIB002 incorporate appearance and motion-based cues into their data association mechanism, which is modeled using a global graph representation and makes use of Generalized Minimum Clique Graphs to locate representative tracklets in each frame.
Among other advantages, this allows for a longer time span to be handled, albeit for each object individually. Another related approach is provided in BIB006 , where the solution consists of a collaborative model which makes use of a detector and multiple individual trackers, whose interdependencies are determined by finding associations with key samples from each detected region in the processed frames. These interdependencies are further exploited via a sample selection method to generate and update appearance models for each tracker. As extensions of the more traditional graph-based models which use greedy algorithms to search for suitable candidate solutions and update the resulting models in subsequent processing steps, some authors handle the problem using hypergraphs.
Figure 6: Generation of trajectories by determining higher-order dependencies between tracklets via a hypergraph model, with hyperedge weights determined using a learning method BIB007
These extend the concept of classical graphs by generalizing the role of graph edges. In a conventional graph an edge joins two nodes, while in a hypergraph edges are sets of arbitrary combinations of nodes. Therefore an edge in a hypergraph connects multiple nodes, instead of just two as in the traditional case. This structure has the potential to form more extensive and complete models using a singular unified concept and to alleviate the need for costly solution space partitioning or subdivision mechanisms. Another use of the hypergraph concept is provided by BIB004 , who build a hypergraph-based model to generate meaningful data associations capable of handling the problem of targets with similar appearance and in close proximity to one another, a situation frequently encountered in crowded scenes.
The hypergraph model allows for the formulation of higher-order relationships among various detections, which, as mentioned in previous sections, have the potential to ensure robustness against simple transformations, noise and various other spatial and temporal inaccuracies. The method is based on grouping dense neighborhoods of tracklets hierarchically, forming multiple layers which enable more fine-grained descriptions of the relationships that exist in each such neighborhood. A related but much more recent result BIB007 is also based on the notion that hypergraphs allow for determining higher-order dependencies among tracklets, but in this case the parameters of the hypergraph edges are learned using an SSVM (structural support vector machine), as opposed to being determined empirically. Trajectories are established as a result of determining higher-order dependencies by rearranging the edges of the hypergraph so as to conform to several constraints and affinity criteria. While demonstrating robustness to affine transforms and noise, such methods still cannot handle complex crowded scenes with multiple occlusions and, compared to previously mentioned methods, suffer some penalties in terms of performance, since updating the various parameters of hypergraph edges can be computationally costly.
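The hypergraph abstraction itself is easy to state in code. The sketch below is a deliberately simplified stand-in for the methods above: hyperedges are node sets scored by an affinity value (hand-set here, whereas BIB007 learns hyperedge weights with a structural SVM), and a greedy pass keeps the highest-affinity non-conflicting hyperedges so that each tracklet joins a single trajectory:

```python
def select_hyperedges(hyperedges):
    """hyperedges: list of (affinity, frozenset of tracklet ids).
    Greedily keep non-overlapping hyperedges by descending affinity,
    so every tracklet is claimed by at most one trajectory group."""
    chosen, covered = [], set()
    for affinity, nodes in sorted(hyperedges, key=lambda e: e[0], reverse=True):
        if nodes.isdisjoint(covered):
            chosen.append((affinity, nodes))
            covered |= nodes
    return chosen

# Toy example: tracklets a..e. A 3-node hyperedge can encode third-order
# motion consistency; 2-node hyperedges reduce to ordinary pairwise affinity.
hyperedges = [
    (0.9, frozenset({"a", "b", "c"})),  # smooth three-tracklet continuation
    (0.8, frozenset({"c", "d"})),       # conflicts with the edge above on "c"
    (0.6, frozenset({"d", "e"})),
]
print([sorted(nodes) for _, nodes in select_hyperedges(hyperedges)])
# [['a', 'b', 'c'], ['d', 'e']]
```

The greedy selection is where such a sketch diverges most from the real methods, which solve a joint (learned) objective over all hyperedges rather than picking them independently.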
|
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Trajectory Prediction Methods <s> Predicting other traffic participants trajectories is a crucial task for an autonomous vehicle, in order to avoid collisions on its planned trajectory. It is also necessary for many Advanced Driver Assistance Systems, where the ego-vehicle's trajectory has to be predicted too. Even if trajectory prediction is not a deterministic task, it is possible to point out the most likely trajectory. This paper presents a new trajectory prediction method which combines a trajectory prediction based on Constant Yaw Rate and Acceleration motion model and a trajectory prediction based on maneuver recognition. It takes benefit on the accuracy of both predictions respectively a short-term and long-term. The defined Maneuver Recognition Module selects the current maneuver from a predefined set by comparing the center lines of the road's lanes to a local curvilinear model of the path of the vehicle. The overall approach was tested on prerecorded human real driving data and results show that the Maneuver Recognition Module has a high success rate and that the final trajectory prediction has a better accuracy. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Trajectory Prediction Methods <s> This paper describes an integrated Bayesian approach to maneuver-based trajectory prediction and criticality assessment that is not limited to specific driving situations. First, a distribution of high-level driving maneuvers is inferred for each vehicle in the traffic scene via Bayesian inference. For this purpose, the domain is modeled in a Bayesian network with both causal and diagnostic evidences and an additional trash maneuver class, which allows the detection of irrational driving behavior and the seamless application from highly structured to nonstructured environments. 
Subsequently, maneuver-based probabilistic trajectory prediction models are employed to predict each vehicle's configuration forward in time. Random elements in the designed models consider the uncertainty within the future driving maneuver execution of human drivers. Finally, the criticality time metric time-to-critical-collision-probability (TTCCP) is introduced and estimated via Monte Carlo simulations. The TTCCP is a generalization of the time-to-collision (TTC) in arbitrary uncertain multiobject driving environments and valid for longer prediction horizons. All uncertain predictions of all maneuvers of every vehicle are taken into account. Additionally, the criticality assessment considers arbitrarily shaped static environments, and it is shown how parametric free space (PFS) maps can advantageously be utilized for this purpose. <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Trajectory Prediction Methods <s> To safely and efficiently navigate through complex traffic scenarios, autonomous vehicles need to have the ability to predict the future motion of surrounding vehicles. Multiple interacting agents, the multi-modal nature of driver behavior, and the inherent uncertainty involved in the task make motion prediction of surrounding vehicles a challenging problem. In this paper, we present an LSTM model for interaction aware motion prediction of surrounding vehicles on freeways. Our model assigns confidence values to maneuvers being performed by vehicles and outputs a multi-modal distribution over future motion based on them. We compare our approach with the prior art for vehicle motion prediction on the publicly available NGSIM US-101 and I-80 datasets. Our results show an improvement in terms of RMS values of prediction error. We also present an ablative analysis of the components of our proposed model and analyze the predictions made by the model in complex traffic scenarios. 
<s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Trajectory Prediction Methods <s> Predicting trajectories of pedestrians is quintessential for autonomous robots which share the same environment with humans. In order to effectively and safely interact with humans, trajectory prediction needs to be both precise and computationally efficient. In this work, we propose a convolutional neural network (CNN) based human trajectory prediction approach. Unlike more recent LSTM-based models which attend sequentially to each frame, our model supports increased parallelism and effective temporal representation. The proposed compact CNN model is faster than the current approaches yet still yields competitive results. <s> BIB004
|
Autonomous cars need the ability to predict the future motion of surrounding vehicles in order to navigate through complex traffic scenarios safely and efficiently. The existence of multiple interacting agents, the multi-modal nature of driver behavior, and the inherent uncertainty involved make motion prediction a challenging problem. An autonomous vehicle deployed in complex traffic needs to balance two factors: the safety of humans in and around it, and efficient motion without stalling traffic. The vehicle should also take the initiative, such as deciding when to change lanes, cross unsignalized intersections, or overtake other vehicles BIB003 . This requires the autonomous car to have some ability to reason about the future state of the environment. Other difficulties stem from the requirement that such a system must be sensitive to exceptional, rarely occurring situations. It should not only consider physical quantities but also information about the drivers' intentions and, because of the great number of possibilities involved, it should take into account only a reasonable subset of possible future scene evolutions BIB002 . One way to plan a safe maneuver is to understand the intent of other traffic participants, i.e. the combination of discrete high-level behaviors as well as the continuous trajectories describing future motion . Predicting other traffic participants' trajectories is a crucial task for an autonomous vehicle in order to avoid collisions on its planned trajectory. Even if trajectory prediction is not a deterministic task, it is possible to specify the most likely trajectory BIB001 . Certain considerations about vehicle dynamics can provide partial knowledge about the future. For instance, a vehicle moving at a given speed needs a certain time to fully stop, and the curvature of its trajectory has to stay under a certain value in order to maintain stability.
On the other hand, even if each driver has their own habits, it is possible to identify some common driving maneuvers based on traffic rules, or to assume that drivers keep some level of comfort while driving BIB001 . In order to effectively and safely interact with humans, trajectory prediction needs to be both precise and computationally efficient BIB004 . A recent white paper states that a solution for the prediction and planning tasks of an autonomous car may consider a combination of the following properties:
• Predicting only a short time into the future. The likelihood of an accurate prediction is inversely related to the time between the current state and the point in time it refers to, i.e. the further the predicted state is in the future, the less likely it is that the prediction is correct;
• Relying on physics where possible, using dynamic models of road users that form the basis of motion prediction. A classification of relevant objects is a necessary input to be able to discriminate between various models;
• Considering the compliance of other road users with traffic rules to a valid extent. For example, the ego car should cross intersections with green traffic lights without stopping, relying on other road users to follow the rule of stopping at red lights. In addition to this, foreseeable non-compliant behavior with respect to traffic rules, e.g. pedestrians crossing red lights in urban areas, needs to be taken into account, supported by defensive drive planning;
• Predicting the situation to further increase the likelihood of road user prediction being correct. For example, the future behavior of other road users when driving in a traffic jam differs greatly from their behavior in flowing traffic.
Further, it asserts that the interpretation and prediction system should understand not only the worst-case behavior of other road users (possibly vulnerable ones, i.e. those who may not obey all traffic rules), but their worst-case reasonable behavior.
This allows it to make reasonable and physically possible assumptions about other road users. The automated driving system should make a naturalistic assumption, just as humans do, about the reasonable behavior of others. These assumptions need to be adaptable to local requirements so that they meet locally different "driving cultures".
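The "rely on physics where possible" guideline rests on elementary kinematics: an assumed maximum deceleration bounds the stopping time and distance, and a maximum comfortable lateral acceleration bounds the trajectory curvature. The formulas are standard constant-acceleration kinematics; the numeric bounds below are illustrative assumptions, not calibrated vehicle parameters:

```python
def stopping_time_and_distance(v, a_max):
    """Time and distance to a full stop from speed v (m/s) under constant
    deceleration a_max (m/s^2): t = v/a_max, d = v^2 / (2*a_max)."""
    t = v / a_max
    return t, v * t - 0.5 * a_max * t * t   # equals v**2 / (2 * a_max)

def max_curvature(v, lat_acc_max):
    """Curvature bound kappa = a_lat / v^2 implied by a maximum
    comfortable lateral acceleration at speed v."""
    return lat_acc_max / (v * v)

v = 20.0   # m/s, i.e. 72 km/h (assumed example speed)
print(stopping_time_and_distance(v, a_max=8.0))   # (2.5, 25.0) -> 2.5 s, 25 m
print(max_curvature(v, lat_acc_max=4.0))          # 0.01 1/m, i.e. radius >= 100 m
```

Bounds like these give the short-horizon "partial knowledge" mentioned above: whatever maneuver a driver intends, the predicted states a second ahead must stay inside the physically reachable set.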
|
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Problem Description <s> Predicting other traffic participants trajectories is a crucial task for an autonomous vehicle, in order to avoid collisions on its planned trajectory. It is also necessary for many Advanced Driver Assistance Systems, where the ego-vehicle's trajectory has to be predicted too. Even if trajectory prediction is not a deterministic task, it is possible to point out the most likely trajectory. This paper presents a new trajectory prediction method which combines a trajectory prediction based on Constant Yaw Rate and Acceleration motion model and a trajectory prediction based on maneuver recognition. It takes benefit on the accuracy of both predictions respectively a short-term and long-term. The defined Maneuver Recognition Module selects the current maneuver from a predefined set by comparing the center lines of the road's lanes to a local curvilinear model of the path of the vehicle. The overall approach was tested on prerecorded human real driving data and results show that the Maneuver Recognition Module has a high success rate and that the final trajectory prediction has a better accuracy. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Problem Description <s> We introduce a Deep Stochastic IOC RNN Encoder-decoder framework, DESIRE, for the task of future predictions of multiple interacting agents in dynamic scenes. DESIRE effectively predicts future locations of objects in multiple scenes by 1) accounting for the multi-modal nature of the future prediction (i.e., given the same context, future may vary), 2) foreseeing the potential future outcomes and make a strategic prediction based on that, and 3) reasoning not only from the past motion history, but also from the scene context as well as the interactions among the agents. 
DESIRE achieves these in a single end-to-end trainable neural network model, while being computationally efficient. The model first obtains a diverse set of hypothetical future prediction samples employing a conditional variational auto-encoder, which are ranked and refined by the following RNN scoring-regression module. Samples are scored by accounting for accumulated future rewards, which enables better long-term strategic decisions similar to IOC frameworks. An RNN scene context fusion module jointly captures past motion histories, the semantic scene context and interactions among multiple agents. A feedback mechanism iterates over the ranking and refinement to further boost the prediction accuracy. We evaluate our model on two publicly available datasets: KITTI and Stanford Drone Dataset. Our experiments show that the proposed model significantly improves the prediction accuracy compared to other baseline methods. <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Problem Description <s> Recent algorithmic improvements and hardware breakthroughs resulted in a number of success stories in the field of AI impacting our daily lives. However, despite its ubiquity AI is only just starting to make advances in what may arguably have the largest impact thus far, the nascent field of autonomous driving. In this work we discuss this important topic and address one of crucial aspects of the emerging area, the problem of predicting future state of autonomous vehicle's surrounding necessary for safe and efficient operations. We introduce a deep learning-based approach that takes into account current state of traffic actors and produces rasterized representations of each actor's vicinity. The raster images are then used by deep convolutional models to infer future movement of actors while accounting for inherent uncertainty of the prediction task. 
Extensive experiments on real-world data strongly suggest benefits of the proposed approach. Moreover, following successful tests the system was deployed to a fleet of autonomous vehicles. <s> BIB003
|
To tackle the trajectory prediction task, one should assume access to real-time data streams coming from sensors such as lidar, radar or camera installed aboard the self-driving vehicle, and that there already exists a functioning tracking system that allows detection and tracking of traffic actors in real time. Examples of pieces of information that describe an actor are: bounding box, position, velocity, acceleration, heading, and heading change rate. Mapping data of the area where the ego car is driving may also be needed, i.e. road and crosswalk locations, lane directions, and other relevant map information. Past and future positions are represented in an ego-car-centric coordinate system. Also, one needs to model the static context with road and crosswalk polygons, as well as lane directions and boundaries: road polygons describe the drivable surface, lanes describe the driving path, and crosswalk polygons describe the road surface used for pedestrian crossing BIB003 . An example of available information on which the prediction module can operate is presented in Figure 7 . More formally, considering the future as a consequence of a series of past events, a prediction entails reasoning about probable outcomes based on past observations BIB002 . Let X_t^i be a vector with the spatial coordinates of actor i at observation time t, with t ∈ {1, 2, ..., T_obs}, where T_obs is the present time step in the series of observations. The past trajectory of actor i is the sequence X^i = {X_1^i, X_2^i, ..., X_{T_obs}^i}. Based on the past trajectories of all actors, one needs to estimate the future trajectories of all actors, i.e. the sequences Y^i = {Y_{T_obs+1}^i, ..., Y_{T_pred}^i}, where Y_t^i denotes the predicted coordinates of actor i at future time step t and T_pred is the prediction horizon. It is also possible to first generate the trajectories in the Frenet frame along the current lane of the vehicle, then convert them to the initial Cartesian coordinate system BIB001 . The Frenet coordinate system is useful to simplify the motion equations when cars travel on curved roads. It consists of longitudinal and lateral axes, denoted as s and d, respectively.
The curve that goes through the center of the road determines the s axis and indicates how far along the car is on the road. The d axis indicates the lateral displacement of the car. d is 0 on the center of the road and its absolute value increases with the distance from the center. Also, it can be positive or negative, depending on the side of the road.
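A minimal sketch of the Cartesian-to-Frenet conversion described above, assuming the centerline is available as a polyline of waypoints: the point is projected onto its nearest segment, s accumulates arc length along the polyline, and d is the signed lateral offset (positive to the left of the travel direction, a convention assumed here; production systems use smooth reference curves rather than raw polylines):

```python
import math

def cartesian_to_frenet(x, y, centerline):
    """centerline: list of (x, y) waypoints along the road center.
    Returns (s, d): arc length along the line and signed lateral offset."""
    best = None
    s_base = 0.0
    for (x1, y1), (x2, y2) in zip(centerline, centerline[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg_len = math.hypot(dx, dy)
        # Parameter of the orthogonal projection, clamped to the segment.
        t = max(0.0, min(1.0, ((x - x1) * dx + (y - y1) * dy) / seg_len ** 2))
        px, py = x1 + t * dx, y1 + t * dy
        dist = math.hypot(x - px, y - py)
        if best is None or dist < best[0]:
            # Sign of d from the cross product of the segment direction
            # and the offset vector: positive means left of travel.
            side = dx * (y - py) - dy * (x - px)
            d = math.copysign(dist, side) if side != 0 else 0.0
            best = (dist, s_base + t * seg_len, d)
        s_base += seg_len
    _, s, d = best
    return s, d

# Straight centerline along the x-axis; a point 2 m left of it at x = 3.
print(cartesian_to_frenet(3.0, 2.0, [(0, 0), (5, 0), (10, 0)]))  # (3.0, 2.0)
```

The inverse conversion (Frenet back to Cartesian, as used when trajectories are generated in the lane frame) walks the polyline to arc length s and offsets by d along the local normal.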
|
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Classification of Methods <s> In this work, a framework for motion prediction of vehicles and safety assessment of traffic scenes is presented. The developed framework can be used for driver assistant systems as well as for autonomous driving applications. In order to assess the safety of the future trajectories of the vehicle, these systems require a prediction of the future motion of all traffic participants. As the traffic participants have a mutual influence on each other, the interaction of them is explicitly considered in this framework, which is inspired by an optimization problem. Taking the mutual influence of traffic participants into account, this framework differs from the existing approaches which consider the interaction only insufficiently, suffering reliability in real traffic scenes. For motion prediction, the collision probability of a vehicle performing a certain maneuver, is computed. Based on the safety evaluation and the assumption that drivers avoid collisions, the prediction is realized. Simulation scenarios and real-world results show the functionality. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Classification of Methods <s> With the objective to improve road safety, the automotive industry is moving toward more “intelligent” vehicles. One of the major challenges is to detect dangerous situations and react accordingly in order to avoid or mitigate accidents. This requires predicting the likely evolution of the current traffic situation, and assessing how dangerous that future situation might be. This paper is a survey of existing methods for motion prediction and risk assessment for intelligent vehicles. The proposed classification is based on the semantics used to define motion and risk. 
We point out the tradeoff between model completeness and real-time constraints, and the fact that the choice of a risk assessment method is influenced by the selected motion model. <s> BIB002
|
There are several classification approaches presented in the literature regarding trajectory prediction methods. An online tutorial distinguishes the following categories:
1. Model-based approaches. They identify common behaviors of the vehicle, e.g. changing lane, turning left, turning right, determining maximum turning speed, etc. A model is created for each possible trajectory the vehicle can follow, and then probabilities are computed for all these models. One of the simplest approaches to compute the probabilities is the autonomous multiple model (AMM) algorithm. First, the states of the vehicle at times t − 1 and t are observed. Then each process model is applied at time t − 1, resulting in the expected states for time t. The likelihood of each expected state given the observed state is then evaluated, and the probability of the corresponding model at time t is computed. Finally, the model with the highest probability is selected;
2. Data-driven approaches. In these approaches a black box model (usually a neural network) is trained using a large quantity of training data. After training, the model is applied to the observed behavior in order to provide the prediction. The training of the model is usually computationally expensive and is performed offline. On the other hand, the prediction of the trajectories, once the model is trained, is quite fast and can be made online, i.e. in real time. Some of these methods also employ unsupervised clustering of trajectories using e.g. spectral clustering or agglomerative clustering, and define a trajectory pattern for each cluster. In the prediction stage, the vehicle's partial trajectory is observed, it is compared with the prototype trajectories, and then the trajectory most similar to a prototype is predicted.
Figure 8: Classification of motion models BIB002
A survey BIB002 proposes a different classification based on three increasingly abstract levels, summarized in Figure 8. 1. Physics-based motion models.
They represent vehicles as dynamic entities governed by the laws of physics. Future motion is predicted using dynamic and kinematic models linking some control inputs (e.g. steering, acceleration), car properties (e.g. weight) and external conditions (e.g. friction coefficient of the road surface) to the evolution of the state of the vehicle (e.g. position, heading, speed). Advantages. Such models are very often used for trajectory prediction and collision risk estimation in the context of road safety. They are more or less complex depending on how fine-grained the representation of the dynamics and kinematics of the vehicle is, how uncertainties are handled, whether or not the geometry of the road is taken into account, etc. Disadvantages. Since they only rely on the low level properties of motion, physics-based motion models are limited to short-term (e.g., less than a second) motion prediction. Typically, they are unable to anticipate any change in the motion of the car caused by the execution of a particular maneuver (e.g., slowing down, turning at constant speed, then accelerating to make a turn at an intersection) or changes caused by external factors (e.g., slowing down because of a vehicle in front); 2. Maneuver-based motion models. They represent vehicles as independent maneuvering entities, i.e. they assume that the motion of a vehicle on the road network corresponds to a series of maneuvers executed independently from the other vehicles. Trajectory prediction is based on the early recognition of the maneuvers that drivers intend to perform. If one can identify the maneuver intention of a driver, one can assume that the future motion of the vehicle will match that maneuver. Advantages. Because of the a priori information, the derived trajectories are more relevant and reliable in the long term than the ones derived from physics-based motion models. Maneuver-based motion models are based either on prototype trajectories or on maneuver intention estimation. 
Disadvantages. In practice, the assumption that vehicles move independently from each other does not hold. Vehicles share the road with others, and the maneuvers performed by one vehicle necessarily influence the maneuvers of others. Inter-vehicle dependencies are particularly strong at road intersections, where priority rules force vehicles to take into account the maneuvers performed by the others. Disregarding these dependencies can lead to erroneous interpretations of the situations and to poor evaluations of the risk; 3. Interaction-aware motion models. They represent vehicles as maneuvering entities which interact with one another, i.e. the motion of a vehicle is assumed to be influenced by the motion of the other vehicles in the scene. Advantages. Taking into account the dependencies between the vehicles leads to a better interpretation of their motion compared to the maneuver-based motion models. As a result, they contribute to a better understanding of the situation and a more reliable evaluation of the risk. They are based either on prototype trajectories or on dynamic Bayesian networks. The interaction-aware motion models are the most comprehensive models proposed so far. They allow longer-term predictions compared to physics-based motion models, and are more reliable than maneuver-based motion models since they account for the dependencies between the vehicles. Disadvantages. Computing all the potential trajectories of the vehicles exhaustively is computationally expensive and may not be compatible with real-time usage. A classification somewhat similar to the previous two is mentioned in BIB001 , which distinguishes the following motion prediction categories of methods: 1. Learning-based motion prediction: learning from the observation of the past movements of vehicles in order to predict the future motion; 2. Model-based motion prediction: using motion models; 3. Motion prediction with a cognitive architecture: trying to reproduce human behavior.
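The AMM-style model-probability update outlined earlier in this section can be sketched in a few lines: each candidate motion model predicts the next state, a Gaussian likelihood of the observation scores that prediction, and the normalized scores become the updated model probabilities. The one-dimensional state, the two candidate models and the noise level below are illustrative assumptions:

```python
import math

def gaussian_likelihood(residual, sigma):
    """Likelihood of a prediction residual under zero-mean Gaussian noise."""
    return math.exp(-0.5 * (residual / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def amm_step(models, probs, x_prev, x_obs, dt=0.1, sigma=0.5):
    """models: dict name -> predict(x_prev, dt); probs: prior probabilities.
    Returns the posterior model probabilities after one observation."""
    post = {}
    for name, predict in models.items():
        residual = x_obs - predict(x_prev, dt)
        post[name] = probs[name] * gaussian_likelihood(residual, sigma)
    z = sum(post.values())
    return {name: p / z for name, p in post.items()}

models = {
    "stopped":           lambda x, dt: x,            # position holds
    "constant_velocity": lambda x, dt: x + 10 * dt,  # assumed 10 m/s
}
probs = {"stopped": 0.5, "constant_velocity": 0.5}
probs = amm_step(models, probs, x_prev=0.0, x_obs=1.0)   # moved ~1 m in 0.1 s
print(max(probs, key=probs.get))   # constant_velocity
```

Iterating this step over successive observations, and selecting the most probable model at each time, is the essence of the AMM selection scheme; richer variants (e.g. IMM filters) additionally mix the per-model state estimates.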
Overall, the main difficulty faced by these approaches is that in order to reliably estimate the risk of a traffic situation it is necessary to reason at a high level about a set of interacting maneuvering entities, taking into account uncertainties associated with the data and the models. This high-level reasoning is computationally expensive, and not always compatible with real-time risk estimation. For this reason, a lot of effort has been put recently into designing novel, more efficient risk estimation algorithms which do not need to predict all the possible future trajectories of all the vehicles in the scene and check for collisions. Instead, algorithms have been proposed which focus on the most relevant trajectories to speed up the computation, or to use alternative risk indicators such as conflicts between maneuver intentions. The choice of a risk assessment method is tightly coupled with the choice of a motion model. Therefore, the authors of BIB002 believe that major improvements in this field will be brought by approaches which jointly address vehicle motion modeling and risk estimation. In the rest of this section, we present some specific approaches classified by their main prediction "paradigm", namely neural networks and other methods, most of which use some kind of stochastic representation of the actors' behavior in the environment. This is especially useful since some works use the same model to address different abstraction levels of the trajectory prediction task.
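As a concrete instance of the alternative risk indicators mentioned above, the classical time-to-collision metric reduces, in the simplest same-lane case, to the gap divided by the closing speed; the probabilistic generalizations surveyed in this section build on exactly this quantity. A minimal sketch under that simplifying assumption, with illustrative speeds:

```python
import math

def time_to_collision(gap, v_follower, v_leader):
    """Same-lane TTC in seconds: gap (m) over closing speed (m/s).
    Infinite when the follower is not closing in on the leader."""
    closing = v_follower - v_leader
    if closing <= 0:
        return math.inf   # no predicted collision under constant speeds
    return gap / closing

print(time_to_collision(gap=30.0, v_follower=25.0, v_leader=15.0))  # 3.0 s
print(time_to_collision(gap=30.0, v_follower=15.0, v_leader=25.0))  # inf
```

A small TTC flags an imminent conflict without enumerating full trajectories, which is precisely why such indicators are attractive for real-time risk estimation.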
|
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Other Techniques <s> Predicting other traffic participants trajectories is a crucial task for an autonomous vehicle, in order to avoid collisions on its planned trajectory. It is also necessary for many Advanced Driver Assistance Systems, where the ego-vehicle's trajectory has to be predicted too. Even if trajectory prediction is not a deterministic task, it is possible to point out the most likely trajectory. This paper presents a new trajectory prediction method which combines a trajectory prediction based on Constant Yaw Rate and Acceleration motion model and a trajectory prediction based on maneuver recognition. It takes benefit on the accuracy of both predictions respectively a short-term and long-term. The defined Maneuver Recognition Module selects the current maneuver from a predefined set by comparing the center lines of the road's lanes to a local curvilinear model of the path of the vehicle. The overall approach was tested on prerecorded human real driving data and results show that the Maneuver Recognition Module has a high success rate and that the final trajectory prediction has a better accuracy. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Other Techniques <s> In this work, a framework for motion prediction of vehicles and safety assessment of traffic scenes is presented. The developed framework can be used for driver assistant systems as well as for autonomous driving applications. In order to assess the safety of the future trajectories of the vehicle, these systems require a prediction of the future motion of all traffic participants. As the traffic participants have a mutual influence on each other, the interaction of them is explicitly considered in this framework, which is inspired by an optimization problem. 
Taking the mutual influence of traffic participants into account, this framework differs from the existing approaches which consider the interaction only insufficiently, suffering reliability in real traffic scenes. For motion prediction, the collision probability of a vehicle performing a certain maneuver, is computed. Based on the safety evaluation and the assumption that drivers avoid collisions, the prediction is realized. Simulation scenarios and real-world results show the functionality. <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Other Techniques <s> This paper describes an integrated Bayesian approach to maneuver-based trajectory prediction and criticality assessment that is not limited to specific driving situations. First, a distribution of high-level driving maneuvers is inferred for each vehicle in the traffic scene via Bayesian inference. For this purpose, the domain is modeled in a Bayesian network with both causal and diagnostic evidences and an additional trash maneuver class, which allows the detection of irrational driving behavior and the seamless application from highly structured to nonstructured environments. Subsequently, maneuver-based probabilistic trajectory prediction models are employed to predict each vehicle's configuration forward in time. Random elements in the designed models consider the uncertainty within the future driving maneuver execution of human drivers. Finally, the criticality time metric time-to-critical-collision-probability (TTCCP) is introduced and estimated via Monte Carlo simulations. The TTCCP is a generalization of the time-to-collision (TTC) in arbitrary uncertain multiobject driving environments and valid for longer prediction horizons. All uncertain predictions of all maneuvers of every vehicle are taken into account. 
Additionally, the criticality assessment considers arbitrarily shaped static environments, and it is shown how parametric free space (PFS) maps can advantageously be utilized for this purpose. <s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Other Techniques <s> Long-term prediction of traffic participants is crucial to enable autonomous driving on public roads. The quality of the prediction directly affects the frequency of trajectory planning. With a poor estimation of the future development, more computational effort has to be put in re-planning, and a safe vehicle state at the end of the planning horizon is not guaranteed. A holistic probabilistic prediction, considering inputs, results and parameters as random variables, highly reduces the problem. A time frame of several seconds requires a probabilistic description of the scene evolution, where uncertainty or accuracy is represented by the trajectory distribution. Following this strategy, a novel evaluation method is needed, coping with the fact, that the future evolution of a scene is also uncertain. We present a method to evaluate the probabilistic prediction of real traffic scenes with varying start conditions. The proposed prediction is based on a particle filter, estimating behavior describing parameters of a microscopic traffic model. Experiments on real traffic data with random leading vehicles show the applicability in terms of convergence, enabling long-term prediction using forward propagation. <s> BIB004 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Other Techniques <s> When driving in urban environments, an autonomous vehicle must account for the interaction with other traffic participants. It must reason about their future behavior, how its actions affect their future behavior, and potentially consider multiple motion hypothesis. 
In this paper we introduce a method for joint behavior estimation and trajectory planning that models interaction and multi-policy decision-making. The method leverages Partially Observable Markov Decision Processes to estimate the behavior of other traffic participants given the planned trajectory for the ego-vehicle, and Receding-Horizon Control for generating safe trajectories for the ego-vehicle. To achieve safe navigation we introduce chance constraints over multiple motion policies in the receding-horizon planner. These constraints account for uncertainty over the behavior of other traffic participants. The method is capable of running in real-time and we show its performance and good scalability in simulated multi-vehicle intersection scenarios. <s> BIB005 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Other Techniques <s> We present a simple yet effective paradigm to accurately predict the future trajectories of observed vehicles in dense city environments. We equipped a large fleet of cars with cameras and performed city-scale structure-from-motion to accurately reconstruct 10M positions of their trajectories spanning over 1000h of driving. We demonstrate that this information can be used as a powerful high-fidelity prior to predict future trajectories of newly observed vehicles in the area without the need for any knowledge of road infrastructure or vehicle motion models. By relating the current position of the observed car to a large dataset of the previously exhibited motion in the area we can directly perform prediction of its future position. We evaluate our method on two large-scale data sets from San Francisco and New York City and demonstrate an order of magnitude improvement compared to a linear-motion based method.
We also demonstrate that the performance naturally improves with the amount of data and ultimately yields a system that can accurately predict vehicle motion in challenging situations across extremes in traffic, time, and weather conditions. <s> BIB006 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Other Techniques <s> Reliable prediction of surround vehicle motion is a critical requirement for path planning for autonomous vehicles. In this paper, we propose a unified framework for surround vehicle maneuver classification and motion prediction that exploits multiple cues, namely, the estimated motion of vehicles, an understanding of typical motion patterns of freeway traffic and intervehicle interaction. We report our results in terms of maneuver classification accuracy and mean and median absolute error of predicted trajectories against the ground truth for real traffic data collected using vehicle mounted sensors on freeways. An ablative analysis is performed to analyze the relative importance of each cue for trajectory prediction. Additionally, an analysis of execution time for the components of the framework is presented. Finally, we present multiple case studies analyzing the outputs of our model for complex traffic scenarios. <s> BIB007
|
The authors of BIB005 use Partially Observable Markov Decision Processes (POMDPs) for behavior prediction and nonlinear receding horizon control (or model predictive control) for trajectory planning. The POMDPs model the interactions between the ego vehicle and the obstacles. The action space is discretized into acceleration, deceleration and maintaining the current speed. For each of the obstacle vehicles, three types of intentions are considered: going straight, turning and stopping. The reward function is chosen so that the actors make the maximum progress on the road while avoiding collisions. A particle filter is implemented to update the belief of each motion intention for each obstacle vehicle. For the ego car, the bicycle kinematic model is used to update the state. Article BIB006 presents a simple yet effective way to accurately predict the future trajectories of observed vehicles in dense city environments. The authors recorded the trajectories of cars comprising over 1000 hours of driving in San Francisco and New York. By relating the current position of an observed car to this large dataset of previously exhibited motion in the same area, the prediction of its future position can be directly performed, under the hypothesis that the car follows the same trajectory pattern that a past car at the same location followed. This nonparametric method improves over time as the number of samples increases and avoids the need for more complex models. Paper BIB001 presents a trajectory prediction method which combines the constant yaw rate and acceleration (CYRA) motion model and maneuver recognition. The maneuver recognition module selects the current maneuver from a predefined set (e.g. keep lane, change lane to the right or to the left and turn at an intersection) by comparing the center lines of the road lanes to a local curvilinear model of the path of the vehicle.
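As an illustration, the CYRA motion model used by BIB001 for short-term prediction can be sketched by forward-integrating a vehicle state under constant yaw rate and acceleration. This is a minimal sketch: the state layout, step size and function names are illustrative, not taken from the paper.

```python
import math

def cyra_predict(x, y, v, theta, a, omega, horizon, dt=0.05):
    """Propagate a vehicle state forward under the Constant Yaw Rate
    and Acceleration (CYRA) assumption via small-step integration.
    Returns the list of predicted (x, y) positions."""
    n_steps = int(round(horizon / dt))
    path = []
    for _ in range(n_steps):
        x += v * math.cos(theta) * dt   # advance position along heading
        y += v * math.sin(theta) * dt
        v += a * dt                     # constant acceleration
        theta += omega * dt             # constant yaw rate
        path.append((x, y))
    return path

# Straight motion at constant speed: the car covers v * horizon meters.
straight = cyra_predict(0.0, 0.0, v=10.0, theta=0.0, a=0.0, omega=0.0, horizon=2.0)
# Nonzero yaw rate: the predicted path curves away from the x-axis.
curving = cyra_predict(0.0, 0.0, v=10.0, theta=0.0, a=0.0, omega=0.5, horizon=2.0)
```

In the combined method, a short-term prediction of this kind is blended with the maneuver-based long-term prediction.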
The proposed method combines the short-term accuracy of the former technique and the longer-term accuracy of the latter. The authors use mathematical models that take into account the position, speed and acceleration of vehicles. In BIB004 , a method is presented that evaluates the probabilistic prediction of real traffic scenes with varying start conditions. The prediction is based on a particle filter, which estimates the behavior-describing parameters of a microscopic traffic model, i.e. the driving style as a distribution of behavior parameters. This method seems to be applicable for long-term trajectory planning. The driving style parameters of the Intelligent Driver Model (IDM) are continuously estimated, together with the relative motion between objects. By measuring vehicle accelerations, a driving style estimate can be provided from the first detection, without the need for a long observation time before performing the prediction. The use of a particle filter makes it possible to cope with continuous behavior changes with arbitrarily shaped parameter distributions. Forward propagation using Monte Carlo simulation provides an approximate probability density function of the future scene. Since Markov models are only conditioned on the last observed position, they can generate poor predictions if different motion patterns exhibit significantly overlapping segments. Moreover, trajectories acquired from sensors can be fragmented by occlusion. The approaches based on Gaussian Processes (GPs) overcome this problem by modeling motion patterns as velocity flow fields, thus avoiding the need to identify goal positions. They are also well-suited for applications with noisy measurements, such as data collected on moving cars. More importantly, predictions using a GP have a simple analytical form that can be easily integrated into a risk-aware path planner.
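As a sketch of the "simple analytical form" of GP predictions mentioned above, the following shows textbook GP regression with a squared-exponential kernel. The kernel choice, noise level and one-dimensional setting are illustrative assumptions, not the cited papers' exact models.

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Closed-form GP posterior mean and variance at the test inputs."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf(x_test, x_train)
    K_ss = rbf(x_test, x_test)
    mean = K_s @ np.linalg.solve(K, y_train)
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)

# Noisy observations of a smooth 1-D position profile (toy data).
x_obs = np.linspace(0.0, 4.0, 9)
y_obs = np.sin(x_obs)
mean, var = gp_predict(x_obs, y_obs, np.array([2.0]))
```

Because mean and variance come out in closed form, such predictions can be fed directly into a risk-aware planner without sampling.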
Another article develops a data-driven approach for learning a mobile agent's motion patterns from past observations, which are subsequently used for online trajectory predictions. It examines the reasons why previous GP-based mixture models can sometimes produce poor prediction results, providing examples to show that while GPs are a flexible tool for modeling motion patterns, the GP likelihood is not a good similarity measure for trajectory clustering. As the traffic participants have a mutual influence on one another, their interaction is explicitly considered in BIB002 , which is inspired by an optimization problem. For motion prediction, the collision probability of a vehicle performing a certain maneuver is computed. The prediction is performed based on the safety evaluation and the assumption that drivers avoid collisions. This combination of each driver's intention and the driver's local risk assessment of a maneuver leads to an interaction-aware motion prediction. The authors compute the probability that a collision will occur anywhere in the whole scene, considering that the number of different maneuvers is limited (e.g., lane changes, acceleration, maintaining the speed, deceleration, and combinations thereof); the proposed system then assesses the danger of possible future trajectories. The same concept of considering risk is used in BIB003 , which describes an integrated Bayesian approach to maneuver-based trajectory prediction and criticality assessment that is not limited to specific driving situations. First, a distribution of high-level driving maneuvers is inferred for each vehicle in the traffic scene by means of Bayesian inference. For this purpose, the domain is modeled with a Bayesian network. Subsequently, maneuver-based probabilistic trajectory prediction models are employed to predict the configuration of each vehicle forward in time.
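The collision probabilities discussed in BIB002 and BIB003 are estimated via Monte Carlo simulation. A minimal single-time-step sketch follows; the Gaussian position uncertainty and the combined safety radius are simplifying assumptions made here for illustration.

```python
import math
import random

def mc_collision_probability(ego_mean, other_mean, sigma, radius,
                             n_samples=20000, seed=0):
    """Estimate P(collision) at one future time step: sample both
    vehicles' predicted positions from isotropic Gaussians and count
    how often they come closer than a combined safety radius."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        ex = rng.gauss(ego_mean[0], sigma)
        ey = rng.gauss(ego_mean[1], sigma)
        ox = rng.gauss(other_mean[0], sigma)
        oy = rng.gauss(other_mean[1], sigma)
        if math.hypot(ex - ox, ey - oy) < radius:
            hits += 1
    return hits / n_samples

# Predicted positions far apart vs. nearly on top of each other.
p_far = mc_collision_probability((0.0, 0.0), (50.0, 0.0), sigma=1.0, radius=3.0)
p_near = mc_collision_probability((0.0, 0.0), (1.0, 0.0), sigma=1.0, radius=3.0)
```

Criticality metrics such as the TTCCP of BIB003 repeat this kind of estimate over the whole prediction horizon and over all maneuver hypotheses.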
The proposed system has three main parts: maneuver detection, prediction, and criticality assessment. In the maneuver detection part, the current driving maneuver of every vehicle is estimated via Bayesian inference. In the prediction part, maneuver-specific prediction models are employed to predict the configuration of each vehicle forward in time within a common global coordinate system. In the criticality assessment part, these joint distributions are combined with a parametric free space (PFS) map representation of the static environment to estimate, via Monte Carlo simulation, the probability that the ego vehicle collides with at least one other vehicle or with the static environment at least once within the prediction horizon. The authors of BIB007 propose a framework for holistic surround vehicle trajectory prediction with three interacting modules: a trajectory prediction module based on motion models and maneuver-specific variational Gaussian mixture models; a maneuver recognition module based on hidden Markov models, which assigns confidence values to the maneuvers being performed by surrounding vehicles; and a vehicle interaction module, which considers the global context of surrounding vehicles and assigns final predictions by minimizing an energy function based on the outputs of the other two modules. The motion model alone becomes unreliable for long-term trajectory prediction, especially in cases involving a greater degree of decision making by drivers. The paper defines ten maneuver classes for surrounding vehicle motion on freeways in the frame of reference of the ego vehicle, based on combinations of lane passes, overtakes, cut-ins and drifts into the ego lane. An energy minimization problem is then formulated that penalizes predictions in which two vehicles come very close to each other at any point in the time horizon.
This is based on the fact that drivers tend to follow paths with low probability of collision with other vehicles.
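A toy version of such a pairwise interaction energy term can make this concrete; the safe distance, penalty weight and candidate maneuvers below are made up for illustration and are not the energy function of BIB007.

```python
import math

def interaction_energy(traj_a, traj_b, safe_dist=4.0, weight=10.0):
    """Pairwise energy: a penalty accumulates whenever the two predicted
    trajectories bring the vehicles closer than a safe distance."""
    e = 0.0
    for (ax, ay), (bx, by) in zip(traj_a, traj_b):
        d = math.hypot(ax - bx, ay - by)
        if d < safe_dist:
            e += weight * (safe_dist - d)
    return e

# Two candidate predictions for a neighbor alongside a fixed ego trajectory.
ego = [(float(t), 0.0) for t in range(10)]
keep_lane = [(float(t), 6.0) for t in range(10)]        # stays one lane away
cut_in = [(float(t), 6.0 - 0.8 * t) for t in range(10)]  # drifts into the ego lane
best = min([keep_lane, cut_in], key=lambda tr: interaction_energy(ego, tr))
```

Minimizing the total energy over candidate predictions thus favors combinations in which vehicles keep their distance, mirroring the assumption that drivers follow low-collision-probability paths.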
|
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Decision Making Methods <s> In this review, we provide an overview of emerging trends and challenges in the field of intelligent and autonomous, or self-driving, vehicles. Recent advances in the field of perception, planning, and decision-making for autonomous vehicles have led to great improvements in functional capabilities, with several prototypes already driving on our roads and streets. Yet challenges remain regarding guaranteed performance and safety under all driving circumstances. For instance, planning methods that provide safe and system-compliant performance in complex, cluttered environments while modeling the uncertain interaction with other traffic participants are required. Furthermore, new paradigms, such as interactive planning and end-to-end learning, open up questions regarding safety and reliability that need to be addressed. In this survey, we emphasize recent approaches for integrated perception and planning and for behavior-aware planning, many of which rely on machine learning. This raises the question of ver... <s> BIB001
|
Since an agent's actions depend on the other agents' actions, an explosion of uncertainty in future states may arise, which can result in the freezing-robot problem: the robot comes to a complete stop because all possible actions become unacceptably unsafe. If the robot does not come to a complete stop, it may choose highly evasive or arbitrary paths through the problem space, which are often not only suboptimal but potentially dangerous BIB001 . While modeling interactions is an intriguing problem in itself, dealing with the increased complexity is another challenge. Since every agent's actions affect, and are affected by, the actions of all other agents, the number of interactions (and therefore the planning complexity) grows exponentially with the number of agents. The simplest approach is to discretize the action space into motion primitives and to exhaustively search through all possible options. Naturally, there are more efficient methods of exploring the optimization space. In the deterministic case, one can represent the decision-making process, often phrased in a game-theoretic setting, as a tree and apply a search over it. The tree, usually discretized by action time, consists of the discrete actions that each agent can choose to execute at each stage. Since each agent's reward depends not only on its own actions but also on all other agents' actions at the previous stages, the tree also grows exponentially with the number of agents BIB001 . In the previous section, we presented prediction methods for the trajectories of surrounding vehicles. An important issue concerns the decisions of the ego car itself regarding the possible maneuvers it can make in order to optimize criteria related to risk and efficiency. In this section, we briefly present some methods that can be used for this purpose, focusing especially on (deep) reinforcement learning (RL) and tree search algorithms.
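The exponential growth described above is easy to see in a sketch that exhaustively enumerates joint motion-primitive plans; the three-action primitive set is illustrative.

```python
from itertools import product

ACTIONS = ("accelerate", "keep_speed", "decelerate")

def enumerate_joint_plans(n_agents, depth):
    """Exhaustively enumerate every sequence of joint actions: at each
    stage all agents pick one primitive, so the tree has
    (len(ACTIONS) ** n_agents) ** depth leaves."""
    joint_actions = list(product(ACTIONS, repeat=n_agents))
    return list(product(joint_actions, repeat=depth))

plans_2 = enumerate_joint_plans(n_agents=2, depth=2)  # (3^2)^2 = 81 plans
plans_3 = enumerate_joint_plans(n_agents=3, depth=2)  # (3^3)^2 = 729 plans
```

Adding a single agent multiplies the number of plans by a factor of 3^depth, which is why exhaustive search is only viable for very small scenes and short horizons.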
|
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Deep Reinforcement Learning Algorithms <s> In this article, we propose and analyze a class of actor-critic algorithms. These are two-time-scale algorithms in which the critic uses temporal difference learning with a linearly parameterized approximation architecture, and the actor is updated in an approximate gradient direction, based on information provided by the critic. We show that the features for the critic should ideally span a subspace prescribed by the choice of parameterization of the actor. We study actor-critic algorithms for Markov decision processes with Polish state and action spaces. We state and prove two results regarding their convergence. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Deep Reinforcement Learning Algorithms <s> Policy-gradient-based actor-critic algorithms are amongst the most popular algorithms in the reinforcement learning framework. Their advantage of being able to search for optimal policies using low-variance gradient estimates has made them useful in several real-life applications, such as robotics, power control, and finance. Although general surveys on reinforcement learning techniques already exist, no survey is specifically dedicated to actor-critic algorithms in particular. This paper, therefore, describes the state of the art of actor-critic algorithms, with a focus on methods that can work in an online setting and use function approximation in order to deal with continuous state and action spaces. After starting with a discussion on the concepts of reinforcement learning and the origins of actor-critic algorithms, this paper describes the workings of the natural gradient, which has made its way into many actor-critic algorithms over the past few years. 
A review of several standard and natural actor-critic algorithms is given, and the paper concludes with an overview of application areas and a discussion on open issues. <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Deep Reinforcement Learning Algorithms <s> Direct policy search can effectively scale to high-dimensional systems, but complex policies with hundreds of parameters often present a challenge for such methods, requiring numerous samples and often falling into poor local optima. We present a guided policy search algorithm that uses trajectory optimization to direct policy learning and avoid poor local optima. We show how differential dynamic programming can be used to generate suitable guiding samples, and describe a regularized importance sampled policy optimization that incorporates these samples into the policy search. We evaluate the method by learning neural network controllers for planar swimming, hopping, and walking, as well as simulated 3D humanoid running. <s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Deep Reinforcement Learning Algorithms <s> The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. 
Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks. <s> BIB004 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Deep Reinforcement Learning Algorithms <s> In this article, we describe a method for optimizing control policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified scheme, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is effective for optimizing large nonlinear policies such as neural networks. 
Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters. <s> BIB005 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Deep Reinforcement Learning Algorithms <s> We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs. <s> BIB006 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Deep Reinforcement Learning Algorithms <s> We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. 
The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input. <s> BIB007 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Deep Reinforcement Learning Algorithms <s> One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this document we introduce the predictron architecture. The predictron consists of a fully abstract model, represented by a Markov reward process, that can be rolled forward multiple "imagined" planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths. The predictron is trained end-to-end so as to make these accumulated values accurately approximate the true value function. We applied the predictron to procedurally generated random mazes and a simulator for the game of pool. The predictron yielded significantly more accurate predictions than conventional deep neural network architectures. <s> BIB008 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Deep Reinforcement Learning Algorithms <s> We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. 
The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time. <s> BIB009 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Deep Reinforcement Learning Algorithms <s> We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines. <s> BIB010
|
Recently, there have been many efforts in devising better, more efficient RL algorithms. A very popular class of applications is represented by games, where the task is to learn to play directly from the game image and perhaps the score, without any a priori knowledge about the game rules. Of course, the same algorithms can be applied to other classes of problems, including decision making in autonomous driving. Below, we present some of these RL algorithms , : • Policy Gradients BIB002 . The objective of an RL agent is to maximize the expected total discounted reward, i.e. value or utility, by following a policy. The policy returns the action that the agent should take in each state. This is usually a maximization problem (finding the best action in every state) and the maximum function is not differentiable, so gradientbased methods cannot be used. However, one can use a parametric representation for the policy, e.g., a neural network that gives the probabilities of each action for each state using the softmax function. Softmax is differentiable, therefore gradients can be used to adjust the parameters of the neural network which, in turn, approximates the policy; • Deep Q-Network (DQN) BIB004 . It approximates the Q matrix of values computed, e.g., by the classic Q-Learning algorithm with a neural network. A great advantage is that each step of experience is likely used in many weight updates, which allows better generalization to unvisited states. However, it was found that learning directly from successive samples is suboptimal because of the correlations between the samples. Instead, the algorithm learns using experience replay, i.e. the updates are made using random samples from a buffer of past transitions. Also, in order to stabilize learning, the target network is kept fixed for a certain number of learning episodes, and then replaced by the current network; • Actor-Critic BIB001 . 
These methods are temporal difference (TD) methods that have a separate memory structure to explicitly represent the policy independently of the value function. The policy structure is known as the "actor", because it is used to select actions, and the estimated value function is known as the "critic", because it criticizes the actions made by the actor. Learning is on-policy: the critic learns about and critiques, in the form of a TD error, the policy followed by the actor. This scalar signal is the only output of the critic and drives all learning in both actor and critic; • Asynchronous Advantage Actor-Critic (A3C) BIB007 . In A3C there is a global network and multiple worker agents each with its own network. Each of these agents interacts with its own copy of the environment at the same time. In this way, the experience of each agent is independent of the experience of the others and thus the overall experience available for training becomes more diverse. Instead of discounted rewards, the method uses another value called "advantage", which allows the agent to determine not just how good its actions were, but how much better they turned out to be than expected. The advantage is positive if an action is better than the other actions possible in that state; • Proximal Policy Optimization BIB009 . It improves the stability of the actor training by limiting the policy update at each training step. Thus, it avoids having too large policy updates. The ratio that represents the difference between the new and the old policy is clipped (e.g., between 0.8 and 1.2), ensuring that the policy updates are not too large; • Trust Region Policy Optimization (TRPO) BIB005 . Policy Gradients computes the steepest ascent direction for the rewards and updates the policy towards that direction. However, this method uses the first-order derivative and approximates the surface to be flat. 
If the surface has high curvature and the step size (the learning rate) is too large, it can lead to very bad policies. On the other hand, if the step is too small, the model learns too slowly. TRPO limits the parameter changes that are sensitive to the cost surface and ensures that any policy change guarantees an improvement in rewards. In the trust region, one determines the maximum step size that is used for exploration and locates the optimal point within this trust region. If the divergence between the new and the old policy is getting large, the trust region is shrunk; otherwise, it is expanded; • Imagination-Augmented Agent BIB010 . The idea of this algorithm is to allow the agent to imagine future trajectories and incorporate these imagined paths into its decision process. These paths consist of a set of trajectories "imagined" from the current observation. The trajectories are called "rollouts" and are produced for every available action in the environment. Every rollout consists of a fixed number of steps into the future, and every step is produced by a special model, called the "environment model", which predicts the next observation and the immediate reward from the current observation and the action to be taken. There are several papers that explore different variants of these algorithms. The authors of BIB007 propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent. They present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training, allowing all four methods to successfully train neural network controllers. Instead of experience replay, multiple agents are executed in parallel, on multiple instances of the environment. This parallelism also decorrelates the agents' data into a more stationary process. The experiments are run on a single machine with a standard multi-core CPU.
The best of the proposed methods is reported to be the A3C. Another article introduces a hybrid CPU/GPU version of the A3C algorithm and concentrates on aspects critical to leveraging the computational power of the GPU. It introduces a system of queues and a dynamic scheduling strategy and achieves a significant speed-up with respect to its CPU equivalent. The authors of BIB006 adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. They present an actor-critic, model-free, off-policy (i.e. trained with samples from a replay buffer) algorithm based on the deterministic policy gradient that can operate over continuous action spaces. The actor-critic approach is combined with insights from DQN. The resulting model seems to be able to learn competitive policies using low-dimensional observations, e.g. Cartesian coordinates or joint angles. A key feature of the approach is its simplicity: it requires only a straightforward actor-critic architecture and learning algorithm with very few adjustable parameters. Its main disadvantage is that it requires a large number of training episodes to find solutions. In another work, proximal gradient temporal difference learning is introduced, which provides a principled way of designing and analyzing true stochastic gradient temporal difference learning algorithms. The authors show how gradient temporal difference (GTD) reinforcement learning methods can be formally derived, not by starting from their original objective functions, as previously attempted, but rather from a primal-dual saddle-point objective function. Both an error bound and a performance bound are provided, showing that the value function approximation bound of the GTD algorithm family is O(d/n^(1/4)), where d is the dimension of the feature vector and n is the number of samples.
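Both DQN and the continuous-control method above rely on experience replay: sampling past transitions uniformly at random breaks the correlations between successive samples. A minimal sketch of such a buffer (the class and parameter names are illustrative, not taken from the cited implementations):

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience replay, as used by DQN- and DDPG-style agents."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions fall off

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        # Transpose the list of transitions into batched fields.
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(50):
    buf.push(t, 0, 1.0, t + 1, False)
states, *_ = buf.sample(8)
print(len(buf), len(states))  # 50 8
```

Because each stored transition can be sampled many times, every step of experience is potentially reused in many weight updates, as noted above.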
Direct policy search can effectively scale to high-dimensional systems, but complex policies with hundreds of parameters often present a challenge for such methods, requiring numerous samples and often falling into poor local optima. Article BIB003 presents a guided policy search algorithm that uses trajectory optimization to direct policy learning and avoid poor local optima. It shows how differential dynamic programming can be used to generate suitable guiding samples, and describes a regularized importance sampled policy optimization that incorporates these guiding samples into the policy search. As a consequence, the algorithm can learn complex policies with hundreds of parameters. Another interesting algorithm is the Predictron BIB008 . This architecture is an abstract model, represented by a Markov reward process, that can be rolled forward multiple "imagined" planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths. The predictron is trained end-to-end so as to make these accumulated values accurately approximate the true value function. It is reported to demonstrate more accurate predictions than conventional deep neural network architectures. The predictron is composed of four main components. First, a state representation that encodes raw input (e.g., a history of observations, in the partially observed setting) into an internal (abstract, hidden) state. Second, a model that maps from an internal state to a subsequent internal state, internal reward, and internal discount. Third, a value function that outputs internal values representing the future internal return from the internal state onwards. The predictron is applied by unrolling its model multiple "planning" steps to produce internal rewards, discounts and values. Fourth, an accumulator that combines these internal rewards, discounts and values into an overall estimate of value.
Unlike most approaches to model-based RL, the model is fully abstract: it does not have to correspond to the real environment in any human understandable fashion, as long as its rolled-forward "plans" accurately predict the outcomes in the true environment.
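The accumulation step can be made concrete with a small sketch of how internal rewards, discounts and bootstrap values might be combined into per-depth value estimates (the function and its fixed inputs are hypothetical simplifications; in the predictron these quantities are produced by learned networks):

```python
def predictron_accumulate(rewards, discounts, values):
    """Accumulate k-step value estimates from imagined model steps.

    rewards[i] / discounts[i] -- internal reward and discount produced by
                                 the i-th step of the environment model
    values[i]                 -- internal value estimated after step i
    Returns one estimate per planning depth k, each bootstrapping from
    values[k - 1].
    """
    estimates = []
    for k in range(1, len(rewards) + 1):
        g, discount = 0.0, 1.0
        for i in range(k):
            g += discount * rewards[i]
            discount *= discounts[i]
        g += discount * values[k - 1]  # bootstrap with the internal value
        estimates.append(g)
    return estimates

# With constant reward 1.0, discount 0.9 and value 10.0, every planning
# depth yields (approximately) the same estimate, i.e. a self-consistent
# value function:
print(predictron_accumulate([1.0, 1.0], [0.9, 0.9], [10.0, 10.0]))
```

Training drives these per-depth estimates towards the true value, so deeper "plans" refine rather than contradict shallower ones.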
|
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Tree Search Algorithms <s> The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Tree Search Algorithms <s> Planning problems are among the most important and well-studied problems in artificial intelligence. They are most typically solved by tree search algorithms that simulate ahead into the future, evaluate future states, and back-up those evaluations to the root of a search tree. Among these algorithms, Monte-Carlo tree search (MCTS) is one of the most general, powerful and widely used. A typical implementation of MCTS uses cleverly designed rules, optimised to the particular characteristics of the domain.
These rules control where the simulation traverses, what to evaluate in the states that are reached, and how to back-up those evaluations. In this paper we instead learn where, what and how to search. Our architecture, which we call an MCTSnet, incorporates simulation-based search inside a neural network, by expanding, evaluating and backing-up a vector embedding. The parameters of the network are trained end-to-end using gradient-based optimisation. When applied to small searches in the well-known planning problem Sokoban, the learned search algorithm significantly outperformed MCTS baselines. <s> BIB002
|
Planning problems are often solved by tree search algorithms that simulate ahead into the future, evaluate future states, and back up those evaluations to the root of the search tree. Among these algorithms, Monte Carlo Tree Search (MCTS) BIB001 is one of the most general, powerful and widely used. The typical MCTS algorithm consists of several phases. First, it simulates trajectories into the future, starting from the root state. Second, it evaluates the performance of the leaf states, either using a random rollout, or using an evaluation function such as a "value network". Third, it backs up these evaluations to update the internal values along the trajectory, for example by averaging over evaluations. The architecture presented in BIB002 , called MCTSnet, incorporates the simulation-based search into a neural network, by expanding, evaluating and backing up a vector embedding. The parameters of the network are trained end-to-end using gradient-based optimization. The key idea is to represent the internal state of the search, at each node, by a memory vector. The computation of the network proceeds forwards from the root state, just like a simulation of MCTS, using a simulation policy based on the memory vector to select the trajectory to traverse. The leaf state is then processed by an embedding network to initialize the memory vector at the leaf. The network proceeds backwards up the trajectory, updating the memory at each visited state according to a backup network that propagates from child to parent. Finally, the root memory vector is used to compute an overall prediction of value or action. The major benefit of this architecture is that it can be used for gradient-based optimization. Still, internal action sequences directing the control flow of the network cannot be differentiated, and learning this internal policy presents a challenging credit assignment problem.
To address this, BIB002 proposes a novel, generally-applicable approximate scheme for credit assignment that leverages the anytime property of the computational graph, allowing this part of the search network to be learned effectively from data. Rapidly-exploring random trees (RRTs) represent an efficient method for finding feasible trajectories for high-dimensional non-holonomic systems. They can be viewed as a technique to generate open-loop trajectories for nonlinear systems with state constraints. An RRT can also be considered as a Monte Carlo method to bias search into the largest Voronoi regions of a graph in a configuration space. The tree is constructed incrementally from samples drawn randomly from the search space and is inherently biased to grow towards large unsearched areas of the problem. Each random sample is connected to the nearest state already in the tree; if the sample is further from that state than a predefined growth limit allows, a new state at the maximum distance from the tree along the line to the random sample is used instead of the random sample itself. The random samples can then be viewed as controlling the direction of the tree growth while the growth factor determines its rate. One article describes a real-time motion planning algorithm, based on RRTs, applicable to autonomous vehicles operating in an urban environment. The extensions to the standard RRT are motivated by the need to generate dynamically feasible plans in real-time, safety requirements, and the constraints dictated by the uncertainty of driving in an urban environment. The proposed algorithm was at the core of the planning and control software for Team MIT's entry for the 2007 DARPA Urban Challenge, where the vehicle demonstrated the ability to complete almost 100 km of a simulated military supply mission, while safely interacting with other autonomous and human-driven vehicles.
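The incremental RRT growth rule described above can be sketched in a few lines (the 2-D workspace, the obstacle, and all parameter values are illustrative assumptions, not the cited planner):

```python
import math
import random

def rrt(start, goal, is_free, n_iters=2000, step_size=0.5, goal_tol=0.5):
    """Minimal 2-D RRT: random samples steer the growth direction, while
    the step size bounds the growth rate, biasing the tree towards large
    unexplored regions. is_free(p) reports whether a point is obstacle-free.
    """
    nodes, parent = [start], {start: None}
    for _ in range(n_iters):
        sample = (random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d > step_size:  # cap the extension at the maximum step size
            t = step_size / d
            sample = (near[0] + t * (sample[0] - near[0]),
                      near[1] + t * (sample[1] - near[1]))
        if not is_free(sample):
            continue
        nodes.append(sample)
        parent[sample] = near
        if math.dist(sample, goal) < goal_tol:  # goal reached: extract path
            path, node = [], sample
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
    return None  # no feasible trajectory found within the budget

# Plan around a hypothetical wall at x ~ 5 with a gap above y = 7:
random.seed(1)
path = rrt((0.0, 0.0), (9.0, 9.0),
           is_free=lambda p: abs(p[0] - 5.0) > 0.5 or p[1] > 7.0)
print(path is not None)
```

Real planners add collision checking along the whole extension segment and, as in the cited work, biasing and feasibility constraints for vehicle dynamics; this sketch only illustrates the sample-steer-extend loop.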
|
A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> Many applications in wireless sensor networks require sensor nodes to obtain their absolute or relative positions. Although various localization algorithms have been proposed recently, most of them require nodes to be equipped with range measurement hardware to obtain distance information. In this paper, an area localization method based on Support Vector Machines (SVM) for mobile nodes in wireless sensor networks is presented. Area localization is introduced as an evaluation metric. The area localization procedure contains two phases. Firstly, the RF-based method is used to determine whether the nodes have moved, which only utilizes the value change of RSSI value rather than range measurement. Secondly, connectivity information and SVM algorithm are used for area localization of mobile nodes. The area localization is introduced to trade off the accuracy and precision. And area localization, as a new metric, is used to evaluate our method. The simulation experiments achieve good results. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> Most literature on short-term traffic flow forecasting focused mainly on normal, or non-incident, conditions and, hence, limited their applicability when traffic flow forecasting is most needed, i.e., incident and atypical conditions. Accurate prediction of short-term traffic flow under atypical conditions, such as vehicular crashes, inclement weather, work zone, and holidays, is crucial to effective and proactive traffic management systems in the context of intelligent transportation systems (ITS) and, more specifically, dynamic traffic assignment (DTA). To this end, this paper presents an application of a supervised statistical learning technique called Online Support Vector machine for Regression, or OL-SVR, for the prediction of short-term freeway traffic flow under both typical and atypical conditions. 
The OL-SVR model is compared with three well-known prediction models including Gaussian maximum likelihood (GML), Holt exponential smoothing, and artificial neural net models. The resultant performance comparisons suggest that GML, which relies heavily on the recurring characteristics of day-to-day traffic, performs slightly better than other models under typical traffic conditions, as demonstrated by previous studies. Yet OL-SVR is the best performer under non-recurring atypical traffic conditions. It appears that for deployed ITS systems that gear toward timely response to real-world atypical and incident situations, OL-SVR may be a better tool than GML. <s> BIB002 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> This paper describes an original method for target tracking in wireless sensor networks. The proposed method combines machine learning with a Kalman filter to estimate instantaneous positions of a moving target. The target's accelerations, along with information from the network, are used to obtain an accurate estimation of its position. To this end, radio-fingerprints of received signal strength indicators (RSSIs) are first collected over the surveillance area. The obtained database is then used with machine learning algorithms to compute a model that estimates the position of the target using only RSSI information. This model leads to a first position estimate of the target under investigation. The kernel-based ridge regression and the vector-output regularized least squares are used in the learning process. The Kalman filter is used afterward to combine predictions of the target's positions based on acceleration information with the first estimates, leading to more accurate ones. The performance of the method is studied for different scenarios and a thorough comparison with well-known algorithms is also provided. <s> BIB003 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. 
INTRODUCTION <s> Wireless sensor networks (WSNs) monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review over the period 2002–2013 of machine learning methods that were used to address common issues in WSNs. The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges. <s> BIB004 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> With the development of smart devices and cloud computing, more and more public health data can be collected from various sources and can be analyzed in an unprecedented way. The huge social and academic impact of such developments caused a worldwide buzz for big data. In this review article, we summarized the latest applications of Big Data in health sciences, including the recommendation systems in healthcare, Internet-based epidemic surveillance, sensor-based health conditions and food safety monitoring, Genome-Wide Association Studies (GWAS) and expression Quantitative Trait Loci (eQTL), inferring air quality using big data and metabolomics and ionomics for nutritionists. We also reviewed the latest technologies of big data collection, storage, transferring, and the state-of-the-art analytical methods, such as Hadoop distributed file system, MapReduce, recommendation system, deep learning and network Analysis. 
At last, we discussed the future perspectives of health sciences in the era of Big Data. We explained the steps for Big Data projects: 1. Formulate your question; 2. Find the right ways (smart devices, Internet, hospitals…) to collect your data; 3. Store the data; 4. Analyze your data; 5. Generate the analysis report with vivid visualization. 6. Evaluate the project: problem solved or start over. The latest applications of Big Data in health sciences were reviewed. The cutting edge computational technologies of big data collection, storage, transferring, and the state-of-the-art analytical methods were introduced. The future perspectives of health sciences in the era of Big Data were discussed. <s> BIB005 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> One of the major applications of wireless sensor networks (WSNs) is the navigation service for emergency evacuation, the goal of which is to assist people in escaping from a hazardous region safely and quickly when an emergency occurs. Most existing solutions focus on finding the safest path for each person, while ignoring possible large detours and congestions caused by plenty of people rushing to the exit. In this paper, we present CANS, a Congestion-Adaptive and small stretch emergency Navigation algorithm with WSNs. Specifically, CANS leverages the idea of level set method to track the evolution of the exit and the boundary of the hazardous area, so that people nearby the hazardous area achieve a mild congestion at the cost of a slight detour, while people distant from the danger avoid unnecessary detours. CANS also considers the situation in the event of emergency dynamics by incorporating a local yet simple status updating scheme.
To the best of our knowledge, CANS is the first WSN-assisted emergency navigation algorithm achieving both mild congestion and small stretch, where all operations are in-situ carried out by cyber-physical interactions among people and sensor nodes. CANS does not require location information, nor the reliance on any particular communication model. It is also distributed and scalable to the size of the network with limited storage on each node. Both experiments and simulations validate the effectiveness and efficiency of CANS. <s> BIB006 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> With the development of network-enabled sensors and artificial intelligence algorithms, various human-centered smart systems are proposed to provide services with higher quality, such as smart healthcare, affective interaction, and autonomous driving. Considering cognitive computing is an indispensable technology to develop these smart systems, this paper proposes human-centered computing assisted by cognitive computing and cloud computing. First, we provide a comprehensive investigation of cognitive computing, including its evolution from knowledge discovery, cognitive science, and big data. Then, the system architecture of cognitive computing is proposed, which consists of three critical technologies, i.e., networking (e.g., Internet of Things), analytics (e.g., reinforcement learning and deep learning), and cloud computing. Finally, it describes the representative applications of human-centered cognitive computing, including robot technology, emotional communication system, and medical cognitive system. <s> BIB007 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> In the era of information, the green services of content-centric IoT are expected to offer users the better satisfaction of Quality of Experience (QoE) than that in a conventional IoT. 
Nevertheless, the network traffic and new demands from IoT users increase along with the promising of the content-centric computing system. Therefore, the satisfaction of QoE will become the major challenge in the content-centric computing system for IoT users. In this article, to enhance the satisfaction of QoE, we propose QoE models to evaluate the qualities of the IoT concerning both network and users. The value of QoE does not only refer to the network cost, but also the Mean Opinion Score (MOS) of users. Therefore, our models could capture the influence factors from network cost and services for IoT users based on IoT conditions. Specially, we mainly focus on the issues of cache allocation and transmission rate. Under this content-centric IoT, aiming to allocate the cache capacity among content-centric computing nodes and handle the transmission rates under a constrained total network cost and MOS for the whole IoT, we devote our efforts to the following two aspects. First, we formulate the QoE as a green resource allocation problem under the different transmission rate to acquire the best QoE. Then, in the basis of the node centrality, we will propose a suboptimal dynamic approach, which is suitable for IoT with content delivery frequently. Furthermore, we present a green resource allocation algorithm based on Deep Reinforcement Learning (DRL) to improve accuracy of QoE adaptively. Simulation results reveal that our proposals could achieve high QoE performance for content-centric IoT. <s> BIB008 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> Deep learning is a promising approach for extracting accurate information from raw sensor data from IoT devices deployed in complex environments. Because of its multilayer structure, deep learning is also appropriate for the edge computing environment. Therefore, in this article, we first introduce deep learning for IoTs into the edge computing environment. 
Since existing edge nodes have limited processing capability, we also design a novel offloading strategy to optimize the performance of IoT deep learning applications with edge computing. In the performance evaluation, we test the performance of executing multiple deep learning tasks in an edge computing environment with our strategy. The evaluation results show that our method outperforms other optimization solutions on deep learning for IoT. <s> BIB009 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> Gaining knowledge and actionable insights from complex, high-dimensional and heterogeneous biomedical data remains a key challenge in transforming health care. Various types of data have been emerging in modern biomedical research, including electronic health records, imaging, -omics, sensor data and text, which are complex, heterogeneous, poorly annotated and generally unstructured. Traditional data mining and statistical learning approaches typically need to first perform feature engineering to obtain effective and more robust features from those data, and then build prediction or clustering models on top of them. There are lots of challenges on both steps in a scenario of complicated data and lacking of sufficient domain knowledge. The latest advances in deep learning technologies provide new effective paradigms to obtain end-to-end learning models from complex data. In this article, we review the recent literature on applying deep learning technologies to advance the health care domain. Based on the analyzed work, we suggest that deep learning approaches could be the vehicle for translating big biomedical data into improved human health. However, we also note limitations and needs for improved methods development and applications, especially in terms of ease-of-understanding for domain experts and citizen scientists. 
We discuss such challenges and suggest developing holistic and meaningful interpretable architectures to bridge deep learning models and human interpretability. <s> BIB010 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> In the era of the Internet of Things (IoT), an enormous amount of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life improving technology. In this paper, we provide a thorough overview on using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. The potential of using emerging DL techniques for IoT data analytics are then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. 
At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature. <s> BIB011 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> Abstract Smart manufacturing refers to using advanced data analytics to complement physical science for improving system performance and decision making. With the widespread deployment of sensors and Internet of Things, there is an increasing need of handling big manufacturing data characterized by high volume, high velocity, and high variety. Deep learning provides advanced analytics tools for processing and analysing big manufacturing data. This paper presents a comprehensive survey of commonly used deep learning algorithms and discusses their applications toward making manufacturing “smart”. The evolvement of deep learning technologies and their advantages over traditional machine learning are firstly discussed. Subsequently, computational methods based on deep learning are presented specially aim to improve system performance in manufacturing. Several representative deep learning models are comparably discussed. Finally, emerging topics of research on deep learning are highlighted, and future trends and challenges associated with deep learning for smart manufacturing are summarized. <s> BIB012 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> Indoor localization has witnessed a rapid development in the past few decades. Tremendous solutions have been put forwarded in the literature and the localization accuracy has reach an unprecedent centimeter-level. Among the available approaches, acoustic-enabled solutions have attracted much attention. They customarily achieve decimeter-level localization accuracy with affordable infrastructure costs. However, there still exist several open issues for the acoustic-based approaches which prohibit their wide-scale adoptions. 
First, although extra infrastructures (i.e., beacons) are economical, deployment and maintenance can incur excessive labor cost. Second, current approaches have much latency to obtain a location fix, making it infeasible for mobile target tracking. Third, the localization performance of current solutions degrades easily by the near-far problem, multipath effect, and device diversity. To address these issues, this paper presents an asynchronous acoustic-based localization system with participatory sensing. We leverage the collaborative efforts of the participatory users who are relatively stationary in indoor environments as virtual anchors (VAs) to eliminate the predeployment and post-maintenance costs incurred in traditional anchor-based solutions. To mitigate the latency to obtain a location fix, we design an orthogonal ranging mechanism to enable concurrent beacon message transmission, which is 2× faster than previous work in obtaining a location fix. Moreover, we propose a robust method to address the near-far problem and device diversity, and we conquer the multipath problem via a genetic algorithm-based approach. Our VA-based system is self-deployable, cost-effective, and robust to environmental dynamics. We have implemented and evaluated a system prototype, demonstrating a median accuracy of 0.98 m in typical indoor settings. <s> BIB013 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> In recent decades, unmanned aerial vehicles (UAVs, also known as drones) equipped with multiple sensors have been widely utilized in various applications. Nevertheless, constrained by limited battery capacities, the hovering time of UAVs is quite limited, prohibiting them from serving a wide area. To cater with remote sensing applications, people often employ vehicles to transport, launch, and recycle them.
The so-called vehicle-drone cooperation (VDC) benefits from both the far driving distance of vehicles and the high mobility of UAVs. Efficient routing and scheduling can greatly reduce time consumption and financial expenses incurred in VDC. However, previous works in vehicle-drone cooperative sensing considered only one drone, thus unable to simultaneously cover multiple targets distributed in an area. Using multiple drones to sense different targets in parallel can significantly promote efficiency and expand service areas. Therefore, we propose a novel problem, referred to as vehicle-assisted multidrone routing and scheduling problem. To tackle the problem, we contribute an efficient algorithm, referred to as vehicle-assisted multi-UAV routing and scheduling algorithm (VURA). In VURA, we maintain and iteratively update a memory containing candidate UAV routes. VURA works by iteratively deriving solutions based on UAV routes picked from the memory. In every iteration, VURA jointly optimizes anchor point selection, path planning, and tour assignment via nested optimization operations. To the best of our knowledge, we are the first to tackle this novel yet challenging problem. Finally, performance evaluation is presented to demonstrate the effectiveness and efficiency of our algorithm when compared with existing solutions. <s> BIB014 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> Abstract The enabling Internet-of-Things, technology has inspired many innovative sensing platforms. One emerging yet powerful IoT sensing platform is the Unmanned Aerial Vehicle (UAV), which is widely deployed in various fields including photography, inspection, and communications. However, due to limited battery capacities, the hovering time of UAVs is still too short, prohibiting them from undertaking long-range sensing tasks. To accomplish such remote applications, a straightforward solution is to utilize vehicles to carry and launch UAVs. 
Efficient routing and scheduling for UAVs and vehicles can greatly reduce time consumption and financial expenses incurred in UAV inspection. Nevertheless, previous work in vehicle-assisted UAV inspection considered only one UAV, incapable of concurrently serving multiple targets distributed in an area. Employing multiple drones to serve multiple targets in parallel can significantly enhance efficiency and expand service areas. Therefore, in this paper we propose a novel algorithm referred to as joint routing and scheduling algorithm for Vehicle-Assisted Multi-UAV inspection (VAMU), which supports the cooperation of one vehicle and multiple drones for wide area inspection applications. VAMU allows multiple UAVs to be launched and recycled in different locations, minimizing time wastage for both the vehicle and UAVs. Performance evaluation is presented to demonstrate the effectiveness and efficiency of our algorithm when compared with existing solutions. <s> BIB015 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> The explosion of online shopping brings great challenges to traditional logistics industry, where the massive parcels and tight delivery deadline impose a large cost on the delivery process, in particular the last mile parcel delivery. On the other hand, modern cities never lack transportation resources such as the private car trips. Motivated by these observations, we propose a novel and effective last mile parcel delivery mechanism through car trip sharing, to leverage the available private car trips to incidentally deliver parcels during their original trips. To achieve this, the major challenges lie in how to accurately estimate the parcel delivery trip cost and assign proper tasks to suitable car trips to maximize the overall performance. To this end, we develop Car4Pac, an intelligent last mile parcel delivery system to address these challenges. 
Leveraging the real-world massive car trip trajectories, we first build up a 3D (time-dependent, driver-dependent and vehicle-dependent) landmark graph that accurately predicts the travel time and fuel consumption of each road segment. Our prediction method considers not only traffic conditions of different times, but also driving skills of different people and fuel efficiencies of different vehicles. We then develop a two-stage solution towards the parcel delivery task assignment, which is optimal for one-to-one assignment and yields high-quality results for many-to-one assignment. Our extensive real-world trace driven evaluations further demonstrate the superiority of our Car4Pac solution. <s> BIB016 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> Recent years have witnessed an explosive growth of online shopping, which has posted unprecedented pressure on the logistics industry, especially the last mile parcel delivery. Existing solutions mostly rely on dedicated couriers, which suffer from high cost and low elasticity when dealing with a massive amount of local addresses. Advances in the Internet of Things, however, have enabled vehicle information to be readily accessible anytime anywhere, forming an Internet of Vehicles (IoV), which further enables intelligent vehicle scheduling and management. New opportunities therefore arise toward efficient and elastic last mile delivery for smart cities. In this article, we seek novel solutions to improve the last mile parcel delivery with crowd intelligence. We first review the existing and emerging solutions for last mile parcel delivery. We then discuss the advances of the ride-sharing- based delivery mechanism, identifying the unique opportunities and challenges therein. We further present Car4Pac, an IoV-enabled intelligent ride-sharing-based delivery system for smart cities, and demonstrate its superiority with real trace-driven evaluations. 
<s> BIB017 </s> A Survey on Deep Learning Empowered IoT Applications <s> I. INTRODUCTION <s> Base stations have been widely deployed to satisfy the service coverage and explosive demand increase in today's cellular networks. Their reliability and availability heavily depend on the electrical power supply. Battery groups are installed as backup power in most of the base stations in case of power outages due to severe weathers or human-driven accidents, particularly in remote areas. The limited numbers and capacities of batteries, however, can hardly sustain a long power outage without a well-designed allocation strategy. As a result, the service interruption occurs along with an increasing maintenance cost. Meanwhile, a deep discharge of a battery in such case can also accelerate the battery degradation and eventually contribute to a higher battery replacement cost. In this paper, we closely examine the base station features and backup battery features from a 1.5-year dataset of a major cellular service provider, including 4,206 base stations distributed across 8,400 square kilometers and more than 1.5 billion records on base stations and battery statuses. Through exploiting the correlations between the battery working conditions and battery statuses, we build up a deep learning based model to estimate the remaining lifetime of backup batteries. We then develop BatAlloc , a battery allocation framework to address the mismatch between the battery supporting ability and diverse power outage incidents. We present an effective solution that minimizes both the service interruption time and the overall cost. Our real trace-driven experiments show that BatAlloc cuts down the average service interruption time from 4.7 hours to nearly zero with only 85 percent of the overall cost compared to the current practical allocation. <s> BIB018
|
The rise of Internet-of-Things (IoT) technology has brought prosperity to a myriad of emerging applications on various mobile and wireless platforms, including smartphones BIB013 , sensor networks BIB006 , unmanned aerial vehicles (UAVs) BIB014 , BIB015 , cognitive smart systems BIB007 , and so on. To develop effective IoT applications, we typically follow a workflow model that consists of five components: question formulation, data collection, data analysis, visualization, and evaluation BIB005 . Among them, data analysis is a critical and computationally intensive part, wherein traditional technologies generally combine professional knowledge with machine learning (e.g., logistic regression, support vector machines, and random forests) to solve classification or regression problems (e.g., traffic condition prediction with a support vector machine (SVM) BIB002 , car tracking with a Kalman filter and ridge regression BIB003 , delivery time estimation with a Gaussian mixture model (GMM) BIB016 , and localization with an SVM BIB001 ). However, as human society steps into the ''Big Data'' era, such conventional approaches are not sufficiently powerful to process the massive, explosive, and irregular data collected from ubiquitous and heterogeneous IoT data sources. Almost all traditional systems rely on specially designed features, and their performance heavily depends on prior knowledge of specific fields. Most learning techniques applied in such systems utilize shallow architectures, which have very limited modeling and representational power. As such, a more powerful analytical tool is highly desirable to fully unleash the potential of the invaluable raw data generated in various IoT applications. The associate editor coordinating the review of this manuscript and approving it for publication was Chin-Feng Lai. VOLUME 7, 2019 This work is licensed under a Creative Commons Attribution 4.0 License.
The recent breakthroughs in deep learning and hardware design have enabled researchers to train much more powerful models, which greatly empower many applications such as crowdsourced delivery BIB017 , network caching BIB008 , energy management BIB018 , and edge computing BIB009 . In the following, we highlight the advantages of deep learning over traditional machine learning methods, which demonstrates the benefits of applying deep learning in IoT applications. • Deep learning incorporates deeper neural network architectures, which are able to extract more complex hidden features (such as temporal and/or spatial dependencies) and characterize more intricate problems. Unlike traditional simple learning methodologies, deep learning has more powerful capabilities in generalizing the complicated relationships in the massive raw data of various IoT applications. • Deep learning is able to take full advantage of the massive yet invaluable data resources. The data processing ability typically depends on the depth and the particular architectures of the learning models, such as convolutional architectures; hence, deep learning based models mostly perform better on large-scale data, while simple learning models may easily over-fit when dealing with a deluge of data. • Deep learning is an end-to-end learning method that automatically learns to extract effective features directly from the raw data, without the time-consuming and laborious hand-crafted feature specification. While a lot of effort has been made in the past few years, the whole area of leveraging deep learning in IoT applications is still in its infancy. A few articles surveying the applications of deep learning in IoT domains have been presented in the literature. Alsheikh et al. BIB004 mainly reviewed papers applying machine learning in wireless sensor networks (WSNs).
In BIB010 , the authors focused on surveying the application of deep learning techniques to healthcare. Another work BIB011 surveys state-of-the-art deep learning methods and their applicability in IoT applications, with an emphasis on big data and streaming data analytics. The authors in BIB012 present a comprehensive survey of commonly used deep learning algorithms and discuss their applications towards making manufacturing smart. Nevertheless, all these existing survey articles cover only parts of the IoT field. A survey that comprehensively reviews deep learning for a variety of IoT applications is still absent. Therefore, we believe that it is the right time to review the existing literature and to motivate future research directions. To this end, this article summarizes the up-to-date research progress and trends in leveraging deep learning tools to empower IoT applications. We put emphasis on four representative IoT application scenarios: smart healthcare, smart home, smart transportation, and smart industry. We aim to reveal how deep learning can be applied to enhance IoT applications from diverse perspectives. A main thrust of this topic is to seamlessly merge the two disciplines of deep learning and IoT, resulting in a broad spectrum of novel designs in IoT applications, such as health monitoring, disease analysis, indoor localization, intelligent control, home robotics, traffic prediction, traffic monitoring, autonomous driving, manufacturing inspection, and fault assessment. We also discuss the issues, challenges, and future research directions for applying deep learning in IoT applications. All these insights may motivate and inspire further developments in this promising field.
The rest of the paper is organized as follows: Section II introduces the classic deep learning models employed in the following sections, including Restricted Boltzmann Machines (RBMs), Autoencoders, Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs). Section III surveys the latest deep learning based IoT applications in four major application scenarios. Section IV outlines challenges and opportunities for leveraging deep learning in IoT applications. Section V concludes the article.
|
A Survey on Deep Learning Empowered IoT Applications <s> 1) RESTRICTED BOLTZMANN MACHINES <s> We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) RESTRICTED BOLTZMANN MACHINES <s> High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data. 
<s> BIB002 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) RESTRICTED BOLTZMANN MACHINES <s> Most of the existing approaches to collaborative filtering cannot handle very large data sets. In this paper we show how a class of two-layer undirected graphical models, called Restricted Boltzmann Machines (RBM's), can be used to model tabular data, such as user's ratings of movies. We present efficient learning and inference procedures for this class of models and demonstrate that RBM's can be successfully applied to the Netflix data set, containing over 100 million user/movie ratings. We also show that RBM's slightly outperform carefully-tuned SVD models. When the predictions of multiple RBM models and multiple SVD models are linearly combined, we achieve an error rate that is well over 6% better than the score of Netflix's own system. <s> BIB003 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) RESTRICTED BOLTZMANN MACHINES <s> Restricted Boltzmann machines (RBMs) are probabilistic graphical models that can be interpreted as stochastic neural networks. The increase in computational power and the development of faster learning algorithms have made them applicable to relevant machine learning problems. They attracted much attention recently after being proposed as building blocks of multi-layer learning systems called deep belief networks. This tutorial introduces RBMs as undirected graphical models. The basic concepts of graphical models are introduced first, however, basic knowledge in statistics is presumed. Different learning algorithms for RBMs are discussed. As most of them are based on Markov chain Monte Carlo (MCMC) methods, an introduction to Markov chains and the required MCMC techniques is provided. <s> BIB004
|
Restricted Boltzmann machines (RBMs) BIB004 are probabilistic graphical models that can be interpreted as stochastic neural networks. An RBM consists of m visible units that represent observable data and n hidden units that capture dependencies between the observed variables, providing a stochastic representation of the data. Fig. 1 shows a two-level RBM with m visible variables and n hidden variables. RBMs have been successful in dimensionality reduction and collaborative filtering BIB003 . A Deep Belief Network (DBN) forms a deep learning model by stacking RBMs BIB001 ; it is trained in a layer-by-layer manner using a greedy learning algorithm, and the contrastive divergence (CD) method is applied to update the weights. Neural networks are prone to getting trapped in local optima of a non-convex objective, resulting in poor performance. DBNs incorporate both unsupervised pre-training and supervised fine-tuning to construct the models: the former learns data distributions from unlabeled data, and the latter obtains an optimal solution through fine-tuning with labeled data BIB002 .
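The CD update mentioned above can be sketched in a few lines of NumPy. The following is our own illustrative toy (class and variable names are not from the cited works): a binary RBM with m = 4 visible and n = 2 hidden units, trained with one-step contrastive divergence (CD-1) on two repeating binary patterns.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Toy binary RBM trained with one-step contrastive divergence (CD-1)."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible bias
        self.c = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.c)            # P(h = 1 | v)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b)          # P(v = 1 | h)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0):
        ph0, h0 = self.sample_h(v0)                 # positive phase
        pv1, v1 = self.sample_v(h0)                 # one Gibbs step back to v
        ph1, _ = self.sample_h(v1)                  # negative phase
        n = len(v0)
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (ph0 - ph1).mean(axis=0)
        return float(np.mean((v0 - pv1) ** 2))      # reconstruction error

# two repeating binary patterns as toy "observable data"
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 50, dtype=float)
rbm = RBM(n_visible=4, n_hidden=2)
errors = [rbm.cd1_step(data) for _ in range(200)]   # error falls during training
```

In a DBN, the hidden activations of a trained RBM would then serve as the visible data for the next RBM in the stack.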
|
A Survey on Deep Learning Empowered IoT Applications <s> 2) AUTOENCODER <s> Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) AUTOENCODER <s> Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite. <s> BIB002
|
An autoencoder is a neural network trained to copy its input to its output. In contrast to the two-layer RBM, an autoencoder consists of three layers: an input layer, a hidden layer, and an output layer. The hidden layer describes a code used to represent the input, and the output layer produces a reconstruction of the input. Basically, the network consists of two major components: an encoder function f, which maps the input to the code, and a decoder function g, which produces the reconstruction. An autoencoder is trained by minimizing the error between the input and the output. Fig. 2 shows the general architecture of an autoencoder and a concrete example. As with RBMs, a deep model can be constructed by stacking autoencoders in a layer-by-layer manner: the hidden layer of a well-trained autoencoder is fed as the input layer of another autoencoder, and a multi-layer model is formed iteratively. Variants of the autoencoder include the sparse autoencoder BIB001 , the denoising autoencoder BIB002 , and the contractive autoencoder.
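As a minimal sketch of the encoder/decoder idea (our own toy example, using linear maps for simplicity; a nonlinearity such as tanh could be inserted at the code layer), the NumPy snippet below trains a 4-1-4 autoencoder by gradient descent on the reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data: 4-D points lying near a 1-D line, so a single code unit suffices
t = rng.random((200, 1))
X = t @ np.array([[1.0, 2.0, 3.0, 4.0]]) + 0.01 * rng.normal(size=(200, 4))

W1 = rng.normal(0.0, 0.1, (4, 1)); b1 = np.zeros(1)   # encoder f: x -> code
W2 = rng.normal(0.0, 0.1, (1, 4)); b2 = np.zeros(4)   # decoder g: code -> x_hat
lr = 0.05

for _ in range(2000):
    code = X @ W1 + b1                  # h = f(x)
    recon = code @ W2 + b2              # x_hat = g(h)
    err = recon - X                     # gradient of 0.5 * MSE w.r.t. recon
    # backpropagate the reconstruction error through decoder, then encoder
    gW2 = code.T @ err / len(X); gb2 = err.mean(axis=0)
    dcode = err @ W2.T
    gW1 = X.T @ dcode / len(X); gb1 = dcode.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# final reconstruction error: the 1-unit code captures the 1-D structure
mse = float(np.mean((X - ((X @ W1 + b1) @ W2 + b2)) ** 2))
```

The hidden code here compresses each 4-D point to a single number, which is exactly the dimensionality-reduction role the hidden layer plays in a stacked (deep) autoencoder.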
|
A Survey on Deep Learning Empowered IoT Applications <s> 1) CONVOLUTIONAL NEURAL NETWORKS (CNNs) <s> 1. The striate cortex was studied in lightly anaesthetized macaque and spider monkeys by recording extracellularly from single units and stimulating the retinas with spots or patterns of light. Most cells can be categorized as simple, complex, or hypercomplex, with response properties very similar to those previously described in the cat. On the average, however, receptive fields are smaller, and there is a greater sensitivity to changes in stimulus orientation. A small proportion of the cells are colour coded.2. Evidence is presented for at least two independent systems of columns extending vertically from surface to white matter. Columns of the first type contain cells with common receptive-field orientations. They are similar to the orientation columns described in the cat, but are probably smaller in cross-sectional area. In the second system cells are aggregated into columns according to eye preference. The ocular dominance columns are larger than the orientation columns, and the two sets of boundaries seem to be independent.3. There is a tendency for cells to be grouped according to symmetry of responses to movement; in some regions the cells respond equally well to the two opposite directions of movement of a line, but other regions contain a mixture of cells favouring one direction and cells favouring the other.4. A horizontal organization corresponding to the cortical layering can also be discerned. The upper layers (II and the upper two-thirds of III) contain complex and hypercomplex cells, but simple cells are virtually absent. The cells are mostly binocularly driven. Simple cells are found deep in layer III, and in IV A and IV B. In layer IV B they form a large proportion of the population, whereas complex cells are rare. 
In layers IV A and IV B one finds units lacking orientation specificity; it is not clear whether these are cell bodies or axons of geniculate cells. In layer IV most cells are driven by one eye only; this layer consists of a mosaic with cells of some regions responding to one eye only, those of other regions responding to the other eye. Layers V and VI contain mostly complex and hypercomplex cells, binocularly driven.5. The cortex is seen as a system organized vertically and horizontally in entirely different ways. In the vertical system (in which cells lying along a vertical line in the cortex have common features) stimulus dimensions such as retinal position, line orientation, ocular dominance, and perhaps directionality of movement, are mapped in sets of superimposed but independent mosaics. The horizontal system segregates cells in layers by hierarchical orders, the lowest orders (simple cells monocularly driven) located in and near layer IV, the higher orders in the upper and lower layers. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) CONVOLUTIONAL NEURAL NETWORKS (CNNs) <s> We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection. 
<s> BIB002 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) CONVOLUTIONAL NEURAL NETWORKS (CNNs) <s> Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. <s> BIB003
|
A CNN is a specialized kind of neural network for processing data that has a known, grid-like topology. CNNs were originally inspired by the concept of the receptive field, which comes from studies of the cat's visual cortex BIB001 . Convolution leverages three important ideas that can help improve a machine learning system: sparse interactions, parameter sharing, and equivariant representations. The basic CNN architecture is made up of a convolutional layer and a pooling layer, optionally followed by a fully connected layer for classification or prediction. In contrast to traditional neural networks, a CNN greatly decreases the number of parameters in the network and mitigates the gradient diffusion problem, which means that deep models containing more than 10 layers can be trained successfully. For example, AlexNet contains 8 layers, VGGNet [29] contains 11-19 layers, InceptionNet (GoogLeNet) BIB002 from Google contains 22 layers, and ResNet BIB003 from Microsoft contains up to 152 layers. Fig. 3 shows the general architecture of a classic CNN, LeNet.
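The three ideas above are easy to see in code. The sketch below (our own illustration, not from the surveyed papers) implements one valid 2-D convolution (strictly, cross-correlation, as in most CNN libraries) with a hand-picked vertical-edge kernel, followed by 2×2 max pooling:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: each output pixel depends only on a small
    receptive field (sparse interactions), and the same kernel weights are
    reused at every position (parameter sharing)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling, which downsamples the feature map."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.zeros((6, 6))
image[:, 3:] = 1.0                       # a vertical step edge
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], float)  # classic vertical-edge detector
fmap = conv2d(image, sobel_x)            # 6x6 -> 4x4, strong response at the edge
pooled = max_pool(fmap)                  # 4x4 -> 2x2 summary
```

Shifting the edge in the input shifts the response in the feature map by the same amount, which is the equivariance property; in a real CNN the kernel weights are learned rather than hand-picked.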
|
A Survey on Deep Learning Empowered IoT Applications <s> 2) RECURRENT NEURAL NETWORKS (RNNs) <s> Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O. 1. Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) RECURRENT NEURAL NETWORKS (RNNs) <s> In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found GRU to be comparable to LSTM. 
<s> BIB002 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) RECURRENT NEURAL NETWORKS (RNNs) <s> In this paper, we propose a novel neural network model called RNN Encoder‐ Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixedlength vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder‐Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases. <s> BIB003 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) RECURRENT NEURAL NETWORKS (RNNs) <s> Recurrent Neural Networks are showing much promise in many sub-areas of natural language processing, ranging from document classification to machine translation to automatic question answering. Despite their promise, many recurrent models have to read the whole text word by word, making it slow to handle long documents. For example, it is difficult to use a recurrent network to read a book and answer questions about it. In this paper, we present an approach of reading text while skipping irrelevant information if needed. The underlying model is a recurrent network that learns how far to jump after reading a few words of the input text. We employ a standard policy gradient method to train the model to make discrete jumping decisions. 
In our benchmarks on four different tasks, including number prediction, sentiment analysis, news article classification and automatic Q\&A, our proposed model, a modified LSTM with jumping, is up to 6 times faster than the standard sequential LSTM, while maintaining the same or even better accuracy. <s> BIB004
|
RNNs are a family of neural networks for processing sequential data, and they scale to much longer sequences than networks without sequence-based specialization. Many recurrent neural networks apply the equation h^(t) = f(h^(t-1), x^(t); θ), or a similar one, to define the values of their hidden units, as illustrated in Fig. 4 . From the network structure, we can observe that RNNs can remember previous information and use it to influence the outputs at subsequent steps. However, RNNs are restricted to looking back only a few steps, due to the gradient diffusion problem and the difficulty of capturing long-term dependencies. To solve these problems, new approaches like the LSTM (Long Short-Term Memory) BIB001 and the GRU (Gated Recurrent Unit) BIB002 have been proposed, which gate the hidden state to decide what to keep from the previous and current memory. These variants can efficiently capture long-term dependencies and lead to a stronger capacity for understanding language. Different from CNNs, which process spatially continuous data, RNNs focus on the connections between temporally continuous data. Therefore, RNNs are mostly employed in the natural language processing (NLP) field BIB004 - BIB003 .
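To make the recurrence concrete, the toy NumPy sketch below (our own illustration; the weights are random rather than trained) unrolls h^(t) = f(h^(t-1), x^(t); θ) with f a tanh layer, and shows that information from an early input is still present in the final hidden state:

```python
import numpy as np

rng = np.random.default_rng(2)

n_in, n_hidden = 3, 5
W_xh = rng.normal(0.0, 0.5, (n_in, n_hidden))      # input-to-hidden weights
W_hh = rng.normal(0.0, 0.5, (n_hidden, n_hidden))  # hidden-to-hidden weights
b = np.zeros(n_hidden)

def rnn_forward(xs):
    """Unroll h_t = tanh(x_t W_xh + h_{t-1} W_hh + b) over a sequence.
    The same parameters theta = (W_xh, W_hh, b) are shared at every step."""
    h = np.zeros(n_hidden)
    states = []
    for x in xs:
        h = np.tanh(x @ W_xh + h @ W_hh + b)
        states.append(h)
    return states

seq = rng.normal(size=(4, n_in))   # a length-4 input sequence
states = rnn_forward(seq)

# Perturb only the FIRST input: the final hidden state changes, showing that
# the network carries information from early steps forward through time.
seq2 = seq.copy()
seq2[0] += 1.0
states2 = rnn_forward(seq2)
```

An LSTM or GRU cell replaces the plain tanh update with gated updates, so that the influence of early inputs can persist over many more steps without vanishing.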
|
A Survey on Deep Learning Empowered IoT Applications <s> 1) HEALTH MONITORING <s> Energy Expenditure (EE) Estimation is an important step in tracking personal activity and preventing chronic diseases such as obesity, diabetes and cardiovascular diseases. Accurate and online EE estimation using small wearable sensors is a difficult task, primarily because most existing schemes work offline or using heuristics. In this work, we focus on accurate EE estimation for tracking ambulatory activities (walking, standing, climbing upstairs or downstairs) of individuals wearing mobile sensors. We use Convolution Neural Networks (CNNs) to automatically detect important features from data collected from triaxial accelerometer and heart rate sensors. Using CNNs, we find a significant improvement in EE estimation compared to other state-of-the-art models. We compare our results against state-of-the-art Activity-Specific Linear Regression as well as Artificial Neural Networks (ANN) based models. Using a universal CNN model, we obtain an overall low Root Mean Square Error (RMSE) of 1.12 which is 30% and 35% lower than existing models. The results were calibrated against a COSMED K4b2 indirect calorimeter readings. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) HEALTH MONITORING <s> Human activity recognition (HAR) in ubiquitous computing is beginning to adopt deep learning to substitute for well-established analysis techniques that rely on hand-crafted feature extraction and classification techniques. From these isolated applications of custom deep architectures it is, however, difficult to gain an overview of their suitability for problems ranging from the recognition of manipulative gestures to the segmentation and identification of physical activities like running or ascending stairs. 
In this paper we rigorously explore deep, convolutional, and recurrent approaches across three representative datasets that contain movement data captured with wearable sensors. We describe how to train recurrent approaches in this setting, introduce a novel regularisation approach, and illustrate how they outperform the state-of-the-art on a large benchmark dataset. Across thousands of recognition experiments with randomly sampled model configurations we investigate the suitability of each model for different tasks in HAR, explore the impact of hyperparameters using the fANOVA framework, and provide guidelines for the practitioner who wants to apply deep learning in their problem setting. <s> BIB002 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) HEALTH MONITORING <s> The increasing popularity of wearable devices in recent years means that a diverse range of physiological and functional data can now be captured continuously for applications in sports, wellbeing, and healthcare. This wealth of information requires efficient methods of classification and analysis where deep learning is a promising technique for large-scale data analytics. While deep learning has been successful in implementations that utilize high-performance computing platforms, its use on low-power wearable devices is limited by resource constraints. In this paper, we propose a deep learning methodology, which combines features learned from inertial sensor data together with complementary information from a set of shallow features to enable accurate and real-time activity classification. The design of this combined method aims to overcome some of the limitations present in a typical deep learning framework where on-node computation is required. To optimize the proposed method for real-time on-node computation, spectral domain preprocessing is used before the data are passed onto the deep learning framework. 
The classification accuracy of our proposed deep learning approach is evaluated against state-of-the-art methods using both laboratory and real world activity datasets. Our results show the validity of the approach on different human activity datasets, outperforming other methods, including the two methods used within our combined pipeline. We also demonstrate that the computation times for the proposed method are consistent with the constraints of real-time on-node processing on smartphones and a wearable sensor platform. <s> BIB003 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) HEALTH MONITORING <s> Deep learning can enable Internet of Things (IoT) devices to interpret unstructured multimedia data and intelligently react to both user and environmental events but has demanding performance and power requirements. The authors explore two ways to successfully integrate deep learning with low-power IoT products. <s> BIB004 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) HEALTH MONITORING <s> With the development of the artificial intelligence (AI), the AI applications have influenced and changed people's daily life greatly. Here, a wearable affective robot that integrates the affective robot, social robot, brain wearable, and wearable 2.0 is proposed for the first time. The proposed wearable affective robot is intended for a wide population, and we believe that it can improve the human health on the spirit level, meeting the fashion requirements at the same time. In this paper, the architecture and design of an innovative wearable affective robot, which is dubbed as Fitbot, are introduced in terms of hardware and algorithm's perspectives. In addition, the important functional component of the robot-brain wearable device is introduced from the aspect of the hardware design, EEG data acquisition and analysis, user behavior perception, and algorithm deployment, etc. Then, the EEG based cognition of user's behavior is realized. 
Through the continuous acquisition of the in-depth, in-breadth data, the Fitbot we present can gradually enrich user's life modeling and enable the wearable robot to recognize user's intention and further understand the behavioral motivation behind the user's emotion. The learning algorithm for the life modeling embedded in Fitbot can achieve better user's experience of affective social interaction. Finally, the application service scenarios and some challenging issues of a wearable affective robot are discussed. <s> BIB005 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) HEALTH MONITORING <s> Abstract With the rapid development of medical and computer technologies, the healthcare system has seen a surge of interest from both the academia and industry. However, most healthcare systems fail to consider the emergency situations of patients, and are unable to provide a personalized resource service for special users. To address this issue, in this paper, we propose the Edge-Cognitive-Computing-based (ECC-based) smart-healthcare system. This system is able to monitor and analyze the physical health of users using cognitive computing. It also adjusts the computing resource allocation of the whole edge computing network comprehensively according to the health-risk grade of each user. The experiments show that the ECC-based healthcare system provides a better user experience and optimizes the computing resources reasonably, as well as significantly improving in the survival rates of patients in a sudden emergency. <s> BIB006 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) HEALTH MONITORING <s> Gaining knowledge and actionable insights from complex, high-dimensional and heterogeneous biomedical data remains a key challenge in transforming health care. 
Various types of data have been emerging in modern biomedical research, including electronic health records, imaging, -omics, sensor data and text, which are complex, heterogeneous, poorly annotated and generally unstructured. Traditional data mining and statistical learning approaches typically need to first perform feature engineering to obtain effective and more robust features from those data, and then build prediction or clustering models on top of them. There are lots of challenges on both steps in a scenario of complicated data and lacking of sufficient domain knowledge. The latest advances in deep learning technologies provide new effective paradigms to obtain end-to-end learning models from complex data. In this article, we review the recent literature on applying deep learning technologies to advance the health care domain. Based on the analyzed work, we suggest that deep learning approaches could be the vehicle for translating big biomedical data into improved human health. However, we also note limitations and needs for improved methods development and applications, especially in terms of ease-of-understanding for domain experts and citizen scientists. We discuss such challenges and suggest developing holistic and meaningful interpretable architectures to bridge deep learning models and human interpretability. <s> BIB007 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) HEALTH MONITORING <s> Human action monitoring can be advantageous to remotely monitor the status of patients or elderly person for intelligent healthcare. Human action recognition enables efficient and accurate monitoring of human behaviors, which can exhibit multifaceted complexity attributed to disparities in viewpoints, personality, resolution and motion speed of individuals, etc. The spatial-temporal information plays an important role in the human action recognition. 
In this paper, we proposed a novel deep learning architecture named as recurrent 3D convolutional neural network (R3D) to extract effective and discriminative spatial-temporal features to be used for action recognition, which enables the capturing of long-range temporal information by aggregating the 3D convolutional network entries to serve as an input to the LSTM (Long Short-Term Memory) architecture. The 3D convolutional network and LSTM are two effective methods for extracting the temporal information. The proposed R3D network integrated these two methods by sharing a shared 3D convolutional network in sliding windows on video streaming to capturing short-term spatial-temporal features into the LSTM. The output features of LSTM encapsulate the long-range spatial-temporal information representing high-level abstraction of the human actions. The proposed algorithm is compared to traditional and the-state-of-the-art and deep learning algorithms. The experimental results demonstrated the effectiveness of the proposed system, which can be used as smart monitoring for remote healthcare. <s> BIB008 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) HEALTH MONITORING <s> Computerized electrocardiogram (ECG) interpretation plays a critical role in the clinical ECG workflow1. Widely available digital ECG data and the algorithmic paradigm of deep learning2 present an opportunity to substantially improve the accuracy and scalability of automated ECG analysis. However, a comprehensive evaluation of an end-to-end deep learning approach for ECG analysis across a wide variety of diagnostic classes has not been previously reported. Here, we develop a deep neural network (DNN) to classify 12 rhythm classes using 91,232 single-lead ECGs from 53,549 patients who used a single-lead ambulatory ECG monitoring device. 
When validated against an independent test dataset annotated by a consensus committee of board-certified practicing cardiologists, the DNN achieved an average area under the receiver operating characteristic curve (ROC) of 0.97. The average F1 score, which is the harmonic mean of the positive predictive value and sensitivity, for the DNN (0.837) exceeded that of average cardiologists (0.780). With specificity fixed at the average specificity achieved by cardiologists, the sensitivity of the DNN exceeded the average cardiologist sensitivity for all rhythm classes. These findings demonstrate that an end-to-end deep learning approach can classify a broad range of distinct arrhythmias from single-lead ECGs with high diagnostic performance similar to that of cardiologists. If confirmed in clinical settings, this approach could reduce the rate of misdiagnosed computerized ECG interpretations and improve the efficiency of expert human ECG interpretation by accurately triaging or prioritizing the most urgent conditions. Analysis of electrocardiograms using an end-to-end deep learning approach can detect and classify cardiac arrhythmia with high accuracy, similar to that of cardiologists. <s> BIB009
|
Nowadays, sensor-equipped smartphones and wearables enable a variety of mobile apps for health monitoring BIB005 , BIB006 . To implement such applications, people utilize Human Activity Recognition (HAR) to identify human activities and analyze health conditions BIB007 . However, the representative features hidden in the massive raw data call for more effective extraction models. Applying the advances of deep learning to activity recognition opens a promising avenue toward this problem. Hammerla et al. BIB002 build CNNs and LSTMs to analyze movement data and combine the results for better prediction of freezing of gait in Parkinson's disease patients. Zhu et al. BIB001 apply data from triaxial accelerometers and heart rate sensors to a CNN model and obtain promising results in predicting Energy Expenditure (EE), which helps to manage chronic diseases. Hannun et al. BIB009 train a 34-layer convolutional neural network that maps a sequence of ECG samples to a sequence of rhythm classes; its performance exceeds that of board-certified cardiologists in detecting a wide range of heart arrhythmias from electrocardiograms recorded with a single-lead wearable monitor. Gao et al. BIB008 propose a novel deep learning architecture, the recurrent 3D convolutional neural network (R3D). R3D extracts effective and discriminative spatial-temporal features for action recognition, capturing long-range temporal information by aggregating the 3D convolutional network entries to serve as input to the LSTM architecture. With the prevalence of wearable devices, we can monitor our health state and standardize our way of life at any time. However, directly deploying deep learning modules on low-power wearable devices is challenging due to their limited resources. Ravì et al. BIB003 apply spectral-domain preprocessing before the data are passed to the deep learning framework, so as to optimize real-time on-node computation on resource-limited devices. Tang et al. BIB004 explore two ways to successfully integrate deep learning with low-power IoT products.
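The HAR pipelines cited above all begin by segmenting the raw sensor stream into fixed-length overlapping windows before feeding a CNN or LSTM. A minimal sketch of that preprocessing step, with hypothetical window and stride sizes (the cited works use their own settings):

```python
import numpy as np

def sliding_windows(signal, win_len=128, stride=64):
    """Segment a (T, 3) triaxial accelerometer stream into overlapping
    fixed-length windows, the standard input format for CNN/LSTM HAR models."""
    windows = []
    for start in range(0, len(signal) - win_len + 1, stride):
        windows.append(signal[start:start + win_len])
    return np.stack(windows)  # shape: (n_windows, win_len, 3)

stream = np.random.default_rng(1).normal(size=(512, 3))  # a 512-sample triaxial stream
wins = sliding_windows(stream)
print(wins.shape)  # (7, 128, 3)
```

With 50% overlap (stride = win_len / 2), each sample contributes to two windows, which increases the number of training examples without discarding transitions between activities.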
|
A Survey on Deep Learning Empowered IoT Applications <s> 2) DISEASE ANALYSIS <s> Segmentation of anatomical structures in medical images is often based on a voxel/pixel classification approach. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images that fosters categorization. We propose a novel system for voxel classification integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of 3D image, respectively. We applied our method to the segmentation of tibial cartilage in low field knee MRI scans and tested it on 114 unseen scans. Although our method uses only 2D features at a single scale, it performs better than a state-of-the-art method using 3D multi-scale features. In the latter approach, the features and the classifier have been carefully adapted to the problem at hand. That we were able to get better results by a deep learning architecture that autonomously learns the features from the images is the main insight of this study. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) DISEASE ANALYSIS <s> Importance: Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation. Objective: To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs.
Design and Setting: A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency. Exposure: Deep learning-trained algorithm. Main Outcomes and Measures: The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity. Results: The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0.990 (95% CI, 0.986-0.995) for Messidor-2. Using the first operating cut point with high specificity, for EyePACS-1, the sensitivity was 90.3% (95% CI, 87.5%-92.7%) and the specificity was 98.1% (95% CI, 97.8%-98.5%). For Messidor-2, the sensitivity was 87.0% (95% CI, 81.1%-91.0%) and the specificity was 98.5% (95% CI, 97.7%-99.1%).
Using a second operating point with high sensitivity in the development set, for EyePACS-1 the sensitivity was 97.5% and specificity was 93.4% and for Messidor-2 the sensitivity was 96.1% and specificity was 93.9%. Conclusions and Relevance: In this evaluation of retinal fundus photographs from adults with diabetes, an algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy. Further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment. <s> BIB002 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) DISEASE ANALYSIS <s> Correct identification of prescription pills based on their visual appearance is a key step required to assure patient safety and facilitate more effective patient care. With the availability of high-quality cameras and computational power on smartphones, it is possible and helpful to identify unknown prescription pills using smartphones. Towards this goal, in 2016, the U.S. National Library of Medicine (NLM) of the National Institutes of Health (NIH) announced a nationwide competition, calling for the creation of a mobile vision system that can recognize pills automatically from a mobile phone picture under unconstrained real-world settings. In this paper, we present the design and evaluation of such mobile pill image recognition system called MobileDeepPill.
The development of MobileDeepPill involves three key innovations: a triplet loss function which attains invariances to real-world noisiness that deteriorates the quality of pill images taken by mobile phones; a multi-CNNs model that collectively captures the shape, color and imprints characteristics of the pills; and a Knowledge Distillation-based deep model compression framework that significantly reduces the size of the multi-CNNs model without deteriorating its recognition performance. Our deep learning-based pill image recognition algorithm wins the First Prize (champion) of the NIH NLM Pill Image Recognition Challenge. Given its promising performance, we believe MobileDeepPill helps NIH tackle a critical problem with significant societal impact and will benefit millions of healthcare personnel and the general public. <s> BIB003 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) DISEASE ANALYSIS <s> The recent emergence of deep learning methods for medical image analysis has enabled the development of intelligent medical imaging-based diagnosis systems that can assist the human expert in making better decisions about a patients health. In this paper we focus on the problem of skin lesion classification, particularly early melanoma detection, and present a deep-learning based approach to solve the problem of classifying a dermoscopic image containing a skin lesion as malignant or benign. The proposed solution is built around the VGGNet convolutional neural network architecture and uses the transfer learning paradigm. Experimental results are encouraging: on the ISIC Archive dataset, the proposed method achieves a sensitivity value of 78.66%, which is significantly higher than the current state of the art on that dataset. 
<s> BIB004 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) DISEASE ANALYSIS <s> According to a report by the World Health Organization, diseases caused by an unhealthy lifestyle represent the leading cause of death all over the world. Therefore, it is crucial to monitor and avoid users' unhealthy behaviors. Existing health monitoring approaches still face many challenges of limited intelligence due to insufficient healthcare data. Therefore, this article proposes a smart personal health advisor (SPHA) for comprehensive and intelligent health monitoring and guidance. The SPHA monitors both physiological and psychological states of the user. The SPHAScore model is proposed to evaluate the overall health status of the user. Finally, a testbed for verification of feasibility and applicability of the proposed system was developed. The experimental and simulation results have shown that the proposed approach is efficient for proper user state monitoring. <s> BIB005 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) DISEASE ANALYSIS <s> Recent advances in wireless networking and big data technologies, such as 5G networks, medical big data analytics, and the Internet of Things, along with recent developments in wearable computing and artificial intelligence, are enabling the development and implementation of innovative diabetes monitoring systems and applications. Due to the life-long and systematic harm suffered by diabetes patients, it is critical to design effective methods for the diagnosis and treatment of diabetes. Based on our comprehensive investigation, this article classifies those methods into Diabetes 1.0 and Diabetes 2.0, which exhibit deficiencies in terms of networking and intelligence. Thus, our goal is to design a sustainable, cost-effective, and intelligent diabetes diagnosis solution with personalized treatment. 
In this article, we first propose the 5G-Smart Diabetes system, which combines the state-of-the-art technologies such as wearable 2.0, machine learning, and big data to generate comprehensive sensing and analysis for patients suffering from diabetes. Then we present the data sharing mechanism and personalized data analysis model for 5G-Smart Diabetes. Finally, we build a 5G-Smart Diabetes testbed that includes smart clothing, smartphone, and big data clouds. The experimental results show that our system can effectively provide personalized diagnosis and treatment suggestions to patients. <s> BIB006 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) DISEASE ANALYSIS <s> Smart city advancements are driving massive transformations of healthcare, the largest global industry. The drivers include increasing demands for ubiquitous, preventive, and personalized healthcare, to be provided to the public at reduced risks and costs. Mobile cloud computing could potentially meet the future healthcare demands by enabling anytime, anywhere capture and analyses of patients’ data. However, network latency, bandwidth, and reliability are among the many challenges hindering the realization of next-generation healthcare. This paper proposes a ubiquitous healthcare framework, UbeHealth, that leverages edge computing, deep learning, big data, high-performance computing (HPC), and the Internet of Things (IoT) to address the aforementioned challenges. The framework enables an enhanced network quality of service using its three main components and four layers. Deep learning, big data, and HPC are used to predict network traffic, which in turn are used by the Cloudlet and network layers to optimize data rates, data caching, and routing decisions. Application protocols of the traffic flows are classified, enabling the network layer to meet applications’ communication requirements better and to detect malicious traffic and anomalous data. 
Clustering is used to identify the different kinds of data originating from the same application protocols. A proof of concept UbeHealth system has been developed based on the framework. A detailed literature review is used to capture the design requirements for the proposed system. The system is described in detail including the algorithmic implementation of the three components and four layers. Three widely used data sets are used to evaluate the UbeHealth system. <s> BIB007 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) DISEASE ANALYSIS <s> This paper proposes an intelligent medicine recognition system based on deep learning techniques, named ST-Med-Box. The proposed system can assist chronic patients in taking multiple medications correctly and avoiding in taking the wrong medications, which may cause drug interactions, and can provide other medication-related functionalities such as reminders to take medications on time, medication information, and chronic patient information management. The proposed system consists of an intelligent medicine recognition device, an app running on an Android-based mobile device, a deep learning training server, and a cloud-based management platform. Currently, eight different medicines can be recognized by the proposed system. The experimental results show that the recognition accuracy reaches 96.6%. Therefore, the proposed system can effectively reduce the problem of drug interactions caused by taking incorrect drugs, thereby reducing the cost of medical treatment and giving patients with chronic diseases a safe medication environment. <s> BIB008
|
Medical image classification and analysis is an important topic in healthcare. Following its success in computer vision, deep learning has been widely used to assist disease image analysis BIB005 , BIB006 . CNNs are used to infer a hierarchical representation of low-field knee MRI scans to automatically segment cartilage and predict the risk of osteoarthritis BIB001 . Another work BIB002 uses CNNs to identify diabetic retinopathy in retinal fundus photographs, obtaining high sensitivity and specificity over about 10,000 test images with respect to certified ophthalmologist annotations. Beyond medical image recognition, deep learning has been employed in other applications. For instance, Zeng et al. BIB003 present a deep-learning-based pill image recognition model that helps identify unknown prescription pills using smartphones. Lopez et al. BIB004 propose a deep-learning-based approach to classify a dermoscopic image containing a skin lesion as malignant or benign. A ubiquitous healthcare framework called UbeHealth is proposed to address challenges in network latency, bandwidth, and reliability BIB007 . Chang et al. BIB008 propose a deep-learning-based intelligent medicine recognition system called ST-Med-Box, which assists chronic patients in taking multiple medications correctly and avoiding wrong medications that may cause drug interactions; it also provides other medication-related functionalities such as reminders to take medications on time, medication information, and chronic patient information management.
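The sensitivity and specificity figures reported for the retinopathy classifier above are derived from the confusion matrix of predictions against expert annotations. A small illustrative sketch of that computation (toy labels, not the EyePACS-1 or Messidor-2 data):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    with 1 marking a referable (positive) case."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# toy evaluation: 4 positive cases, 6 negative cases
sens, spec = sensitivity_specificity([1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                                     [1, 1, 1, 0, 0, 0, 0, 0, 0, 1])
print(round(sens, 2), round(spec, 2))  # 0.75 0.83
```

Moving the classifier's decision threshold trades one metric against the other, which is why the cited study reports two operating points: one tuned for high specificity and one for high sensitivity.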
|
A Survey on Deep Learning Empowered IoT Applications <s> 1) INDOOR LOCALIZATION <s> The proliferation of mobile computing devices and local-area wireless networks has fostered a growing interest in location-aware systems and services. In this paper we present RADAR, a radio-frequency (RF)-based system for locating and tracking users inside buildings. RADAR operates by recording and processing signal strength information at multiple base stations positioned to provide overlapping coverage in the area of interest. It combines empirical measurements with signal propagation modeling to determine user location and thereby enable location-aware services and applications. We present experimental results that demonstrate the ability of RADAR to estimate user location with a high degree of accuracy. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) INDOOR LOCALIZATION <s> Because of the advanced development in computer technology, home automation system could provide a variety of convenient and novel services to people. But only providing many kinds of services is not enough; instead, upgrading the quality of services is also a very important issue. One way to upgrade the service quality is to customize the service according to the inhabitant's personal situation, and the user location is the key information for the home automation system to customize the services. Another impact of the advanced computer technology is to make the personal digital device to commonly have the capability to communicate through the wireless networks, and the popularity of wireless networks in home has increased in recent years. As a result, home automation system can bring services to personal digital devices held by people through any wireless network, and customize the services according to the location of personal digital device in home. In this paper, we present a location determination system for the home automation system to provide location aware services. 
This location determination system uses support vector machine to classify the location of a wireless client from its signal strength measures, and we describe its architecture and discuss its performance. <s> BIB002 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) INDOOR LOCALIZATION <s> Estimating the location of people and tracking them in an indoor environment poses a fundamental challenge in ubiquitous computing. The accuracy of explicit positioning sensors such as GPS is often limited for indoor environments. In this study, we evaluate the feasibility of building an indoor location tracking system that is cost effective for large scale deployments, can operate over existing Wi-Fi networks, and can provide flexibility to accommodate new sensor observations as they become available. At the core of our system is a novel location and tracking algorithm using a sigma-point Kalman smoother (SPKS) based Bayesian inference approach. The proposed SPKS fuses a predictive model of human walking with a number of low-cost sensors to track 2D position and velocity. Available sensors include Wi-Fi received signal strength indication (RSSI), binary infrared (IR) motion sensors, and binary foot-switches. Wi-Fi signal strength is measured using a receiver tag developed by Ekahau Inc. The performance of the proposed algorithm is compared with a commercially available positioning engine, also developed by Ekahau Inc. The superior accuracy of our approach over a number of trials is demonstrated. <s> BIB003 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) INDOOR LOCALIZATION <s> This paper exploits recent developments in sparse approximation and compressive sensing to efficiently perform localization in wireless networks. Particularly, we re-formulate the localization problem as a sparse approximation problem using the compressive sensing theory that provides a new paradigm for recovering a sparse signal solving an l 1 minimization problem. 
The proposed received signal strength-based method does not require any time specific/propriatery hardware since the location estimation is performed at the Access Points (APs). The experimental results show that our proposed method, when compared with traditional localization schemes results in a better accuracy in terms of the mean localization error. <s> BIB004 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) INDOOR LOCALIZATION <s> Along with the proliferation of mobile devices and wireless signal coverage, indoor localization based on Wi-Fi gets great popularity. Fingerprint based method is the mainstream approach for Wi-Fi indoor localization, for it can achieve high localization performance as long as labeled data are sufficient. However, the number of labeled data is always limited due to the high cost of data acquisition. Nowadays, crowd sourcing becomes an effective approach to gather large number of data; meanwhile, most of them are unlabeled. Therefore, it is worth studying the use of unlabeled data to improve localization performance. To achieve this goal, a novel algorithm Semi-supervised Deep Extreme Learning Machine (SDELM) is proposed, which takes the advantages of semi-supervised learning, Deep Leaning (DL), and Extreme Learning Machine (ELM), so that the localization performance can be improved both in the feature extraction procedure and in the classifier. The experimental results in real indoor environments show that the proposed SDELM not only outperforms other compared methods but also reduces the calibration effort with the help of unlabeled data. <s> BIB005 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) INDOOR LOCALIZATION <s> With the fast-growing demand of location-based services in indoor environments, indoor positioning based on fingerprinting has attracted significant interest due to its high accuracy. 
In this paper, we present a novel deep-learning-based indoor fingerprinting system using channel state information (CSI), which is termed DeepFi. Based on three hypotheses on CSI, the DeepFi system architecture includes an offline training phase and an online localization phase. In the offline training phase, deep learning is utilized to train all the weights of a deep network as fingerprints. Moreover, a greedy learning algorithm is used to train the weights layer by layer to reduce complexity. In the online localization phase, we use a probabilistic method based on the radial basis function to obtain the estimated location. Experimental results are presented to confirm that DeepFi can effectively reduce location error, compared with three existing methods in two representative indoor environments. <s> BIB006 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) INDOOR LOCALIZATION <s> Device-free wireless localization and activity recognition (DFLAR) is a new technique, which could estimate the location and activity of a target by analyzing its shadowing effect on surrounding wireless links. This technique neither requires the target to be equipped with any device nor involves privacy concerns, which makes it an attractive and promising technique for many emerging smart applications. The key question of DFLAR is how to characterize the influence of the target on wireless signals. Existing work generally utilizes statistical features extracted from wireless signals, such as mean and variance in the time domain and energy as well as entropy in the frequency domain, to characterize the influence of the target. However, a feature suitable for distinguishing some activities or gestures may perform poorly when it is used to recognize other activities or gestures. Therefore, one has to manually design handcraft features for a specific application. 
Inspired by its excellent performance in extracting universal and discriminative features, in this paper, we propose a deep learning approach for realizing DFLAR. Specifically, we design a sparse autoencoder network to automatically learn discriminative features from the wireless signals and merge the learned features into a softmax-regression-based machine learning framework to realize location, activity, and gesture recognition simultaneously. Extensive experiments performed in a clutter indoor laboratory and an apartment with eight wireless nodes demonstrate that the DFLAR system using the learned features could achieve 0.85 or higher accuracy, which is better than the systems utilizing traditional handcraft features. <s> BIB007 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) INDOOR LOCALIZATION <s> Smart services are an important element of the smart cities and the Internet of Things (IoT) ecosystems where the intelligence behind the services is obtained and improved through the sensory data. Providing a large amount of training data is not always feasible; therefore, we need to consider alternative ways that incorporate unlabeled data as well. In recent years, deep reinforcement learning (DRL) has gained great success in several application domains. It is an applicable method for IoT and smart city scenarios where auto-generated data can be partially labeled by users’ feedback for training purposes. In this paper, we propose a semisupervised DRL model that fits smart city applications as it consumes both labeled and unlabeled data to improve the performance and accuracy of the learning agent. The model utilizes variational autoencoders as the inference engine for generalizing optimal policies. To the best of our knowledge, the proposed model is the first investigation that extends DRL to the semisupervised paradigm. 
As a case study of smart city applications, we focus on smart buildings and apply the proposed model to the problem of indoor localization based on Bluetooth low energy signal strength. Indoor localization is the main component of smart city services since people spend significant time in indoor environments. Our model learns the best action policies that lead to a close estimation of the target locations with an improvement of 23% in terms of distance to the target and at least 67% more received rewards compared to the supervised DRL model. <s> BIB008
|
With the proliferation of mobile devices, indoor localization has gradually become a critical research issue, since it is not viable to employ the Global Positioning System (GPS) in indoor environments. Indoor localization enables numerous smart home services, such as wireless intruder detection, elder monitoring, and baby monitoring, yet it faces many propagation challenges, including the multi-path effect, fading, and delay distortion. High accuracy and short processing time are indispensable performance indicators when designing an indoor localization system, and fingerprinting-based indoor localization is an effective way to satisfy both requirements. RSSI (Received Signal Strength Indication) based fingerprints are known to be unstable and inaccurate, so the more informative Wi-Fi Channel State Information (CSI) has become the most widely adopted fingerprint in current systems. In addition, traditional positioning systems rely on methods such as K nearest neighbors (KNN) BIB001 , Bayesian models BIB003 , SVM BIB002 , and compressive sensing BIB004 , which are not well suited to massive data. Researchers have therefore turned to deep neural networks. Gu et al. BIB005 propose a novel algorithm called Semi-supervised Deep Extreme Learning Machine (SDELM), which combines the advantages of semi-supervised learning, deep learning, and extreme learning machines. This approach achieves satisfactory localization performance and reduces the calibration effort by making full use of unlabeled data. Mohammadi et al. BIB008 propose a semi-supervised deep reinforcement learning (DRL) model based on Bluetooth low energy signal strengths, which utilizes variational autoencoders as the inference engine for generalizing optimal policies. Wang et al. BIB006 utilize a 4-layer RBM to process the raw CSI data and obtain the locations.
Yet the proposed system takes a device-oriented approach, which fails when people carry no cell phones or refuse to connect their phones to the APs. To this end, Wang et al. BIB007 develop a device-free approach based on the observation that APs receive different signals when people stand at different locations. They design a 4-layer RBM model to extract features from the raw CSI data and use random forests (RF) to classify locations from these features. In addition, they employ a contamination-estimation step to eliminate errors in the CSI values at a fixed location caused by multi-path effects, e.g., an opened window or door. Nine APs are employed to collect data related to people's locations, and a wavelet filter is used to preprocess the raw data. By fusing these multiple measurements, the results become more robust: the system can even recognize activities such as bowing and walking, or gestures such as hand-clapping and hand-waving.
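The classical KNN fingerprinting baseline cited above can be sketched in a few lines. The following is a hypothetical minimal example (the toy radio map and query values are made up, not taken from any surveyed system): each fingerprint is an RSSI vector with a known position, and a query is located by averaging the positions of its k nearest fingerprints.

```python
import numpy as np

def knn_localize(fingerprints, positions, query_rssi, k=3):
    """Estimate a 2D position by averaging the positions of the
    k reference fingerprints closest to the query RSSI vector."""
    dists = np.linalg.norm(fingerprints - query_rssi, axis=1)
    nearest = np.argsort(dists)[:k]
    return positions[nearest].mean(axis=0)

# Toy radio map: 4 reference points, RSSI from 3 access points (dBm).
fingerprints = np.array([[-40., -70., -80.],
                         [-70., -40., -80.],
                         [-80., -70., -40.],
                         [-60., -60., -60.]])
positions = np.array([[0., 0.], [4., 0.], [2., 4.], [2., 1.]])

# Locate a query measurement using its 2 nearest fingerprints.
est = knn_localize(fingerprints, positions,
                   np.array([-45., -65., -75.]), k=2)
```

Deep models replace the raw Euclidean matching above with learned features, but the offline radio-map/online-query structure is the same.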
|
A Survey on Deep Learning Empowered IoT Applications <s> 2) INTELLIGENT CONTROL <s> Abstract Mobile phones seem to present the perfect user interface for interacting with smart environments, e.g. smart-home systems, as they are nowadays ubiquitous and equipped with an increasing amount of sensors and interface components, such as multi-touch screens. After giving an overview on related work this paper presents the adapted design methodology proposed by Wobbrock et al. (2009) for the development of a gesture-based user interface to a smart-home system. The findings for the new domain, device and gesture space are presented and compared to findings by Wobbrock et al. (2009) . Three additional steps are described: A small pre-test survey, a mapping and a memory test and a performance test of the implemented system. This paper shows the adaptability of the approach described by Wobbrock et al. (2009) for three-dimensional gestures in the smart-home domain. Elicited gestures are described and a first implementation of a user interface based on these gestures is presented. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) INTELLIGENT CONTROL <s> Wireless sensor networks (WSNs) and power line communications (PLCs) are used in this work to implement a smart home control network. The goals are to reduce the impact of wireless interference on a smart home control network and unnecessary energy consumption of a smart home. An isolated WSN with one coordinator, which is integrated into the PLC transceiver, is established in each room. The coordinator is responsible for transferring environmental parameters obtained by WSNs to the management station via PLCs. The control messages for home appliances are directly transferred using PLCs rather than WSNs. According to the experimental results, the impact of wireless interference on the proposed smart home control network is substantially mitigated. 
Additionally, a smart control algorithm for lighting systems and an analysis of the illumination of a fluorescent lamp were presented. The energy saving of lighting systems relative to those without smart control was evaluated. Numerical results indicate that the electricity consumption on a sunny or cloudy day can be reduced by at least 40% under the smart control. Moreover, a prototype for the proposed smart home control network with the smart control algorithm was implemented. Experimental tests demonstrate that the proposed system for smart home control networks is practically feasible and performs well. <s> BIB002
|
Nowadays, home appliances can connect to the Internet and provide intelligent services. Li and Lin BIB002 utilize WSNs and power line communications (PLCs) to implement a smart home control network. To reduce both the impact of wireless interference on the control network and unnecessary energy consumption, an isolated WSN with one coordinator, integrated into the PLC transceiver, is established in each room. The coordinator is responsible for transferring the environmental parameters obtained by the WSN to the management station via PLC, while control messages for home appliances are transferred directly over PLC rather than the WSNs. The user interface is also an important research field for improving the user experience: the authors in BIB001 propose a gesture-based user interface for the development of a smart home system. Recently, deep learning techniques have shown great success in digital personal assistant products such as Microsoft's Cortana, Apple's Siri, Amazon Alexa, and Google Assistant . Such dialogue-system-based products may serve as next-generation smart home controllers.
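The per-room coordinator architecture of BIB002 can be illustrated with a minimal sketch. All names below are hypothetical, and a plain Python list stands in for the shared power-line link; this is a sketch of the data flow, not the authors' implementation.

```python
class Coordinator:
    """Per-room WSN coordinator integrated with the PLC transceiver
    (hypothetical sketch of the architecture in BIB002)."""

    def __init__(self, room, plc_bus):
        self.room = room
        self.plc_bus = plc_bus   # shared power-line channel (stand-in)
        self.readings = {}

    def on_sensor_report(self, sensor_id, value):
        # Collect environmental parameters from the room's isolated WSN.
        self.readings[sensor_id] = value

    def forward_to_station(self):
        # Transfer the aggregated parameters to the management
        # station over the PLC link, keeping the WSN traffic local.
        self.plc_bus.append((self.room, dict(self.readings)))

plc_bus = []
c = Coordinator("living_room", plc_bus)
c.on_sensor_report("temp", 22.5)
c.on_sensor_report("lux", 310)
c.forward_to_station()
```

Keeping each room's WSN isolated and pushing only aggregated readings onto the PLC is what mitigates wireless interference between rooms in this design.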
|
A Survey on Deep Learning Empowered IoT Applications <s> 3) HOME ROBOTICS <s> This paper is concerned with constructing a prototype smart home environment which has been built in the research building of Korea Institute of Industrial Technology (KITECH) to demonstrate the practicability of a robot-assisted future home environment. Key functionalities that a home service robot must provide are localization, navigation, object recognition and object handling. A considerable amount of research has been conducted to make the service robot perform these operations with its own sensors, actuators and a knowledge database. With all heavy sensors, actuators and a database, the robot could have performed the given tasks in a limited environment or showed the limited capabilities in a natural environment. We initiated a smart home environment project for light-weight service robots to provide reliable services by interacting with the environment through the wireless sensor networks. This environment consists of the following three main components: smart objects with an radio frequency identification (RFID) tag and smart appliances with sensor network functionality; the home server that connects smart devices as well as maintains information for reliable services; and the service robots that perform tasks in collaboration with the environment. In this paper, we introduce various types of smart devices which are developed for assisting the robot by providing sensing and actuation, and present our approach on the integration of these devices to construct the smart home environment. Finally, we discuss the future directions of our project. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> 3) HOME ROBOTICS <s> We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. 
To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing. <s> BIB002
|
Equipped with sensors, actuators, and databases, home robots can perform various tasks in home environments. In general, home service robots should provide key functionalities including localization, navigation, map building, human-robot interaction, object recognition, and object handling BIB001 . Robotic navigation in GPS-denied environments requires case-specific approaches for driving a mobile robot to any desired destination. In , a new approach for autonomous navigation that identifies markers or objects from images and videos is presented, using pattern recognition and machine learning techniques such as CNNs. Computational intelligence techniques are implemented along with the robot operating system and object positioning to navigate toward these objects and markers using an RGB-depth camera. Multiple potential matching objects detected by deep neural network object detectors are displayed on a screen installed on the assistive robot to improve and evaluate Human-Robot Interaction (HRI). To improve hand-eye coordination for object handling, Levine et al. BIB002 train a large convolutional neural network to predict the probability that a task-space motion of the gripper will result in a successful grasp, using only monocular camera images and independently of camera calibration or the current robot pose.
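The servoing loop in BIB002 repeatedly scores candidate gripper motions and executes the most promising one. The sketch below is a hedged stand-in: a simple logistic scorer replaces the CNN, and the feature vectors, candidate motions, and weights are all made-up illustrative values.

```python
import numpy as np

def grasp_success_prob(image_feat, motion, w):
    """Stand-in for the grasp CNN: score a candidate task-space
    motion given image features, via a logistic model."""
    x = np.concatenate([image_feat, motion])
    return 1.0 / (1.0 + np.exp(-w @ x))

def pick_motion(image_feat, candidates, w):
    # Servoing step: choose the candidate motion with the highest
    # predicted probability of a successful grasp.
    probs = [grasp_success_prob(image_feat, m, w) for m in candidates]
    return candidates[int(np.argmax(probs))]

image_feat = np.array([0.2, -0.5, 0.7])       # hypothetical visual features
candidates = [np.array([0.1, 0.0]),           # hypothetical motions
              np.array([0.0, 0.3])]
w = np.array([0.5, -1.0, 0.8, 2.0, -0.5])     # hypothetical learned weights

best = pick_motion(image_feat, candidates, w)
```

In the real system the scorer is a deep CNN trained on hundreds of thousands of grasp attempts, and the loop re-scores motions continuously so the robot can correct mistakes mid-grasp.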
|
A Survey on Deep Learning Empowered IoT Applications <s> 1) TRAFFIC FLOW PREDICTION <s> Most literature on short-term traffic flow forecasting focused mainly on normal, or non-incident, conditions and, hence, limited their applicability when traffic flow forecasting is most needed, i.e., incident and atypical conditions. Accurate prediction of short-term traffic flow under atypical conditions, such as vehicular crashes, inclement weather, work zone, and holidays, is crucial to effective and proactive traffic management systems in the context of intelligent transportation systems (ITS) and, more specifically, dynamic traffic assignment (DTA). To this end, this paper presents an application of a supervised statistical learning technique called Online Support Vector machine for Regression, or OL-SVR, for the prediction of short-term freeway traffic flow under both typical and atypical conditions. The OL-SVR model is compared with three well-known prediction models including Gaussian maximum likelihood (GML), Holt exponential smoothing, and artificial neural net models. The resultant performance comparisons suggest that GML, which relies heavily on the recurring characteristics of day-to-day traffic, performs slightly better than other models under typical traffic conditions, as demonstrated by previous studies. Yet OL-SVR is the best performer under non-recurring atypical traffic conditions. It appears that for deployed ITS systems that gear toward timely response to real-world atypical and incident situations, OL-SVR may be a better tool than GML. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) TRAFFIC FLOW PREDICTION <s> Traffic flow prediction is a fundamental problem in transportation modeling and management. Many existing approaches fail to provide favorable results due to being: 1) shallow in architecture; 2) hand engineered in features; and 3) separate in learning. 
In this paper we propose a deep architecture that consists of two parts, i.e., a deep belief network (DBN) at the bottom and a multitask regression layer at the top. A DBN is employed here for unsupervised feature learning. It can learn effective features for traffic flow prediction in an unsupervised fashion, which has been examined and found to be effective for many areas such as image and audio classification. To the best of our knowledge, this is the first paper that applies the deep learning approach to transportation research. To incorporate multitask learning (MTL) in our deep architecture, a multitask regression layer is used above the DBN for supervised prediction. We further investigate homogeneous MTL and heterogeneous MTL for traffic flow prediction. To take full advantage of weight sharing in our deep architecture, we propose a grouping method based on the weights in the top layer to make MTL more effective. Experiments on transportation data sets show good performance of our deep architecture. Abundant experiments show that our approach achieved close to 5% improvements over the state of the art. It is also presented that MTL can improve the generalization performance of shared tasks. These positive results demonstrate that deep learning and MTL are promising in transportation research. <s> BIB002 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) TRAFFIC FLOW PREDICTION <s> Accurate and timely traffic flow information is important for the successful deployment of intelligent transportation systems. Over the last few years, traffic data have been exploding, and we have truly entered the era of big data for transportation. Existing traffic flow prediction methods mainly use shallow traffic prediction models and are still unsatisfying for many real-world applications. This situation inspires us to rethink the traffic flow prediction problem based on deep architecture models with big traffic data. 
In this paper, a novel deep-learning-based traffic flow prediction method is proposed, which considers the spatial and temporal correlations inherently. A stacked autoencoder model is used to learn generic traffic flow features, and it is trained in a greedy layerwise fashion. To the best of our knowledge, this is the first time that a deep architecture model is applied using autoencoders as building blocks to represent traffic flow features for prediction. Moreover, experiments demonstrate that the proposed method for traffic flow prediction has superior performance. <s> BIB003 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) TRAFFIC FLOW PREDICTION <s> Abstract We develop a deep learning model to predict traffic flows. The main contribution is development of an architecture that combines a linear model that is fitted using l 1 regularization and a sequence of tanh layers. The challenge of predicting traffic flows are the sharp nonlinearities due to transitions between free flow, breakdown, recovery and congestion. We show that deep learning architectures can capture these nonlinear spatio-temporal effects. The first layer identifies spatio-temporal relations among predictors and other layers model nonlinear relations. We illustrate our methodology on road sensor data from Interstate I-55 and predict traffic flows during two special events; a Chicago Bears football game and an extreme snowstorm event. Both cases have sharp traffic flow regime changes, occurring very suddenly, and we show how deep learning provides precise short term traffic flow predictions. <s> BIB004 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) TRAFFIC FLOW PREDICTION <s> Short-term traffic forecast is one of the essential issues in intelligent transportation system. Accurate forecast result enables commuters make appropriate travel modes, travel routes, and departure time, which is meaningful in traffic management. 
To promote the forecast accuracy, a feasible way is to develop a more effective approach for traffic data analysis. The availability of abundant traffic data and computation power emerge in recent years, which motivates us to improve the accuracy of short-term traffic forecast via deep learning approaches. A novel traffic forecast model based on long short-term memory (LSTM) network is proposed. Different from conventional forecast models, the proposed LSTM network considers temporal-spatial correlation in traffic system via a two-dimensional network which is composed of many memory units. A comparison with other representative forecast models validates that the proposed LSTM network can achieve a better performance. <s> BIB005
|
Traffic flow prediction is a fundamental problem in transportation modeling and management as well as in intelligent transportation system design, which nowadays heavily depends on historical and real-time traffic data collected from all kinds of sensors, including inductive loops, cameras, crowd sourcing, social media, and so on. On such massive heterogeneous data, classical machine learning methods, e.g., SVM, consume substantial time and computation resources. In addition, hand-engineered features often cannot deliver satisfactory accuracy because of the limits of prior knowledge. The authors in BIB001 propose an online-SVR method for short-term traffic flow prediction under both typical and atypical conditions, although several SVR models need to be built, which consumes considerable memory. Recently, deep learning has drawn major attention from both academia and industry for its ability to extract inherent features from data and exploit the rich amount of traffic data. Huang et al. BIB002 propose a DBN model to capture features from each part of the road traffic network; with the idea of multitask learning, features from related roads and stations are grouped to explore the nature of the whole network and predict traffic flow. Lv et al. BIB003 propose a stacked autoencoder (SAE) model that extracts features from historical data and makes predictions with them. Many other works have also applied deep learning to traffic and crowd flow prediction BIB005 , BIB004 .
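The SAE and LSTM models above all share a sliding-window setup: a vector of recent flow readings predicts the next one. As a minimal point of reference (not any surveyed model), the sketch below fits a plain autoregressive least-squares predictor on synthetic periodic flow data; all values are hypothetical.

```python
import numpy as np

def make_windows(series, lag):
    """Turn a flow series into (lagged-input, next-value) pairs."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X, y

# Synthetic periodic traffic flow (vehicles per 5-min interval).
t = np.arange(300)
flow = 200 + 80 * np.sin(2 * np.pi * t / 50)

# Fit a linear autoregressive model with an intercept term.
X, y = make_windows(flow, lag=6)
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

# One-step-ahead forecast from the last observed window.
pred = np.r_[flow[-6:], 1.0] @ w
```

Deep models replace the linear map with stacked nonlinear layers (and, for LSTMs, explicit memory across windows), which is what lets them capture the sharp regime changes that linear predictors miss.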
|
A Survey on Deep Learning Empowered IoT Applications <s> 2) TRAFFIC MONITORING <s> We have developed an algorithm, referred to as spatio-temporal Markov random field, for traffic images at intersections. This algorithm models a tracking problem by determining the state of each pixel in an image and its transit, and how such states transit along both the x-y image axes as well as the time axes. Our algorithm is sufficiently robust to segment and track occluded vehicles at a high success rate of 93%-96%. This success has led to the development of an extendable robust event recognition system based on the hidden Markov model (HMM). The system learns various event behavior patterns of each vehicle in the HMM chains and then, using the output from the tracking system, identifies current event chains. The current system can recognize bumping, passing, and jamming. However, by including other event patterns in the training set, the system can be extended to recognize those other events, e.g., illegal U-turns or reckless driving. We have implemented this system, evaluated it using the tracking results, and demonstrated its effectiveness. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) TRAFFIC MONITORING <s> We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. 
Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations. <s> BIB002 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) TRAFFIC MONITORING <s> Most modern face recognition systems rely on a feature representation given by a hand-crafted image descriptor, such as Local Binary Patterns (LBP), and achieve improved performance by combining several such representations. In this paper, we propose deep learning as a natural source for obtaining additional, complementary representations. To learn features in high-resolution images, we make use of convolutional deep belief networks. Moreover, to take advantage of global structure in an object class, we develop local convolutional restricted Boltzmann machines, a novel convolutional learning model that exploits the global structure by not assuming stationarity of features across the image, while maintaining scalability and robustness to small misalignments. We also present a novel application of deep learning to descriptors other than pixel intensity values, such as LBP. In addition, we compare performance of networks trained using unsupervised learning against networks with random filters, and empirically show that learning weights not only is necessary for obtaining good multilayer representations, but also provides robustness to the choice of the network architecture parameters. Finally, we show that a recognition system using only representations obtained from deep learning can achieve comparable accuracy with a system using a combination of hand-crafted image descriptors. 
Moreover, by combining these representations, we achieve state-of-the-art results on a real-world face verification database. <s> BIB003 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) TRAFFIC MONITORING <s> We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For \(300 \times 300\) input, SSD achieves 74.3 % mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for \(512 \times 512\) input, SSD achieves 76.9 % mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd. <s> BIB004 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) TRAFFIC MONITORING <s> We present YOLO, a new approach to object detection. 
Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is far less likely to predict false detections where nothing exists. Finally, YOLO learns very general representations of objects. It outperforms all other detection methods, including DPM and R-CNN, by a wide margin when generalizing from natural images to artwork on both the Picasso Dataset and the People-Art Dataset. <s> BIB005 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) TRAFFIC MONITORING <s> Deep neural networks, albeit their great success on feature learning in various computer vision tasks, are usually considered as impractical for online visual tracking, because they require very long training time and a large number of training samples. In this paper, we present an efficient and very robust tracking algorithm using a single convolutional neural network (CNN) for learning effective feature representations of the target object in a purely online manner. Our contributions are multifold. First, we introduce a novel truncated structural loss function that maintains as many training samples as possible and reduces the risk of tracking error accumulation.
Second, we enhance the ordinary stochastic gradient descent approach in CNN training with a robust sample selection mechanism. The sampling mechanism randomly generates positive and negative samples from different temporal distributions, which are generated by taking the temporal relations and label noise into account. Finally, a lazy yet effective updating scheme is designed for CNN training. Equipped with this novel updating algorithm, the CNN model is robust to some long-existing difficulties in visual tracking, such as occlusion or incorrect detections, without loss of the effective adaption for significant appearance changes. In the experiment, our CNN tracker outperforms all compared state-of-the-art methods on two recently proposed benchmarks, which in total involve over 60 video sequences. The remarkable performance improvement over the existing trackers illustrates the superiority of the feature representations, which are learned purely online via the proposed deep learning framework. <s> BIB006 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) TRAFFIC MONITORING <s> This paper presents to the best of our knowledge the first end-to-end object tracking approach which directly maps from raw sensor input to object tracks in sensor space without requiring any feature engineering or system identification in the form of plant or sensor models. Specifically, our system accepts a stream of raw sensor data at one end and, in real-time, produces an estimate of the entire environment state at the output including even occluded objects. We achieve this by framing the problem as a deep learning task and exploit sequence models in the form of recurrent neural networks to learn a mapping from sensor measurements to object tracks. In particular, we propose a learning method based on a form of input dropout which allows learning in an unsupervised manner, only based on raw, occluded sensor data without access to ground-truth annotations. 
We demonstrate our approach using a synthetic dataset designed to mimic the task of tracking objects in 2D laser data -- as commonly encountered in robotics applications -- and show that it learns to track many dynamic objects despite occlusions and the presence of sensor noise. <s> BIB007 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) TRAFFIC MONITORING <s> With the development of intelligent applications (e.g., self-driving, real-time emotion recognition, etc), there are higher requirements for the cloud intelligence. However, cloud intelligence depends on the multi-modal data collected by user equipments (UEs). Due to the limited capacity of network bandwidth, offloading all data generated from the UEs to the remote cloud is impractical. Thus, in this article, we consider the challenging issue of achieving a certain level of cloud intelligence while reducing network traffic. In order to solve this problem, we design a traffic control algorithm based on label-less learning on the edge cloud, which is dubbed as LLTC. By the use of the limited computing and storage resources at edge cloud, LLTC evaluates the value of data, which will be offloaded. Specifically, we first give a statement of the problem and the system architecture. Then, we design the LLTC algorithm in detail. Finally, we set up the system testbed. Experimental results show that the proposed LLTC can guarantee the required cloud intelligence while minimizing the amount of data transmission. <s> BIB008 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) TRAFFIC MONITORING <s> This paper presents a novel approach for tracking static and dynamic objects for an autonomous vehicle operating in complex urban environments. Whereas traditional approaches for tracking often feature numerous hand-engineered stages, this method is learned end-to-end and can directly predict a fully unoccluded occupancy grid from raw laser input. 
We employ a recurrent neural network to capture the state and evolution of the environment, and train the model in an entirely unsupervised manner. In doing so, our use case compares to model-free, multi-object tracking although we do not explicitly perform the underlying data-association process. Further, we demonstrate that the underlying representation learned for the tracking task can be leveraged via inductive transfer to train an object detector in a data efficient manner. We motivate a number of architectural features and show the positive contribution of dilated convolutions, dynamic and static memory units to the task of tracking and classifying complex... <s> BIB009 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) TRAFFIC MONITORING <s> Vision-based detection of road accidents using traffic surveillance video is a highly desirable but challenging task. In this paper, we propose a novel framework for automatic detection of road accidents in surveillance videos. The proposed framework automatically learns feature representation from the spatiotemporal volumes of raw pixel intensity instead of traditional hand-crafted features. We consider the accident of the vehicles as an unusual incident. The proposed framework extracts deep representation using denoising autoencoders trained over the normal traffic videos. The possibility of an accident is determined based on the reconstruction error and the likelihood of the deep representation. For the likelihood of the deep representation, an unsupervised model is trained using one class support vector machine. Also, the intersection points of the vehicle’s trajectories are used to reduce the false alarm rate and increase the reliability of the overall system. We evaluated out proposed approach on real accident videos collected from the CCTV surveillance network of Hyderabad City in India. The experiments on these real accident videos demonstrate the efficacy of the proposed approach. <s> BIB010
|
One of the most attractive research fields in smart transportation is the development of automated traffic monitoring systems, which play an important role both in reducing the workload of human operators and in warning drivers of dangerous situations BIB001 , BIB008 . Traffic video analytics has become an important part of intelligent traffic monitoring systems. In the following, we present how deep learning is applied to traffic video analytics from three perspectives: object detection, object tracking, and face recognition. Object detection has been applied in a wide range of scenarios, such as pedestrian detection, on-road vehicle detection, and unattended object detection. Applying deep convolutional neural networks together with multi-scale strategies has significantly improved both accuracy and speed - BIB004 . Ren et al. introduce a region proposal network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. Redmon et al. BIB005 frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. Liu et al. BIB004 discretize the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. Object tracking aims to locate a target in a video sequence given its location in the first frame, and has been applied in surveillance systems. It is important to automatically track suspected people or target vehicles for safety monitoring, urban flow management, and autonomous driving BIB006 , BIB002 . Vincent et al. BIB002 explore an original strategy for building deep networks by stacking layers of denoising autoencoders, which are trained locally to denoise corrupted versions of their inputs. Li et al. BIB006 present an efficient and robust tracking algorithm that uses a single CNN to learn effective feature representations of the target object.
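The regression formulation of detection described above (BIB005) can be illustrated with a toy decoder: the network emits an S×S grid where each cell regresses a box and class probabilities, and a confidence threshold decides which cells fire. The grid shape, threshold, and parameterization below are illustrative assumptions, not the exact BIB005 design.

```python
import numpy as np

def decode_grid(pred, conf_thresh=0.5):
    """Decode a toy YOLO-style output grid into boxes and class labels.

    pred has shape (S, S, 5 + C): per cell (x, y, w, h, objectness)
    followed by C class probabilities; x, y are offsets within the cell.
    This is an illustrative sketch, not the exact BIB005 parameterization.
    """
    S = pred.shape[0]
    boxes = []
    for i in range(S):
        for j in range(S):
            x, y, w, h, obj = pred[i, j, :5]
            if obj < conf_thresh:
                continue  # cell predicts no confident object
            cls = int(np.argmax(pred[i, j, 5:]))
            # convert cell-relative center to image-relative coordinates
            cx, cy = (j + x) / S, (i + y) / S
            boxes.append((cx, cy, float(w), float(h), cls, float(obj)))
    return boxes

# Toy example: a 2x2 grid with 2 classes; exactly one confident cell.
pred = np.zeros((2, 2, 7))
pred[0, 1, :5] = [0.5, 0.5, 0.2, 0.3, 0.9]  # object centered in cell (0, 1)
pred[0, 1, 5:] = [0.1, 0.8]                 # class 1 is most likely
boxes = decode_grid(pred)
print(boxes)
```

Because every cell is decoded in a single pass over the output tensor, the whole pipeline stays one network evaluation, which is what makes this family of detectors fast.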
To directly map from raw sensor input to object tracks in sensor space without requiring any feature engineering or system identification, end-to-end object tracking approaches based on recurrent neural networks (RNNs) have been proposed BIB007 , BIB009 . Singh and Mohan BIB010 propose a framework for the automatic detection of road accidents in surveillance videos, which uses a stacked denoising autoencoder (SDAE) to learn feature representations from the spatio-temporal volumes of raw pixel intensity instead of traditional hand-crafted features. Face recognition and detection techniques BIB003 - can be used to identify and track drivers and pedestrians.
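The reconstruction-error decision rule behind BIB010 can be sketched compactly. A linear autoencoder with one hidden unit is equivalent to projecting onto the top principal component, so the sketch below substitutes a closed-form SVD projection for the trained denoising autoencoder: the threshold is calibrated on normal data only, and frames that reconstruct poorly are flagged as anomalous. The data, dimensions, and threshold percentile are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal traffic" features lie near a 1-D subspace; anomalies do not.
normal = rng.normal(size=(200, 1)) @ np.array([[1.0, 0.5, -0.25]])
normal += 0.01 * rng.normal(size=normal.shape)

# Closed-form stand-in for the autoencoder: project onto the top
# principal component of the normal data.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
codebook = vt[:1]  # top component

def reconstruction_error(x):
    centered = x - mean
    recon = centered @ codebook.T @ codebook
    return np.linalg.norm(centered - recon, axis=1)

# Calibrate the threshold on normal data only (unsupervised).
thresh = np.percentile(reconstruction_error(normal), 99)

anomaly = np.array([[5.0, -4.0, 3.0]])  # does not fit the normal subspace
print(reconstruction_error(anomaly) > thresh)
```

The same unsupervised recipe scales up when the linear projection is replaced by a deep (denoising) autoencoder trained on normal traffic video, as in BIB010.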
|
A Survey on Deep Learning Empowered IoT Applications <s> 3) AUTONOMOUS DRIVING <s> We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> 3) AUTONOMOUS DRIVING <s> Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road/traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. 
To demonstrate this, we train a deep Convolutional Neural Network using recording from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website. <s> BIB002 </s> A Survey on Deep Learning Empowered IoT Applications <s> 3) AUTONOMOUS DRIVING <s> Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we presented a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving. <s> BIB003 </s> A Survey on Deep Learning Empowered IoT Applications <s> 3) AUTONOMOUS DRIVING <s> Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or “temporally deep”, are effective for tasks involving sequences, visual and otherwise. 
We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they can be compositional in spatial and temporal “layers”. Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized. <s> BIB004 </s> A Survey on Deep Learning Empowered IoT Applications <s> 3) AUTONOMOUS DRIVING <s> We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. 
::: The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. ::: Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. ::: We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS). <s> BIB005 </s> A Survey on Deep Learning Empowered IoT Applications <s> 3) AUTONOMOUS DRIVING <s> Object detection is a crucial task for autonomous driving. In addition to requiring high accuracy to ensure safety, object detection for autonomous driving also requires real-time inference speed to guarantee prompt vehicle control, as well as small model size and energy efficiency to enable embedded system deployment. ::: In this work, we propose SqueezeDet, a fully convolutional neural network for object detection that aims to simultaneously satisfy all of the above constraints. In our network, we use convolutional layers not only to extract feature maps but also as the output layer to compute bounding boxes and class probabilities. 
The detection pipeline of our model only contains a single forward pass of a neural network, thus it is extremely fast. Our model is fully-convolutional, which leads to a small model size and better energy efficiency. While achieving the same accuracy as previous baselines, our model is 30.4x smaller, 19.7x faster, and consumes 35.2x lower energy. The code is open-sourced at \url{this https URL}. <s> BIB006 </s> A Survey on Deep Learning Empowered IoT Applications <s> 3) AUTONOMOUS DRIVING <s> Robust perception-action models should be learned from training data with diverse visual appearances and realistic behaviors, yet current approaches to deep visuomotor policy learning have been generally limited to in-situ models learned from a single vehicle or simulation environment. We advocate learning a generic vehicle motion model from large scale crowd-sourced video data, and develop an end-to-end trainable architecture for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state. Our model incorporates a novel FCN-LSTM architecture, which can be learned from large-scale crowd-sourced vehicle action data, and leverages available scene segmentation side tasks to improve performance under a privileged learning paradigm. We provide a novel large-scale dataset of crowd-sourced driving behavior suitable for training our model, and report results predicting the driver action on held out sequences across diverse conditions. <s> BIB007
|
Autonomous driving is a crucial part of city automation. There are two major paradigms for vision-based autonomous driving systems: mediated perception approaches and behavior reflex approaches BIB002 . Systems based on mediated perception approaches compute a high-dimensional world representation; the idea is to recognize multiple driving-relevant objects BIB001 , BIB003 , such as lanes, traffic signs, traffic lights, cars, and pedestrians. Mediated perception approaches achieve state-of-the-art performance in autonomous driving. However, most of these systems rely on high-precision instruments, which brings unnecessarily high complexity and cost. Currently, autonomous driving systems focus more on real-time inference speed, small model size, and energy efficiency BIB006 . These self-driving systems are trained on driving videos to learn a mapping from input images to driving behaviors, or to construct a direct mapping from the sensory input to a driving action. The authors in BIB005 train a convolutional neural network to map raw pixels from a single front-facing camera directly to steering commands. Inspired by language models, the authors in BIB007 put forward a learning-based approach that trains an end-to-end FCN-LSTM network to predict multi-modal discrete and continuous driving behaviors. The system builds on the Long-term Recurrent Convolutional Network BIB004 and extracts the spatial and temporal structure of driving video.
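The pixels-to-steering data flow of BIB005 can be sketched as a tiny forward pass: convolutional features, a nonlinearity, pooling, and a linear regressor producing one continuous steering command. The layer sizes and random weights below are assumptions for illustration; the point is the end-to-end mapping, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d_valid(img, kernel):
    """Naive valid 2-D cross-correlation, for illustration only."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def steering_from_pixels(img, kernels, w_out, b_out):
    """Tiny stand-in for a pixels-to-steering CNN (BIB005-style):
    conv -> ReLU -> global average pool -> linear regressor."""
    feats = np.array([np.maximum(conv2d_valid(img, k), 0).mean()
                      for k in kernels])
    return float(feats @ w_out + b_out)

# Random weights: only the data flow matters here.
kernels = rng.normal(size=(4, 3, 3))
w_out = rng.normal(size=4)
angle = steering_from_pixels(rng.normal(size=(16, 16)), kernels, w_out, 0.0)
print(angle)  # a single continuous steering command
```

Training such a network on (frame, human steering angle) pairs is what lets it discover road features without any hand-designed lane-detection stage.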
|
A Survey on Deep Learning Empowered IoT Applications <s> 1) MANUFACTURE INSPECTION <s> This paper presents a surface inspection prototype of an automatic system for precision ground metallic surfaces, in this case bearing rolls. The surface reflectance properties are modeled and verified with optical experiments. The aim being to determine the optical arrangement for illumination and observation, where the contrast between errors and intact surface is maximized. A new adaptive threshold selection algorithm for segmentation is presented. Additionally, is included an evaluation of a large number of published sequential search algorithms for selection of the best subset of features for the classification with a comparison of their computational requirements. Finally, the results of classification for 540 flaw images are presented. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) MANUFACTURE INSPECTION <s> Modern inspection systems based on smart sensor technology like image processing and machine vision have been widely spread into several fields of industry such as process control, manufacturing, and robotics applications in factories. Machine learning for smart sensors is a key element for the visual inspection of parts on a product line that has been manually inspected by people. This paper proposes a method for automatic visual inspection of dirties, scratches, burrs, and wears on surface parts. Imaging analysis with CNN (Convolution Neural Network) of training samples is applied to confirm the defect’s existence in the target region of an image. In this paper, we have built and tested several types of deep networks of different depths and layer nodes to select adequate structure for surface defect inspection. A single CNN based network is enough to test several types of defects on textured and non-textured surfaces while conventional machine learning methods are separately applied according to type of each surface. 
Experiments for surface defects in real images prove the possibility for use of imaging sensors for detection of different types of defects. In terms of energy saving, the experiment result shows that proposed method has several advantages in time and cost saving and shows higher performance than traditional manpower inspection system. <s> BIB002 </s> A Survey on Deep Learning Empowered IoT Applications <s> 1) MANUFACTURE INSPECTION <s> With the rapid development of Internet of things devices and network infrastructure, there have been a lot of sensors adopted in the industrial productions, resulting in a large size of data. One of the most popular examples is the manufacture inspection, which is to detect the defects of the products. In order to implement a robust inspection system with higher accuracy, we propose a deep learning based classification model in this paper, which can find the possible defective products. As there may be many assembly lines in one factory, one huge problem in this scenario is how to process such big data in real time. Therefore, we design our system with the concept of fog computing. By offloading the computation burden from the central server to the fog nodes, the system obtains the ability to deal with extremely large data. There are two obvious advantages in our system. The first one is that we adapt the convolutional neural network model to the fog computing environment, which significantly improves its computing efficiency. The other one is that we work out an inspection model, which can simultaneously indicate the defect type and its degree. The experiments well prove that the proposed method is robust and efficient. <s> BIB003
|
In order to accurately inspect and assess the quality of products, various visual inspection approaches, many of which are based on traditional machine learning techniques, have been proposed to extract representative features with expert knowledge so as to detect product defects in large-scale production BIB001 . Recently, deep learning has become a powerful tool for visual inspection. The authors in BIB003 propose a deep-learning-based classification model to implement a robust inspection system; a CNN-based system is adapted to the fog computing environment, which significantly improves its computing efficiency. A generic CNN-based approach is proposed in BIB002 to extract patch features and predict defect areas via thresholding and segmentation for surface defect inspection tasks.
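The thresholding-and-segmentation step from BIB002 reduces to a simple operation once the CNN has produced a per-pixel (or per-patch) defect score map: binarize the map and localize the defective region. The score map, threshold, and bounding-box output below are illustrative assumptions.

```python
import numpy as np

def segment_defects(heatmap, thresh=0.5):
    """Threshold a per-pixel defect score map (e.g. from a patch-wise
    CNN, as in BIB002) and return the bounding box of the defective
    region, or None when the surface is judged clean."""
    mask = heatmap > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return tuple(int(v) for v in (ys.min(), xs.min(), ys.max(), xs.max()))

# Toy score map: high scores mark a scratch-like defect.
heatmap = np.zeros((8, 8))
heatmap[2:5, 3:6] = 0.9
print(segment_defects(heatmap))  # (2, 3, 4, 5)
```

In a fog deployment such as BIB003, this cheap post-processing can run on the fog node next to the camera, so only defect reports rather than raw frames traverse the network.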
|
A Survey on Deep Learning Empowered IoT Applications <s> 2) FAULT ASSESSMENT <s> Feature extraction is an important step in conventional vibration-based fault diagnosis methods. However, the features are usually empirically extracted, leading to inconsistent performance. This paper presents a new automatic and intelligent fault diagnosis method based on convolution neural network. Firstly, the vibration signal is processed by wavelet transform into a multi-scale spectrogram image to manifest the fault characteristics. Next, the spectrogram image is directly fed into convolution neural network to learn the invariant representation for vibration signal and recognize the fault status for fault diagnosis. During model construction, rectifier neural activation function and dropout layer are incorporated into convolution neural network to improve the computational efficiency and model generalization. Training data is input into traditional convolutional neural network, ReLU network, Dropout network and enhanced convolutional neural network. The classification results are reached by inputting training data and test data. Then, comparison is made on the analytical results of the four networks to conclude that the preciseness of the classification result of the enhanced convolutional neural network achieves as high as 96%, 8% higher than traditional convolutional neural network. Through adjusting p, the holding probability of Dropout, 3 kinds of sparse neural networks are trained and the classification results are compared. It finds, when p=0.4, the enhanced convolutional neural network achieves the best classification performance, 5% and 4% higher than ReLU network and Dropout network respectively. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) FAULT ASSESSMENT <s> This paper proposes a novel continuous sparse autoencoder (CSAE) which can be used in unsupervised feature learning. 
The CSAE adds Gaussian stochastic unit into activation function to extract features of nonlinear data. In this paper, CSAE is applied to solve the problem of transformer fault recognition. Firstly, based on dissolved gas analysis method, IEC three ratios are calculated by the concentrations of dissolved gases. Then IEC three ratios data is normalized to reduce data singularity and improve training speed. Secondly, deep belief network is established by two layers of CSAE and one layer of back propagation (BP) network. Thirdly, CSAE is adopted to unsupervised training and getting features. Then BP network is used for supervised training and getting transformer fault. Finally, the experimental data from IEC TC 10 dataset aims to illustrate the effectiveness of the presented approach. Comparative experiments clearly show that CSAE can extract features from the original data, and achieve a superior correct differentiation rate on transformer fault diagnosis. <s> BIB002 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) FAULT ASSESSMENT <s> Intelligent fault diagnosis is a promising tool to deal with mechanical big data due to its ability in rapidly and efficiently processing collected signals and providing accurate diagnosis results. In traditional intelligent diagnosis methods, however, the features are manually extracted depending on prior knowledge and diagnostic expertise. Such processes take advantage of human ingenuity but are time-consuming and labor-intensive. Inspired by the idea of unsupervised feature learning that uses artificial intelligence techniques to learn features from raw data, a two-stage learning method is proposed for intelligent diagnosis of machines. In the first learning stage of the method, sparse filtering, an unsupervised two-layer neural network, is used to directly learn features from mechanical vibration signals. 
In the second stage, softmax regression is employed to classify the health conditions based on the learned features. The proposed method is validated by a motor bearing dataset and a locomotive bearing dataset, respectively. The results show that the proposed method obtains fairly high diagnosis accuracies and is superior to the existing methods for the motor bearing dataset. Because of learning features adaptively, the proposed method reduces the need of human labor and makes intelligent fault diagnosis handle big data more easily. <s> BIB003
|
In order to implement smart manufacturing, it is crucial for a smart factory to monitor machinery conditions, identify incipient defects, diagnose the root causes of failures, and then incorporate this information into manufacturing production and control. In BIB001 , a wavelet-based CNN is proposed for automatic machinery fault diagnosis. The wavelet transform is used to convert the one-dimensional vibration signal into a two-dimensional spectrogram image, which is then fed into the CNN. In BIB002 , a continuous sparse autoencoder (CSAE) is presented, which adds a Gaussian stochastic unit into the activation function to extract nonlinear features of the input data. In BIB003 , a sparse-filtering-based two-layer neural network model is investigated for unsupervised feature learning, which is used to learn representative features from mechanical vibration signals.
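The signal-to-image step in BIB001 can be sketched as follows. BIB001 uses a wavelet transform; this sketch substitutes a plain short-time Fourier magnitude spectrogram to show the same idea of turning a 1-D vibration signal into a 2-D time-frequency image a CNN can consume. The window size, hop, and synthetic signal are assumptions.

```python
import numpy as np

def spectrogram(signal, win=64, hop=32):
    """Windowed FFT magnitudes of a 1-D signal, stacked into an image
    of shape (frequency bins, time frames)."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T

# Synthetic bearing signal: a 50 Hz tone plus noise, sampled at 1 kHz.
t = np.arange(1024) / 1000.0
sig = (np.sin(2 * np.pi * 50 * t)
       + 0.1 * np.random.default_rng(2).normal(size=t.size))
img = spectrogram(sig)
print(img.shape)  # (freq bins, time frames); ready to feed into a CNN
```

A fault that modulates the vibration spectrum then appears as a visual pattern in `img`, which is exactly what lets an image classifier take over the diagnosis.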
|
A Survey on Deep Learning Empowered IoT Applications <s> 2) MODEL TRAINING <s> High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) MODEL TRAINING <s> Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets. 
<s> BIB002 </s> A Survey on Deep Learning Empowered IoT Applications <s> 2) MODEL TRAINING <s> Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. <s> BIB003
|
Training a deep network involves cumbersome tasks. As we know, the depth determines the capacity of a deep learning network to extract key features. However, the vanishing gradient problem appears as models grow deeper, which deteriorates performance. To this end, Hinton et al. BIB001 propose an approach to pre-train models by stacking RBMs. In addition, the ReLU function, applied as a substitute for the sigmoid function, also contributes to mitigating the vanishing gradient problem. Overfitting is another serious problem faced in training deep models. The key solution is to use more data or to reduce the number of model parameters. One effective method is using convolutional kernels to reduce the number of parameters, and employing dropout BIB002 is also an alternative. Moreover, in recent years, major breakthroughs have been made in convolutional neural networks - BIB003 , and the number of layers in CNN models has grown from 5 to more than 200. Methods introduced in these classical convolutional neural networks (such as smaller convolutional kernels or batch normalization) remain valid when deep learning algorithms are applied to problems in the wireless network field.
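The dropout regularizer of BIB002 is short enough to show in full. The sketch below uses the common "inverted" variant: each unit is kept with probability p_keep during training and survivors are scaled by 1/p_keep, so the test-time network needs no rescaling. Shapes and the keep probability are illustrative.

```python
import numpy as np

def dropout(x, p_keep, rng, train=True):
    """Inverted dropout (BIB002): randomly zero units during training
    and scale the survivors by 1 / p_keep so the expected activation
    is unchanged; at test time the input passes through untouched."""
    if not train:
        return x
    mask = rng.random(x.shape) < p_keep
    return x * mask / p_keep

rng = np.random.default_rng(3)
h = np.ones((4, 8))  # a batch of hidden activations
out = dropout(h, p_keep=0.5, rng=rng)
# Surviving units are doubled, dropped ones are zero, so the expected
# value of each activation stays 1.0.
print(np.unique(out))
```

Sampling a fresh mask every step trains an implicit ensemble of "thinned" networks, which is why dropout reduces co-adaptation and overfitting.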
|
A Survey on Deep Learning Empowered IoT Applications <s> 4) SYSTEM DESIGN <s> With the increasing commoditization of computer vision, speech recognition and machine translation systems and the widespread deployment of learning-based back-end technologies such as digital advertising and intelligent infrastructures, AI (Artificial Intelligence) has moved from research labs to production. These changes have been made possible by unprecedented levels of data and computation, by methodological advances in machine learning, by innovations in systems software and architectures, and by the broad accessibility of these technologies. ::: The next generation of AI systems promises to accelerate these developments and increasingly impact our lives via frequent interactions and making (often mission-critical) decisions on our behalf, often in highly personalized contexts. Realizing this promise, however, raises daunting challenges. In particular, we need AI systems that make timely and safe decisions in unpredictable environments, that are robust against sophisticated adversaries, and that can process ever increasing amounts of data across organizations and individuals without compromising confidentiality. These challenges will be exacerbated by the end of the Moore's Law, which will constrain the amount of data these technologies can store and process. In this paper, we propose several open research directions in systems, architectures, and security that can address these challenges and help unlock AI's potential to improve lives and society. <s> BIB001 </s> A Survey on Deep Learning Empowered IoT Applications <s> 4) SYSTEM DESIGN <s> With the increasing popularity of location-based services (LBSs), it is of paramount importance to preserve one’s location privacy. The commonly used location privacy preserving approach, location ${k}$ -anonymity, strives to aggregate the queries of ${k}$ nearby users within a so-called cloaked region via a trusted third-party anonymizer. 
As such, the probability to identify the location of every user involved is no more than ${1/k}$ , thus offering privacy preservation for users. One inherent limitation of ${k}$ -anonymity, however, is that all users involved are assumed to be trusted and report their real locations. When location injection attacks (LIAs) are conducted, where untrusted users inject fake locations (along with fake queries) to the anonymizer, the probability of disclosing one's location privacy could be much greater than ${1/k}$ , yielding a much higher risk of privacy leakage. To tackle this problem, in this paper we present ILLIA, the first work that enables ${k}$ -anonymity-based privacy preservation against LIA in continuous LBS queries. Central to the ILLIA idea is to explore the pattern of the users' mobility in continuous LBS queries. With a thorough understanding of the users' mobility similarity, a credibility-based ${k}$ -anonymity scheme is developed, such that ILLIA is able to defend against LIA without requiring advance knowledge of how fake locations are manipulated while still maintaining high quality of service. Both the effectiveness and the efficiency of ILLIA are validated by extensive simulations on the real-world dataset loc-Gowalla. <s> BIB002 </s> A Survey on Deep Learning Empowered IoT Applications <s> 4) SYSTEM DESIGN <s> The cloud-based Internet of Things (IoT) develops rapidly but suffers from large latency and backhaul bandwidth requirements; the technology of fog computing and caching has emerged as a promising paradigm for IoT to provide proximity services, and thus reduce service latency and save backhaul bandwidth. However, the performance of the fog-enabled IoT depends on the intelligent and efficient management of various network resources, and consequently the synergy of caching, computing, and communications becomes the big challenge.
This paper simultaneously tackles the issues of content caching strategy, computation offloading policy, and radio resource allocation, and proposes a joint optimization solution for the fog-enabled IoT. Since wireless signals and service requests have stochastic properties, we use the actor–critic reinforcement learning framework to solve the joint decision-making problem with the objective of minimizing the average end-to-end delay. The deep neural network (DNN) is employed as the function approximator to estimate the value functions in the critic part due to the extremely large state and action space in our problem. The actor part uses another DNN to represent a parameterized stochastic policy and improves the policy with the help of the critic. Furthermore, the natural policy gradient method is used to avoid converging to a local maximum. Using numerical simulations, we demonstrate the learning capacity of the proposed algorithm and analyze the end-to-end service latency. <s> BIB003 </s> A Survey on Deep Learning Empowered IoT Applications <s> 4) SYSTEM DESIGN <s> Dynamic Adaptive Streaming over HTTP (DASH) has been widely adopted to deal with such user diversity as network conditions and device capabilities. In DASH systems, computation-intensive transcoding is the key technology enabling video rate adaptation, and the cloud has become a preferred solution for massive video transcoding. Yet the cloud-based solution has the following two drawbacks. First, a video stream now has multiple versions after transcoding, which increases the network traffic traversing the core network. Second, the transcoding strategy is normally fixed and thus not flexible enough to adapt to the dynamic change of viewers. Considering that mobile users, who normally experience dynamic network conditions from time to time, have come to occupy a very large portion of the total users, adaptive wireless transcoding is of great importance.
To this end, we propose an adaptive wireless video transcoding framework based on the emerging edge computing paradigm by deploying edge transcoding servers close to base stations. With this design, the core network only needs to send the source video stream to the edge transcoding server rather than one stream for each viewer, and thus the network traffic across the core network is significantly reduced. Meanwhile, our edge transcoding server cooperates with the base station to transcode videos at a finer granularity according to the obtained users’ channel conditions, which smartly adjusts the transcoding strategy to tackle with time-varying wireless channels. In order to improve the bandwidth utilization, we also develop efficient bandwidth adjustment algorithms that adaptively allocate the spectrum resources to individual mobile users. We validate the effectiveness of our proposed edge computing based framework through extensive simulations, which confirm the superiority of our framework. <s> BIB004
|
There is an emerging trend to design cloud-edge learning systems that span edge devices and the cloud. A cloud-edge system can leverage the edge to reduce latency, improve safety and security, and implement intelligent data retention techniques BIB002 . Furthermore, it can leverage the cloud to share data across edge devices, train sophisticated computation-intensive models, and make high-quality decisions BIB001 . Recently, there have been several studies on the combination of deep learning and edge computing BIB003 - BIB004 . Edge devices can be highly heterogeneous in terms of resource capabilities and software platforms, which makes application development complicated. The update cycles of the hardware and software on edge devices are much slower than in the cloud. Moreover, storing all collected data is impractical, because storage capacity grows more slowly than the volume of data BIB001 . It is highly desirable to address these problems and build a robust cloud-edge learning system.
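To make the latency trade-off at the heart of such a system concrete, the per-request placement decision can be sketched as a simple rule; the function name, parameters, and numbers below are hypothetical illustrations, not the policy of any surveyed system:

```python
# Hypothetical per-request placement rule for a cloud-edge learning system;
# all names and timing values are invented for illustration.

def place_request(edge_ms, cloud_ms, rtt_ms, deadline_ms):
    """Pick where one inference request should run.

    edge_ms:  inference time of the small on-device model
    cloud_ms: inference time of the large cloud model
    rtt_ms:   network round trip to the cloud
    """
    edge_latency = edge_ms                # no network hop
    cloud_latency = cloud_ms + rtt_ms     # better model, but pays the RTT
    best = min(edge_latency, cloud_latency)
    if best > deadline_ms:
        return "drop"                     # neither option meets the deadline
    return "edge" if edge_latency <= cloud_latency else "cloud"

print(place_request(30, 10, 40, 100))    # -> edge  (30 ms beats 10 + 40 ms)
print(place_request(200, 20, 30, 100))   # -> cloud (50 ms beats 200 ms)
print(place_request(200, 80, 50, 100))   # -> drop  (best option needs 130 ms)
```

Real systems such as the actor-critic scheduler of BIB003 learn this decision jointly with caching and radio resource allocation rather than using a fixed threshold rule.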
|
Text-to-picture tools, systems, and approaches: a survey <s> Introduction <s> The KidsRoom is a perceptually-based, interactive, narrative playspace for children. Images, music, narration, light, and sound effects are used to transform a normal child's bedroom into a fantasy land where children are guided through a reactive adventure story. The fully automated system was designed with the following goals: (1) to keep the focus of user action and interaction in the physical and not virtual space; (2) to permit multiple, collaborating people to simultaneously engage in an interactive experience combining both real and virtual objects; (3) to use computer-vision algorithms to identify activity in the space without requiring the participants to wear any special clothing or devices; (4) to use narrative to constrain the perceptual recognition, and to use perceptual recognition to allow participants to drive the narrative; and (5) to create a truly immersive and interactive room environment. ::: ::: We believe the KidsRoom is the first multi-person, fully-automated, interactive, narrative environment ever constructed using non-encumbering sensors. This paper describes the KidsRoom, the technology that makes it work, and the issues that were raised during the system's development. 1 ::: ::: A demonstration of the project, which complements the material presented here and includes videos, images, and sounds from each part of the story is available at . <s> BIB001 </s> Text-to-picture tools, systems, and approaches: a survey <s> Introduction <s> This paper describes a system, called Interactive e-Hon, for helping children understand difficult content. It works by transforming text into an easily understandable storybook style with animation and dialogue. 
In this system, easy-to-understand content is created by a semantic tag generator through natural language processing, an animation generator using an animation archive and animation tables, a dialogue generator using semantic tag information, and a story generator using the Soar AI engine. Through the results of an experiment, this paper describes the advantages of attracting interest, supporting and facilitating understanding, and improving parent-child communication by using Interactive e-Hon. <s> BIB002 </s> Text-to-picture tools, systems, and approaches: a survey <s> Introduction <s> Multimedia learning is the process of building mental representation from words associated with images. Due to the intuitiveness and vividness of visual illustration, many text-to-picture systems have been proposed. However, we observe some common limitations in the existing systems, such as that the retrieved pictures may not be suitable for educational purposes. Also, finding pedagogic illustrations still requires manual work, which is difficult and time-consuming. The commonly used systems based on the best keyword selection and the best sentence selection may suffer from loss of information. In this paper, we present an Arabic multimedia text-to-picture mobile learning system that is based on conceptual graph matching. Using a knowledge base, a conceptual graph is built from the text accompanied with the pictures in the multimedia repository as well as for the text entered by the user. Based on the matching scores of both conceptual graphs, matched pictures are assigned relative rankings. The proposed system demonstrated its effectiveness in the domain of Arabic stories; however, it can be easily shifted to any educational domain to yield pedagogical illustrations for organizational or institutional needs.
Comparisons with the current state-of-the-art systems, based on the best keyword selection and the best sentence selection techniques, have demonstrated significant improvements in the performance. In addition, to facilitate educational needs, conceptual graph visualization and visual illustrative assessment modules are also developed. The conceptual graph visualization enables learners to discover relationships between words, and the visual illustrative assessment allows the system to automatically assess the performance of a learner. The profound user studies demonstrated the efficiency of the proposed multimedia learning system. <s> BIB003
|
A text-to-picture system automatically converts a natural language text into pictures representing the meaning of that text. The pictures can be static illustrations such as images or dynamic illustrations such as animations. Among the early systems proposed are KidsRoom BIB001 34, BIB002 and CONFUCIUS , the latter being an interactive, multimodal storytelling system. For simple stories, in , the author proposed a system that can assist readers with intellectual disabilities in improving their understanding of short texts. A recent multimedia text-to-picture mobile system for Arabic stories based on conceptual graph matching has been proposed in BIB003 . For every input text, matching scores are calculated based on the intersection between the conceptual graph of the best selected keywords/sentences from the input text and the conceptual graphs of the pictures; in turn, matched pictures are assigned relative rankings. The best picture is selected based on the maximum intersection between the two graphs.
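The maximum-intersection criterion used by the conceptual-graph matcher of BIB003 can be caricatured by representing each graph as a set of (concept, relation, concept) triples and scoring pictures by the overlap; the triples and picture IDs below are invented for illustration, and the real system builds far richer graphs from a knowledge base:

```python
# Sketch of conceptual-graph matching reduced to triple-set intersection.

def match_score(text_triples, picture_triples):
    # size of the intersection between the two "graphs"
    return len(set(text_triples) & set(picture_triples))

def rank_pictures(text_triples, pictures):
    # pictures: picture_id -> set of triples; highest overlap first
    return sorted(pictures,
                  key=lambda p: match_score(text_triples, pictures[p]),
                  reverse=True)

text = {("boy", "rides", "bicycle"), ("bicycle", "has-color", "red")}
pics = {
    "img1": {("boy", "rides", "bicycle")},
    "img2": {("girl", "reads", "book")},
    "img3": {("boy", "rides", "bicycle"), ("bicycle", "has-color", "red")},
}
print(rank_pictures(text, pics))  # -> ['img3', 'img1', 'img2']
```

The picture whose graph shares the most structure with the text's graph ranks first, mirroring the best-picture selection rule described above.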
|
Text-to-picture tools, systems, and approaches: a survey <s> Domain application of text-to-picture systems <s> Research conducted primarily during the 1970s and 1980s supported the assertion that carefully constructed text illustrations generally enhance learners' performance on a variety of text-dependent cognitive outcomes. Research conducted throughout the 1990s still strongly supports that assertion. The more recent research has extended pictures-in-text conclusions to alternative media and technological formats and has begun to explore more systematically the “whys,” “whens,” and “for whoms” of picture facilitation, in addition to the “whethers” and “how muchs.” Consideration is given here to both more and less conventional types of textbook illustration, with several “tenets for teachers” provided in relation to each type. <s> BIB001 </s> Text-to-picture tools, systems, and approaches: a survey <s> Domain application of text-to-picture systems <s> This paper introduces the use of Wikipedia as a resource for automatic keyword extraction and word sense disambiguation, and shows how this online encyclopedia can be used to achieve state-of-the-art results on both these tasks. The paper also shows how the two methods can be combined into a system able to automatically enrich a text with links to encyclopedic knowledge. Given an input document, the system identifies the important concepts in the text and automatically links these concepts to the corresponding Wikipedia pages. Evaluations of the system show that the automatic annotations are reliable and hardly distinguishable from manual annotations. <s> BIB002 </s> Text-to-picture tools, systems, and approaches: a survey <s> Domain application of text-to-picture systems <s> This paper addresses and evaluates the hypothesis that pictorial representations can be used to effectively convey simple sentences across language barriers. The paper makes two main contributions. 
First, it proposes an approach to augmenting dictionaries with illustrative images using volunteer contributions over the Web. The paper describes the PicNet illustrated dictionary, and evaluates the quality and quantity of the contributions collected through several online activities. Second, starting with this illustrated dictionary, the paper describes a system for the automatic construction of pictorial representations for simple sentences. Comparative evaluations show that a considerable amount of understanding can be achieved using visual descriptions of information, with evaluation figures within a comparable range of those obtained with linguistic representations produced by an automatic machine translation system. <s> BIB003 </s> Text-to-picture tools, systems, and approaches: a survey <s> Domain application of text-to-picture systems <s> Autism and dyslexia are both developmental disorders of neural origin. As we still do not understand the neural basis of these disorders fully, technology can take two approaches in helping those affected. The first is to compensate externally for a known difficulty and the other is to achieve the same function using a completely different means. To demonstrate the first option, we are developing a system to compensate for the auditory processing difficulties in case of dyslexia and to demonstrate the second option we propose a system for autism where we remove the need for traditional languages and instead use pictures for communication. <s> BIB004 </s> Text-to-picture tools, systems, and approaches: a survey <s> Domain application of text-to-picture systems <s> Textbooks have a direct bearing on the quality of education imparted to the students. Therefore, it is of paramount importance that the educational content of textbooks should provide rich learning experience to the students. Recent studies on understanding learning behavior suggest that the incorporation of digital visual material can greatly enhance learning. 
However, textbooks used in many developing regions are largely text-oriented and lack good visual material. We propose techniques for finding images from the web that are most relevant for augmenting a section of the textbook, while respecting the constraint that the same image is not repeated in different sections of the same chapter. We devise a rigorous formulation of the image assignment problem and present a polynomial time algorithm for solving the problem optimally. We also present two image mining algorithms that utilize orthogonal signals and hence obtain different sets of relevant images. Finally, we provide an ensembling algorithm for combining the assignments. To empirically evaluate our techniques, we use a corpus of high school textbooks in use in India. Our user study utilizing the Amazon Mechanical Turk platform indicates that the proposed techniques are able to obtain images that can help increase the understanding of the textbook material. <s> BIB005 </s> Text-to-picture tools, systems, and approaches: a survey <s> Domain application of text-to-picture systems <s> Instant messaging service is an important aspect of social media and sprung up in last decades. Traditional instant messaging service transfers information mainly based on textual message, while the visual message is ignored to a great extent. Such instant messaging service is thus far from satisfactory in all-around information communication. In this paper, we propose a novel visual assisted instant messaging scheme named Chat with illustration (CWI), which presents users visual messages associated with textual message automatically. When users start their chat, the system first identifies meaningful keywords from dialogue content and analyzes grammatical and logical relations. Then CWI explores keyword-based image search on a hierarchically clustering image database which is built offline. 
Finally, according to grammatical and logical relations, CWI assembles these images properly and presents an optimal visual message. With the combination of textual and visual message, users could get a more interesting and vivid communication experience. Especially for different native language speakers, CWI can help them cross language barrier to some degree. In addition, a visual dialogue summarization is also proposed, which help users recall the past dialogue. The in-depth user studies demonstrate the effectiveness of our visual assisted instant messaging scheme. <s> BIB006 </s> Text-to-picture tools, systems, and approaches: a survey <s> Domain application of text-to-picture systems <s> We outline the design of a visualizer, named Vishit, for texts in the Hindi language. The Hindi language is lingua franca in many states of India where people speak different languages. The visualized text serves as a universal language where seamless communication is needed by many people who speak different languages and have different cultures. Vishit consists of the following three major processing steps: language processing, knowledge base creation and scene generation. Initial results from the Vishit prototype are encouraging. <s> BIB007 </s> Text-to-picture tools, systems, and approaches: a survey <s> Domain application of text-to-picture systems <s> Multimedia learning is the process of building mental representation from words associated with images. Due to the intuitiveness and vividness of visual illustration, many texts to picture systems have been proposed. However, we observe some common limitations in the existing systems, such as the retrieved pictures may not be suitable for educational purposes. Also, finding pedagogic illustrations still requires manual work, which is difficult and time-consuming. The commonly used systems based on the best keyword selection and the best sentence selection may suffer from loss of information. 
In this paper, we present an Arabic multimedia text-to-picture mobile learning system that is based on conceptual graph matching. Using a knowledge base, a conceptual graph is built from the text accompanied with the pictures in the multimedia repository as well as for the text entered by the user. Based on the matching scores of both conceptual graphs, matched pictures are assigned relative rankings. The proposed system demonstrated its effectiveness in the domain of Arabic stories, however, it can be easily shifted to any educational domain to yield pedagogical illustrations for organizational or institutional needs. Comparisons with the current state-of-the-art systems, based on the best keyword selection and the best sentence selection techniques, have demonstrated significant improvements in the performance. In addition, to facilitate educational needs, conceptual graph visualization and visual illustrative assessment modules are also developed. The conceptual graph visualization enables learners to discover relationships between words, and the visual illustrative assessment allows the system to automatically assess the performance of a learner. The profound user studies demonstrated the efficiency of the proposed multimedia learning system. <s> BIB008 </s> Text-to-picture tools, systems, and approaches: a survey <s> Domain application of text-to-picture systems <s> As advanced technology emerges into learning, learning behavior has changed from printed books to e-books with diversified teaching plans, such as picture E-books. Although past research regards picture E-books as successful and effective for children, some studied have reported that the electronic feature may negatively affect children. Briefly, as children are attracted by the games or sounds in electronic books, this paper intends to fill the gap with the augmented reality (AR) technology. The imaginative capability is an important factor to stimulate potential and inspire creativity. 
The research field of imaginative capability is a critical element of the effect student creativity development in the future; however, to the best of our knowledge, little attention has been focused on imaginative capability with technology, to say nothing of learning in ubiquitous learning environments. To cope with this problem, this paper aims to present a picture E-book based on the AR technology and learning theories to build a learner-centered u-learning environment, and examines how to inspire students’ imaginative capability in three ways: text-based traditional learning, picture-based traditional learning, and picture E-books with the AR technology–in order to determine the differences of students’ learning motivations and imaginative capabilities. <s> BIB009 </s> Text-to-picture tools, systems, and approaches: a survey <s> Domain application of text-to-picture systems <s> Current Electronic Medical Records (EMR) systems contain large amounts of texts and various tables, to show numerous health data. This type of presentation limits people from promptly determining medical conditions or quickly finding desired information given the large volume of texts that needs to be read. We aim to tackle this as information visualization and extraction problems by creation of easy and intuitive user interfaces for visualizing medical information. We present both a novel graphical interface for visualizing a summary of medical information and an information extraction system that is able to extract and visualize the patient’s medical information from structured clinical notes. The graphical interface allows spatial-position based representations of medical information on human body images (front and back views) and temporal-time based representation of it through interconnected time axes. Medical histories are classified into several event categories and 6 physiological systems to enable efficient browsing of selected information. 
To extract visual tags from a given clinical note, we use natural language processing. We employ Metamap of 2014AA knowledge source for medical information extraction. We trained 1294 English clinical notes with a Time-Entity Detection model by Apache Open NLP to abstract the time expressions. Extracted location of illness is assigned into one of 6 physiological systems is displayed in spatial interface while the related data are denoted on a horizontal timeline of temporal interface. <s> BIB010
|
In text-to-picture systems, the visualized text can serve as a universal language for many applications such as education, language learning, literacy development, summarization of news articles, storytelling, data visualization, games, visual chat, rehabilitation of people with cerebral injuries, and support for children with delayed development. In the fields of education, language learning, and literacy development, an empirical study BIB001 strongly argues that illustrating text with pictures generally enhances learners' performance and plays a significant role in a variety of text-based cognitive outcomes. For instance, an Arabic multimedia mobile educational system BIB008 has been proposed that allows users to access learning materials and mine illustrative pictures for sentences. Moreover, it has been shown in BIB002 that representing and linking text to pictures can be very helpful for people to acquire knowledge rapidly and reduces the time needed to do so . Language learning for children or for those who study a foreign language can also be improved through pictures BIB003 . Recently, studies on learning behavior have suggested that incorporating digital visual material can greatly enhance learning BIB005 and promote imaginative capability, an important factor in inspiring creativity, as argued in BIB009 . In addition, encoding information in pictures has further benefits, such as enabling communication to and from preliterate or non-literate people BIB006 , improving language comprehension for people with language disorders, as argued in BIB003 , and supporting communication with children with autism BIB004 . Visualization and summarization of long text documents for rapid browsing, applications in literacy development BIB007 , and electronic medical records BIB010 are further areas where such systems are needed.
For instance, MindMapping, a well-known technique for taking notes and learning, has been introduced in work as a multi-level visualization concept that takes a text input and generates its corresponding MindMap visualization. Yet developing a text-to-picture system involves various requirements and challenges, and the next section reviews some of these difficulties.
|
Text-to-picture tools, systems, and approaches: a survey <s> Natural language understanding <s> The NewsViz system aims to enhance news reading experiences by integrating 30 seconds long Flash-animations into news article web pages depicting their content and emotional aspects. NewsViz interprets football match news texts automatically and creates abstract 2D visualizations. The user interface enables animators to further refine the animations. Here, we focus on the emotion extraction component of NewsViz which facilitates subtle background visualization. NewsViz detects moods from news reports. The original text is part-of-speech tagged and adjectives and/or nouns, the word types conveying most emotional meaning, are filtered out and labeled with an emotion and intensity value. Subsequently reoccurring emotions are joined into longer lasting moods and matched with appropriate animation presets. Different linguistic analysis methods were tested on NewsViz: word-by-word, sentence-based and minimum threshold summarization, to find a minimum number of occurrences of an emotion in forming a valid mood. NewsViz proved to be viable for the fixed domain of football news, grasping the overall moods and some more detailed emotions precisely. NewsViz offers an efficient technique to cater for the production of a large number of daily updated news stories. NewsViz bypasses the lack of information for background or environment depiction encountered in similar applications. Further development may refine the detection of emotion shifts through summarization with the full implementation of football and common linguistic knowledge. <s> BIB001 </s> Text-to-picture tools, systems, and approaches: a survey <s> Natural language understanding <s> A natural language interface exploits the conceptual simplicity and naturalness of the language to create a high-level user-friendly communication channel between humans and machines. 
One of the promising applications of such interfaces is generating visual interpretations of semantic content of a given natural language that can be then visualized either as a static scene or a dynamic animation. This survey discusses requirements and challenges of developing such systems and reports 26 graphical systems that exploit natural language interfaces and addresses both artificial intelligence and visualization aspects. This work serves as a frame of reference to researchers and to enable further advances in the field. <s> BIB002
|
Natural language understanding (NLU) usually amounts to transforming natural language from one representation into another . A mapping must be developed in order to disambiguate a description, discover the hidden semantics within it, and convert it into a formal knowledge representation (i.e., a semantic representation). This task presents a fundamental challenge; a brief overview of NLU issues can be found in . Furthermore, language rarely mentions common-sense facts about the world, including critically important spatial knowledge and negation BIB002 . Enabling a machine to understand natural language, which is variable, ambiguous, and imprecise, also involves feeding the machine grammatical structures (e.g., parts of speech), semantic relationships (e.g., emotional value and intensity), and visual descriptions (e.g., colors and motion direction) so that it can match the language with suitable graphics BIB001 . The following figure (Fig. 1) shows the terminology of NLU and natural language processing (NLP) .
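As an illustration of the kind of visual descriptors an NLU front end must hand to the graphics stage, here is a toy rule-based extractor; the word lists are invented stand-ins, and real systems rely on parsers, part-of-speech taggers, and lexical resources rather than fixed vocabularies:

```python
# Toy extractor for visual descriptors (colors, motion direction/speed).
COLORS = {"red", "blue", "green", "yellow"}
MOTIONS = {"runs": "fast", "walks": "slow", "flies": "up"}

def visual_descriptors(sentence):
    tokens = sentence.lower().strip(".").split()
    return {
        "colors": [t for t in tokens if t in COLORS],
        "motions": {t: MOTIONS[t] for t in tokens if t in MOTIONS},
    }

print(visual_descriptors("The red bird flies over the blue lake."))
# -> {'colors': ['red', 'blue'], 'motions': {'flies': 'up'}}
```

Even this crude pass shows why ambiguity matters: a fixed lookup cannot tell the color "red" from the surname "Red", which is exactly the disambiguation burden the mapping step above carries.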
|
Text-to-picture tools, systems, and approaches: a survey <s> Loose image-text association <s> In this paper, we approach the task of finding suitable images to illustrate text, from specific news stories to more generic blog entries. We have developed an automatic illustration system supported by multimedia information retrieval, that analyzes text and presents a list of candidate images to illustrate it. The system was tested on the SAPO-Labs media collection, containing almost two million images with short descriptions, and the MIRFlickr-25000 collection, with photos and user tags from Flickr. Visual content is described by the Joint Composite Descriptor and indexed by a Permutation-Prefix Index. Illustration is a three-stage process using textual search, score filtering and visual clustering. A preliminary evaluation using exhaustive and approximate visual searches demonstrates the capabilities of the visual descriptor and approximate indexing scheme used. <s> BIB001 </s> Text-to-picture tools, systems, and approaches: a survey <s> Loose image-text association <s> Nowadays, the amount of multimedia contents in microblogs is growing significantly. More than 20p of microblogs link to a picture or video in certain large systems. The rich semantics in microblogs provides an opportunity to endow images with higher-level semantics beyond object labels. However, this raises new challenges for understanding the association between multimodal multimedia contents in multimedia-rich microblogs. Disobeying the fundamental assumptions of traditional annotation, tagging, and retrieval systems, pictures and words in multimedia-rich microblogs are loosely associated and a correspondence between pictures and words cannot be established. 
To address the aforementioned challenges, we present the first study analyzing and modeling the associations between multimodal contents in microblog streams, aiming to discover multimodal topics from microblogs by establishing correspondences between pictures and words in microblogs. We first use a data-driven approach to analyze the new characteristics of the words, pictures, and their association types in microblogs. We then propose a novel generative model called the Bilateral Correspondence Latent Dirichlet Allocation (BC-LDA) model. Our BC-LDA model can assign flexible associations between pictures and words and is able to not only allow picture-word co-occurrence with bilateral directions, but also single modal association. This flexible association can best fit the data distribution, so that the model can discover various types of joint topics and generate pictures and words with the topics accordingly. We evaluate this model extensively on a large-scale real multimedia-rich microblogs dataset. We demonstrate the advantages of the proposed model in several application scenarios, including image tagging, text illustration, and topic discovery. The experimental results demonstrate that our proposed model can significantly and consistently outperform traditional approaches. <s> BIB002 </s> Text-to-picture tools, systems, and approaches: a survey <s> Loose image-text association <s> In the era of information overloading, information retrieval systems are vital applications. Many researchers try to enhance the search results by introducing new methods. Unlike the English language, some languages like Arabic have complex morphological aspects and lack both linguistic and semantic resources. This paper proposes a language-independent semantic-based information retrieval approach, which expands the user query using bi-gram term collocations. The proposed approach has two main contributions. 
First, the bi-gram term collocations employed to expand the user query are automatically mined from the text corpus, therefore there is no need for an external semantic resource. Second, due to the complexity of the language morphology, the system index is constructed using the corpus words to save the cost and effort of the stemming process. A system prototype for the Arabic language was implemented and evaluated versus the stem-based method. The experimental evaluation has been conducted on the scripts of the Arabic Holy Quran. The evaluation results demonstrate that the proposed system outperforms the stem-based method in terms of precision and recall. <s> BIB003 </s> Text-to-picture tools, systems, and approaches: a survey <s> Loose image-text association <s> Traditional Arabic text summarization (ATS) systems are based on bag-of-words representation, which involve a sparse and high-dimensional input data. Thus, dimensionality reduction is greatly needed to increase the power of features discrimination. In this paper, we present a new method for ATS using variational auto-encoder (VAE) model to learn a feature space from a high-dimensional input data. We explore several input representations such as term frequency (tf), tf-idf and both local and global vocabularies. All sentences are ranked according to the latent representation produced by the VAE. We investigate the impact of using VAE with two summarization approaches, graph-based and query-based approaches. Experiments on two benchmark datasets specifically designed for ATS show that the VAE using tf-idf representation of global vocabularies clearly provides a more discriminative feature space and improves the recall of other models. Experiment results confirm that the proposed method leads to better performance than most of the state-of-the-art extractive summarization approaches for both graph-based and query-based summarization approaches. <s> BIB004
|
With the growing upload and use of multimedia content, the pictures and texts involved are often only loosely associated, and a correspondence between them cannot always be established, as highlighted in BIB002 . As a result, the association between pictures and texts in multimedia contexts can hardly be recovered using traditional methods, since the text alone can span the entire natural-language vocabulary; more powerful methods and techniques are therefore needed. The authors in BIB001 likewise note the increasing difficulty of managing large multimedia sources when exploring and retrieving relevant information. Unlike English, the Arabic language has complex morphological aspects and lacks both linguistic and semantic resources BIB003 BIB004 , yet another challenge to be addressed in image-text association.
|
Text-to-picture tools, systems, and approaches: a survey <s> Motivation for the survey <s> Image classification and annotation are important problems in computer vision, but rarely considered together. Intuitively, annotations provide evidence for the class label, and the class label provides evidence for annotations. For example, an image of class highway is more likely annotated with words “road,” “car,” and “traffic” than words “fish,” “boat,” and “scuba.” In this paper, we develop a new probabilistic model for jointly modeling the image, its class label, and its annotations. Our model treats the class label as a global description of the image, and treats annotation terms as local descriptions of parts of the image. Its underlying probabilistic assumptions naturally integrate these two sources of information. We derive an approximate inference and estimation algorithms based on variational methods, as well as efficient approximations for classifying and annotating new images. We examine the performance of our model on two real-world image data sets, illustrating that a single model provides competitive annotation performance, and superior classification performance. <s> BIB001 </s> Text-to-picture tools, systems, and approaches: a survey <s> Motivation for the survey <s> Automatic description generation from natural images is a challenging problem that has recently received a large amount of interest from the computer vision and natural language processing communities. In this survey, we classify the existing approaches based on how they conceptualize this problem, viz., models that cast description as either generation problem or as a retrieval problem over a visual or multimodal representational space. We provide a detailed review of existing models, highlighting their advantages and disadvantages. 
Moreover, we give an overview of the benchmark image datasets and the evaluation measures that have been developed to assess the quality of machine-generated image descriptions. Finally we extrapolate future directions in the area of automatic image description generation. <s> BIB002 </s> Text-to-picture tools, systems, and approaches: a survey <s> Motivation for the survey <s> Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. Finally, given the recent surge of interest in this task, a competition was organized in 2015 using the newly released COCO dataset. We describe and analyze the various improvements we applied to our own baseline and show the resulting performance in the competition, which we won ex-aequo with a team from Microsoft Research. <s> BIB003 </s> Text-to-picture tools, systems, and approaches: a survey <s> Motivation for the survey <s> A natural language interface exploits the conceptual simplicity and naturalness of the language to create a high-level user-friendly communication channel between humans and machines. One of the promising applications of such interfaces is generating visual interpretations of semantic content of a given natural language that can be then visualized either as a static scene or a dynamic animation. 
This survey discusses requirements and challenges of developing such systems and reports 26 graphical systems that exploit natural language interfaces and addresses both artificial intelligence and visualization aspects. This work serves as a frame of reference to researchers and to enable further advances in the field. <s> BIB004 </s> Text-to-picture tools, systems, and approaches: a survey <s> Motivation for the survey <s> The vast array of information available on the Web makes it a challenge for readers to quickly browse through and decide about the importance and relevance of content. Interpreting large-volumes of data is particularly demanding for users with handheld devices in the social media and micro-blogging sphere. Various approaches address this challenge through text summarization, content ranking and personalized recommendation. We describe a family of techniques that help users understand text by automatically annotating text with pictures, referred to as text picturing. The objective is to find a set of pictures that cover the main concepts in a textual snippet. We provide an overview of text picturing, its constituent steps such as knowledge extraction, mapping, scene rendering, as well as application areas. We give a picturing-related literature overview, and list use-cases that offer IT professionals insight into how picturing techniques can be successfully incorporated into real world applications. <s> BIB005 </s> Text-to-picture tools, systems, and approaches: a survey <s> Motivation for the survey <s> We describe the investigation of automatic annotation of text with pictures, where knowledge extraction uses dependency parsing. Annotation of text with pictures, a form of knowledge visualization, can assist understanding. The problem statement is, given a corpus of images and a short passage of text, extract knowledge (or concepts), and then display that knowledge in pictures along with the text to help with understanding. 
A proposed solution framework includes a component to extract document concepts, a component to match document concepts with picture metadata, and a component to produce an amalgamated output of text and pictures. A proof-of-concept application based on the proposed framework provides encouraging results <s> BIB006 </s> Text-to-picture tools, systems, and approaches: a survey <s> Motivation for the survey <s> Affective image classification has drawn increasing research attentions in the affective computing and multimedia communities. Despite many solutions proposed in the literature, it remains a major challenge to bridge the semantic gap between visual features of images and their affective characteristics, partly due to the lack of adequate training samples, which can be largely ascribed to the all-consuming nature of affective image annotation. In this paper, we propose a novel affective image classification algorithm based on semi-supervised learning from web images (SSL-WI). This algorithm consists of four major steps, including color and texture feature extraction, baseline classifier construction, feature selection, and jointly using training images and retrieved web images to re-train the classifier. We have applied this algorithm, the baseline classifier that is not trained by web images, and two state-of-the-art algorithms to differentiating color images in a three-dimensional discrete emotional space. Our results suggest that, with the scheme of semi-supervised learning from web images, the proposed algorithm is able to produce more accurate affective image classification than other three approaches. <s> BIB007 </s> Text-to-picture tools, systems, and approaches: a survey <s> Motivation for the survey <s> Automatic image annotation(AIA) methods are considered as a kind of efficient schemes to solve the problem of semantic-gap between the original images and their semantic information. 
However, traditional annotation models work well only with finely crafted manual features. To address this problem, we combined the CNN feature of an image into our proposed model which we referred as SEM by using a famous CNN model-AlexNet. We extracted a CNN feature by removing its final layer and it is proved to be useful in our SEM model. Additionally, based on the experience of the traditional KNN models, we propose a model to address the problem of simultaneously addressing the image tag refinement and assignment while maintaining the simplicity of the KNN model. The proposed model divides the images which have similar features into a semantic neighbor group. Moreover, utilizing a self-defined Bayesian-based model, we distribute the tags which belong to the neighbor group to the test images according to the distance between the test image and the neighbors. At last, the experiments are performed on three typical image datasets corel5k, espGame and laprtc12, which verify the effectiveness of the proposed model. <s> BIB008
|
In the previous section, we highlighted the importance of text-to-picture systems in different contexts. Such systems are still needed today, since they have demonstrated their effectiveness at helping users communicate, learn, and interpret textual information efficiently BIB005 . In particular, in situations constrained by time, capability, or technology, these systems have demonstrated the ability to clarify meanings and improve learning outcomes, including for students with cognitive deficiencies, learning disabilities, or learning difficulties BIB002 . Moreover, many everyday tasks require a combination of textual and visual information (e.g., understanding slides while listening to a lecture in a classroom, or reading and understanding a story). Hence, this survey reviews well-known text-to-picture systems, tools, and approaches in order to investigate their performance and limitations. Our study is also motivated by the emerging techniques in NLP tools, computer vision, and their combination, which have made great advances toward their respective goals of analyzing and generating text and understanding images and videos. In the past, text-to-picture systems were viewed as a translation approach from a textual language to a visual language BIB006 requiring excessive manual effort. Nowadays, text-to-picture systems are seen as information retrieval systems BIB005 , which intensively involve emerging deep learning techniques, specifically Web image classification BIB001 BIB007 , generic data classification, image annotation BIB008 , image feature extraction, and image captioning BIB003 . Therefore, automatic text illustration with multimedia systems has become more feasible even with minimal manual effort, owing to the massive availability of both Web multimedia content and open-source tools.
However, the feasibility of such systems requires a combination of powerful techniques from different research areas to produce accurate results. In particular, with the new advances in deep convolutional neural networks and long short-term memory, neural networks are gradually enhancing these areas of research, and there are promising signs for developing successful text-to-picture systems. Although there are many working systems and applications that automatically generate images from a given sentence, text-to-picture systems for Arabic text are limited. Hence, more studies, reviews, and tools for analyzing Arabic sentences are required to recognize the potential for automatic Arabic text illustration and to open new horizons for research into the Arabic language in general. Indeed, a key objective of this review is to investigate the feasibility of "automatic visualization of Arabic story text through multimedia" using available tools and resources, particularly the automatic mapping of Arabic text to multimedia using Arabic language processing capabilities, and developing a successful text-to-picture system for educational purposes. It is also important to mention that there are many other systems, reviewed in BIB004 , that can convert general texts into high-level graphical representations, e.g., text-to-scene and text-to-animation systems. In this work, we focus on text-to-picture systems, approaches, and tools, the simplest form of visualization, and we review only those that have been published in scientific journals and conference proceedings. Online tools and applications are out of the scope of this survey. In the next section, we present a detailed overview of state-of-the-art text-to-picture systems. For each one, we elaborate on the system inputs and outputs, design methodology, language processes, and knowledge resources, and discuss the advantages and disadvantages.
|
Text-to-picture tools, systems, and approaches: a survey <s> Story picturing engine <s> Content-based image retrieval using region segmentation has been an active research area. We present IRM (Integrated Region Matching), a novel similarity measure for region-based image similarity comparison. The targeted image retrieval systems represent an image by a set of regions, roughly corresponding to objects, which are characterized by features reflecting color, texture, shape, and location properties. The IRM measure for evaluating overall similarity between images incorporates properties of all the regions in the images by a region-matching scheme. Compared with retrieval based on individual regions, the overall similarity approach reduces the influence of inaccurate segmentation, helps to clarify the semantics of a particular region, and enables a simple querying interface for region-based image retrieval systems. The IRM has been implemented as a part of our experimental SIMPLIcity image retrieval system. The application to a database of about 200,000 general-purpose images shows exceptional robustness to image alterations such as intensity variation, sharpness variation, color distortions, shape distortions, cropping, shifting, and rotation. Compared with several existing systems, our system in general achieves more accurate retrieval at higher speed. <s> BIB001 </s> Text-to-picture tools, systems, and approaches: a survey <s> Story picturing engine <s> In this paper, we present an approach towards automated story picturing based on mutual reinforcement principle. Story picturing refers to the process of illustrating a story with suitable pictures. In our approach, semantic keywords are extracted from the story text and an annotated image database is searched to form an initial picture pool. Thereafter, a novel image ranking scheme automatically determines the importance of each image. 
Both lexical annotations and visual content of an image play a role in determining its rank. Annotations are processed using the Wordnet to derive a lexical signature for each image. An integrated region based similarity is also calculated between each pair of images. An overall similarity measure is formed using lexical and visual features. In the end, a mutual reinforcement based rank is calculated for each image using the image similarity matrix. We also present a human behavior model based on a discrete state Markov process which captures the intuition for our technique. Experimental results have demonstrated the effectiveness of our scheme <s> BIB002 </s> Text-to-picture tools, systems, and approaches: a survey <s> Story picturing engine <s> We present an unsupervised approach to automated story picturing. Semantic keywords are extracted from the story, an annotated image database is searched. Thereafter, a novel image ranking scheme automatically determines the importance of each image. Both lexical annotations and visual content play a role in determining the ranks. Annotations are processed using the Wordnet. A mutual reinforcement-based rank is calculated for each image. We have implemented the methods in our Story Picturing Engine (SPE) system. Experiments on large-scale image databases are reported. A user study has been performed and statistical analysis of the results has been presented. <s> BIB003
|
The story picturing engine refers to the process of illustrating a story with suitable pictures BIB003 . The system is a pipeline of three processes: story processing and image selection, estimation of similarity, and reinforcement-based ranking. During the first process, descriptor keywords and proper nouns are extracted from the story to estimate a lexical similarity between keywords using WordNet. For this purpose, the stop words are eliminated using a manually crafted dictionary, and then a subset of the remaining words is selected based on a combination of a bag-of-words model and named-entity recognition. The bag-of-words model uses WordNet to determine the polysemy count of the words. Among them, nouns, adjectives, adverbs, and verbs with a low polysemy count (i.e., less ambiguity) are selected as descriptor keywords of a piece of text. Those with very high polysemy are eliminated because they offer little weight to the meaning conveyed by the story BIB002 . A simple named-entity recognizer is then used to extract the proper nouns. Images that contain at least one keyword and one named entity are retrieved from a local, annotated image database to form an initial image pool. The similarity between pairs of images is estimated from their visual and lexical features as a linear combination of the integrated region matching distance BIB001 and the WordNet hierarchy. Two forms of similarity measurement are applied in order to consider both visually similar images and images judged similar by their annotations. Eventually, the images are ranked based on a mutual reinforcement method, and the most highly ranked images are retrieved. This system is basically an image search engine that takes a given description as a query and retrieves and ranks the related images. Fig. 2 shows an example output for the story picturing engine.
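The mutual-reinforcement ranking step above can be sketched as a power iteration over the combined image-similarity matrix: an image is ranked highly if it is similar to other highly ranked images. The following is a minimal illustration only; the similarity values below are invented, and the weighting of visual versus lexical similarity used by the original system is not reproduced here.

```python
import numpy as np

def mutual_reinforcement_rank(similarity, iterations=100, tol=1e-9):
    """Rank images by power iteration on a symmetric similarity matrix:
    each image's score is repeatedly reinforced by its neighbors' scores."""
    n = similarity.shape[0]
    rank = np.full(n, 1.0 / n)                    # uniform initial ranks
    for _ in range(iterations):
        new_rank = similarity @ rank              # reinforce via similar images
        new_rank /= np.linalg.norm(new_rank, 1)   # normalize to sum to 1
        if np.abs(new_rank - rank).sum() < tol:   # stop once ranks stabilize
            break
        rank = new_rank
    return rank

# Toy combined (lexical + visual) similarities among 4 candidate images
S = np.array([
    [1.0, 0.8, 0.7, 0.1],
    [0.8, 1.0, 0.6, 0.2],
    [0.7, 0.6, 1.0, 0.1],
    [0.1, 0.2, 0.1, 1.0],
])
ranks = mutual_reinforcement_rank(S)
best = int(np.argmax(ranks))  # the image most central to the pool
```

The fixed point of this iteration is the dominant eigenvector of the similarity matrix, which is why the scheme favors images that form a tight, mutually similar cluster over outliers.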
Despite the good accuracy and performance of the story picturing engine, it only retrieves one picture for a given story and ignores many aspects such as temporal or spatial relationships. More advanced language processing techniques can be incorporated into the story picturing engine for richer performance; for instance, by integrating several image databases and building an online system that can accept stories provided by teachers and students BIB003 .
|
Text-to-picture tools, systems, and approaches: a survey <s> Text-to-picture synthesis system <s> In this paper, the authors introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications. <s> BIB001 </s> Text-to-picture tools, systems, and approaches: a survey <s> Text-to-picture synthesis system <s> Pictorial communication systems convert natural language text into pictures to assist people with limited literacy. We define a novel and challenging problem: picture layout optimization. Given an input sentence, we seek the optimal way to lay out word icons such that the resulting picture best conveys the meaning of the input sentence. To this end, we propose a family of intuitive "ABC" layouts, which organize icons in three groups. We formalize layout optimization as a sequence labeling problem, employing conditional random fields as our machine learning method. Enabled by novel applications of semantic role labeling and syntactic parsing, our trained model makes layout predictions that agree well with human annotators. In addition, we conduct a user study to compare our ABC layout versus the standard linear layout. The study shows that our semantically enhanced layout is preferred by non-native speakers, suggesting it has the potential to be useful for people with other forms of limited literacy, too. <s> BIB002 </s> Text-to-picture tools, systems, and approaches: a survey <s> Text-to-picture synthesis system <s> A natural language interface exploits the conceptual simplicity and naturalness of the language to create a high-level user-friendly communication channel between humans and machines. One of the promising applications of such interfaces is generating visual interpretations of semantic content of a given natural language that can be then visualized either as a static scene or a dynamic animation. 
This survey discusses requirements and challenges of developing such systems and reports 26 graphical systems that exploit natural language interfaces and addresses both artificial intelligence and visualization aspects. This work serves as a frame of reference to researchers and to enable further advances in the field. <s> BIB003
|
This system is a general-purpose text-to-picture system attempting to enhance communication. In its latest version, the system evolved to use semantic role labeling rather than keyword extraction with picturability scores, which measure the probability of finding a good image to represent a word. Initially, the system starts with key phrase extraction to eliminate the stop words and then uses a part-of-speech (POS) tagger to extract the nouns, proper nouns, and adjectives. These words are then fed to a logistic regression model to decide the probability of their picturability based on the ratio of their frequencies under a regular Web search versus an image search. A TextRank summarization algorithm BIB001 is applied to the computed probabilities, and the top 20 keywords are selected and used to form the key phrases, each having an assigned importance score. For image selection, the process is based on matching the extracted key phrases with the image annotations. First, the top 15 images for each key phrase are retrieved using Google Image Search. Next, each image is segmented into a set of disjoint regions using an image segmentation algorithm. Then, a vector of color features is calculated for all images and clustered in the feature space. Finally, the largest cluster is searched to find the region whose feature vector is closest to the center of this cluster. The image that contains this region is then selected as the best image for this key phrase. In the final stage, the system takes the text, the key phrases, and their associated images, and determines a 2D spatial layout that represents the meaning of the text by revealing the important objects and their relationships (see Fig. 3 ). The retrieved pictures are positioned based on three constraints: minimum overlap, centrality of important pictures, and closeness of the pictures in terms of the closeness of their associated key phrases.
For that reason, the authors designed a so-called ABC layout, such that each word and its associated image is tagged as being in the A, B, or C region using a linear-chain conditional random field. In contrast to the story picturing engine, this system associates a different picture with each extracted key phrase and presents the story as a sequence of related pictures. It treats the text-to-picture conversion problem as an optimization process, as mentioned in BIB003 , yet it still inherits the drawbacks of text-to-picture systems despite its improved performance compared to the story picturing engine. For complex sentences, the authors anticipate the use of text simplification to convert complex text into a set of appropriate inputs for their system. According to BIB003 , the simplicity and the restriction to simple sentences may have prevented the system from reaching its goal, because some elaborate steps possibly distort the meaning of the text. Moreover, the use of hand-drawn action icons as the only form of visualization makes the system very restricted. This is where the true value of modern text-to-scene systems can be seen, according to BIB003 .
Fig. 3 Example output of the text-to-picture synthesis system BIB002 , including one for the sentence "The girl called the king a frog" (right)
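The picturability idea described above can be sketched as a logistic model over the image-search versus web-search hit ratio. This is a hypothetical illustration: the hit counts, the example words, and the coefficients `w` and `b` below are invented stand-ins for what a logistic regression trained on labeled words would learn, not values from the original system.

```python
import math

def picturability(image_hits, web_hits, w=4.0, b=-1.0):
    """Score a word's picturability from search-frequency evidence:
    words whose image-search hits are high relative to their ordinary
    web-search hits tend to be easy to depict. w and b stand in for
    learned logistic-regression coefficients (hypothetical values)."""
    ratio = image_hits / max(web_hits, 1)         # guard against zero hits
    return 1.0 / (1.0 + math.exp(-(w * ratio + b)))

# Hypothetical hit counts: a concrete noun vs. an abstract one
concrete = picturability(image_hits=900_000, web_hits=1_000_000)  # e.g., "tiger"
abstract = picturability(image_hits=50_000, web_hits=1_000_000)   # e.g., "justice"
```

Under this sketch, a word like "tiger" with many image hits per web hit scores high, while an abstract word like "justice" scores low and would be dropped from the key phrase pool.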
|
Text-to-picture tools, systems, and approaches: a survey <s> Enriching textbooks with images <s> Textbooks have a direct bearing on the quality of education imparted to the students. Therefore, it is of paramount importance that the educational content of textbooks should provide rich learning experience to the students. Recent studies on understanding learning behavior suggest that the incorporation of digital visual material can greatly enhance learning. However, textbooks used in many developing regions are largely text-oriented and lack good visual material. We propose techniques for finding images from the web that are most relevant for augmenting a section of the textbook, while respecting the constraint that the same image is not repeated in different sections of the same chapter. We devise a rigorous formulation of the image assignment problem and present a polynomial time algorithm for solving the problem optimally. We also present two image mining algorithms that utilize orthogonal signals and hence obtain different sets of relevant images. Finally, we provide an ensembling algorithm for combining the assignments. To empirically evaluate our techniques, we use a corpus of high school textbooks in use in India. Our user study utilizing the Amazon Mechanical Turk platform indicates that the proposed techniques are able to obtain images that can help increase the understanding of the textbook material. <s> BIB001
|
This approach proposes techniques for finding images from the Web that are most relevant for augmenting a section of the textbook while also respecting the constraint that the same image is not repeated in different sections of the same chapter BIB001 . The techniques comprise optimizing the assignment of images to different sections within a chapter, mining images from the Web using multiple algorithms, and, finally, "ensembling" them. During image assignment, each section of the textbook is assigned the most relevant images such that the relevance score for the chapter is maximized, subject to the constraints that no section is assigned more than a certain maximum number of images (each section is augmented with at most k images) and no image is used more than once in the chapter (no image repeats across sections). A polynomial-time algorithm implements the optimizer. For image mining, two algorithms are used to obtain a ranked list of the top-k images and their relevance scores for a given section; various variants of these algorithms, as well as additional image-mining algorithms, can also be incorporated. The relevance score for an image is computed by analyzing the overlap between the concept phrases and the image metadata. The ranked lists of image assignments are then aggregated by image ensembling in order to produce the final result. Ensembling is done sequentially within a chapter, starting from the first section. Top images selected for a section are eliminated from the pool of available images for the remaining sections. The image assignment is then rerun, followed by ensembling for the next section.
Fig. 5 Example output for the section on "Dispersion of white light by a glass prism" BIB001
The evaluation conducted through the Amazon Mechanical Turk platform showed promising results for the proposed system and indicated that the proposed techniques are able to obtain images that can help increase the understanding of the textbook material; however, deeper analysis to identify key concepts is needed. Despite its promise, this work restricts concepts to a few part-of-speech categories (adjectives, nouns, and sometimes prepositions), thus ignoring all other categories of terms. Moreover, the proposed system does not support interactive modification of the images and does not embed any lexical or commonsense resources.
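The constrained assignment described above (at most k images per section, no image reused within a chapter) can be approximated with a simple greedy sweep over sections. This is a simplified sketch, not the polynomial-time optimal algorithm of BIB001 ; the section names, file names, and relevance scores below are all made up for illustration.

```python
def assign_images(section_scores, k=2):
    """Greedily assign each section its k highest-scoring images,
    removing every chosen image from the pool so that no image
    repeats across sections of the same chapter."""
    used = set()
    assignment = {}
    for section, scores in section_scores.items():
        ranked = sorted(scores, key=scores.get, reverse=True)
        picks = [img for img in ranked if img not in used][:k]
        used.update(picks)
        assignment[section] = picks
    return assignment

# Hypothetical mined relevance scores for two sections of one chapter
chapter = {
    "refraction": {"prism.png": 0.9, "lens.png": 0.7, "rainbow.png": 0.6},
    "dispersion": {"prism.png": 0.8, "rainbow.png": 0.75, "lens.png": 0.2},
}
result = assign_images(chapter, k=2)
```

Because "prism.png" and "lens.png" are claimed by the first section, the second section falls back to its next-best unused image, mirroring the sequential elimination the paper describes; the optimal algorithm instead maximizes the chapter-wide relevance jointly.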
|
Text-to-picture tools, systems, and approaches: a survey <s> Illustrate it! An Arabic multimedia text-to-picture m-learning system <s> We outline the design of a visualizer, named Vishit, for texts in the Hindi language. The Hindi language is lingua franca in many states of India where people speak different languages. The visualized text serves as a universal language where seamless communication is needed by many people who speak different languages and have different cultures. Vishit consists of the following three major processing steps: language processing, knowledge base creation and scene generation. Initial results from the Vishit prototype are encouraging. <s> BIB001 </s> Text-to-picture tools, systems, and approaches: a survey <s> Illustrate it! An Arabic multimedia text-to-picture m-learning system <s> Multimedia learning is the process of building mental representation from words associated with images. Due to the intuitiveness and vividness of visual illustration, many texts to picture systems have been proposed. However, we observe some common limitations in the existing systems, such as the retrieved pictures may not be suitable for educational purposes. Also, finding pedagogic illustrations still requires manual work, which is difficult and time-consuming. The commonly used systems based on the best keyword selection and the best sentence selection may suffer from loss of information. In this paper, we present an Arabic multimedia text-to-picture mobile learning system that is based on conceptual graph matching. Using a knowledge base, a conceptual graph is built from the text accompanied with the pictures in the multimedia repository as well as for the text entered by the user. Based on the matching scores of both conceptual graphs, matched pictures are assigned relative rankings. 
The proposed system demonstrated its effectiveness in the domain of Arabic stories, however, it can be easily shifted to any educational domain to yield pedagogical illustrations for organizational or institutional needs. Comparisons with the current state-of-the-art systems, based on the best keyword selection and the best sentence selection techniques, have demonstrated significant improvements in the performance. In addition, to facilitate educational needs, conceptual graph visualization and visual illustrative assessment modules are also developed. The conceptual graph visualization enables learners to discover relationships between words, and the visual illustrative assessment allows the system to automatically assess the performance of a learner. The profound user studies demonstrated the efficiency of the proposed multimedia learning system. <s> BIB002
|
Illustrate It! is an Arabic multimedia text-to-picture mobile learning system based on conceptual graph matching BIB002 . To build a multimedia repository, the system uses the Scribd online book library to collect educational stories, which are then stored locally in binary format and marked for text extraction. An educational ontology is built to provide educational resources covering different domains, such as the domain of animal stories; in particular, it describes the story's structure, the question's semantic structure, and the grammatical tree structure.

Fig. 7 Example output for the message "Good idea" BIB001

For text processing, relationships between entities in the story text are extracted using a basic formal concept analysis approach based on the composition of entity-property matrices. A conceptual graph is also built for the best selected sentence and for the best selected keywords. The obtained graph is used to select the best picture based on the maximum intersection between this graph and the conceptual graphs of the pictures in the multimedia repository. The proposed system addresses the limitations of existing systems by providing pedagogic illustrations. However, the current implementation cannot automatically find illustrations that lack annotations or textual content, and cannot locate annotated elements in the picture. Moreover, the system uses only cartoon images and disregards essential educational benefits of other multimedia types; it focuses only on entities and relationships, ignoring spatial and temporal relationships and other story clues that could be used to infer implicit knowledge.
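The maximum-intersection matching described above can be sketched as follows. This is not the system's actual implementation: the triple representation of a conceptual graph and all example data are assumptions for illustration.

```python
# Hypothetical sketch of conceptual-graph matching: each graph is a set
# of (concept, relation, concept) triples; a picture's rank is the size
# of the intersection between its graph and the query-text graph.

def match_score(text_graph, picture_graph):
    return len(text_graph & picture_graph)

def rank_pictures(text_graph, repository):
    """repository: {picture_id: graph} -> picture ids, best match first."""
    return sorted(repository,
                  key=lambda pid: match_score(text_graph, repository[pid]),
                  reverse=True)

query = {("fox", "agent", "run"), ("run", "location", "forest")}
repo = {
    "p1": {("fox", "agent", "run"), ("run", "location", "forest")},
    "p2": {("fox", "agent", "sleep")},
}
print(rank_pictures(query, repo))  # ['p1', 'p2']
```

Matching whole triples rather than bare keywords is what lets such a system distinguish, say, a fox running in a forest from a fox sleeping there, which keyword-only selection would conflate.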
|
Text-to-picture tools, systems, and approaches: a survey <s> WordsEye <s> Natural language is an easy and effective medium for describing visual ideas and mental images. Thus, we foresee the emergence of language-based 3D scene generation systems to let ordinary users quickly create 3D scenes without having to learn special software, acquire artistic skills, or even touch a desktop window-oriented interface. WordsEye is such a system for automatically converting text into representative 3D scenes. WordsEye relies on a large database of 3D models and poses to depict entities and actions. Every 3D model can have associated shape displacements, spatial tags, and functional properties to be used in the depiction process. We describe the linguistic analysis and depiction techniques used by WordsEye along with some general strategies by which more abstract concepts are made depictable. <s> BIB001 </s> Text-to-picture tools, systems, and approaches: a survey <s> WordsEye <s> A natural language interface exploits the conceptual simplicity and naturalness of the language to create a high-level user-friendly communication channel between humans and machines. One of the promising applications of such interfaces is generating visual interpretations of semantic content of a given natural language that can be then visualized either as a static scene or a dynamic animation. This survey discusses requirements and challenges of developing such systems and reports 26 graphical systems that exploit natural language interfaces and addresses both artificial intelligence and visualization aspects. This work serves as a frame of reference to researchers and to enable further advances in the field. <s> BIB002
|
WordsEye is a text-to-scene system that can automatically convert input text into representative, static, 3D scenes BIB001 . The system consists of two main components: a linguistic analyzer and a scene depicter. First, the input text that can include information about actions, spatial relationships, and object attributes is parsed, and a dependency structure is constructed that represents the dependencies among the words to facilitate the semantic analysis. This structure is then utilized to construct a semantic representation in which objects, actions, and relationships are represented in terms of semantic frames. Then, the depiction module converts the semantic frames into a set of low-level graphical specifications. For this purpose, a set of depiction rules is used to convert the objects, actions, relationships, and attributes from the extracted semantic representation to their realizable visual counterparts. The geometric information of the objects is manually tagged and attached to the 3D models. This component also employs a set of transduction rules to add implicit constraints and resolve conflicting constraints. Finally, once the layout is completed, the static scene is rendered using OpenGL similar to the example output shown in Fig. 8 . Although WordsEye has achieved a good degree of success, the allowed input language for describing the scenes is stilted, as mentioned in BIB002 . It is not interactive and does not exploit the user's feedback. Moreover, WordsEye relies on its huge offline rule base and data repositories containing different geometric shapes, types, and similar attributes. These elements are manually annotated, meaning that WordsEye lacks an automatic image annotation task.
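The rule-based conversion from semantic frames to low-level graphical specifications can be sketched roughly as below. This is not WordsEye's actual rule base; the rule names, frame fields, and output constraint vocabulary are invented for illustration.

```python
# Illustrative sketch of depiction rules: a rule maps a semantic frame
# (e.g. a spatial relation between two objects) to a low-level
# graphical specification, such as a placement constraint on 3D models.

DEPICTION_RULES = {
    "on":    lambda fig, ground: {"object": fig, "support": ground,
                                  "constraint": "top-surface"},
    "under": lambda fig, ground: {"object": fig, "support": ground,
                                  "constraint": "below"},
}

def depict(frame):
    """frame: {'relation': str, 'figure': str, 'ground': str}"""
    rule = DEPICTION_RULES.get(frame["relation"])
    if rule is None:
        raise ValueError(f"no depiction rule for {frame['relation']!r}")
    return rule(frame["figure"], frame["ground"])

spec = depict({"relation": "on", "figure": "cup", "ground": "table"})
print(spec)
# {'object': 'cup', 'support': 'table', 'constraint': 'top-surface'}
```

A real system would have thousands of such rules, plus transduction rules that add implicit constraints (a cup "on" a table implies contact and gravity support) and resolve conflicting ones before rendering.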
|
Text-to-picture tools, systems, and approaches: a survey <s> Confucius <s> Various English verb classifications have been analyzed in terms of their syntactic and semantic properties, and conceptual components, such as syntactic valency, lexical semantics, and semantic/syntactic correlations. Here the visual semantics of verbs, particularly their visual roles, somatotopic effectors, and level-of-detail, is studied. We introduce the notion of visual valency and use it as a primary criterion to recategorize eventive verbs for language visualization (animation) in our intelligent multimodal storytelling system, CONFUCIUS. The visual valency approach is a framework for modelling deeper semantics of verbs. In our ontological system we consider both language and visual modalities since CONFUCIUS is a multimodal system. <s> BIB001 </s> Text-to-picture tools, systems, and approaches: a survey <s> Confucius <s> A natural language interface exploits the conceptual simplicity and naturalness of the language to create a high-level user-friendly communication channel between humans and machines. One of the promising applications of such interfaces is generating visual interpretations of semantic content of a given natural language that can be then visualized either as a static scene or a dynamic animation. This survey discusses requirements and challenges of developing such systems and reports 26 graphical systems that exploit natural language interfaces and addresses both artificial intelligence and visualization aspects. This work serves as a frame of reference to researchers and to enable further advances in the field. <s> BIB002
|
CONFUCIUS is a multi-modal text-to-animation conversion system that can generate animations from a single input sentence containing an action verb and synchronize them with speech. It is composed of several modules with different tasks to accomplish; we briefly mention the relevant ones:

1. Knowledge base: encompasses a lexicon, a parser, and a visual database that contains a very limited set of 3D models and action animations.
2. Language processor: uses a Connexor functional-dependency grammar parser, WordNet, and a lexical conceptual structure database to parse the input sentence, analyze the semantics, and output a lexical visual semantic representation.
3. Media allocator: exploits the acquired semantics to generate an XML representation of three modalities: an animation engine, a speech engine, and narration.
4. Animation engine: uses the generated XML representation and the visual database to generate 3D animations, including sound effects.
5. Text-to-speech engine: uses the XML representation to generate speech.
6. Story narrator: uses the XML representation to initialize the narrator agent.
7. Synchronizer: integrates these modalities into a virtual reality modelling language file that is later used to render the animation.

CONFUCIUS can address the temporal relationships between actions. It integrates the notion of visual valency BIB001 , a framework for deeper semantics of verbs, and uses it as a primary criterion to re-categorize eventive verbs for the animation. It utilizes the humanoid animation standard for modeling and animating the virtual humans. As seen in Fig. 9 , CONFUCIUS supports lip synchronization, facial expressions, and parallel animation of the upper and lower body of human models.

Fig. 9 The output animation of "John put a cup on the table"

Generally, CONFUCIUS is not interactive in the sense that it does not let the user modify the generated animation BIB002 . 
In addition, each input is limited to a single sentence in a restricted format (i.e., one action verb per sentence, and only simple verbs are considered), so the user is constrained in expressing the intended description.
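The media-allocation step, which splits the sentence semantics into animation, speech, and narration channels serialized as XML, might look roughly like the sketch below. The tag names, attribute names, and example semantics are invented; only the three-modality XML structure is taken from the system description above.

```python
# Hedged sketch of media allocation: the semantics of a parsed sentence
# are split into three modality channels (animation, speech, narration)
# and serialized as XML for the downstream engines.

import xml.etree.ElementTree as ET

def allocate_media(semantics):
    root = ET.Element("presentation")
    anim = ET.SubElement(root, "animation", actor=semantics["agent"])
    anim.text = semantics["action"]
    ET.SubElement(root, "speech").text = semantics["utterance"]
    ET.SubElement(root, "narration").text = semantics["narration"]
    return ET.tostring(root, encoding="unicode")

xml = allocate_media({
    "agent": "John",
    "action": "put cup on table",
    "utterance": "Here you go.",
    "narration": "John put a cup on the table.",
})
print(xml)
```

A shared intermediate representation like this is what allows a synchronizer to align lip movement, audio, and body animation, since all three engines consume the same document.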
|
Text-to-picture tools, systems, and approaches: a survey <s> Discussion <s> A natural language interface exploits the conceptual simplicity and naturalness of the language to create a high-level user-friendly communication channel between humans and machines. One of the promising applications of such interfaces is generating visual interpretations of semantic content of a given natural language that can be then visualized either as a static scene or a dynamic animation. This survey discusses requirements and challenges of developing such systems and reports 26 graphical systems that exploit natural language interfaces and addresses both artificial intelligence and visualization aspects. This work serves as a frame of reference to researchers and to enable further advances in the field. <s> BIB001 </s> Text-to-picture tools, systems, and approaches: a survey <s> Discussion <s> We describe the investigation of automatic annotation of text with pictures, where knowledge extraction uses dependency parsing. Annotation of text with pictures, a form of knowledge visualization, can assist understanding. The problem statement is, given a corpus of images and a short passage of text, extract knowledge (or concepts), and then display that knowledge in pictures along with the text to help with understanding. A proposed solution framework includes a component to extract document concepts, a component to match document concepts with picture metadata, and a component to produce an amalgamated output of text and pictures. A proof-of-concept application based on the proposed framework provides encouraging results <s> BIB002 </s> Text-to-picture tools, systems, and approaches: a survey <s> Discussion <s> The vast array of information available on the Web makes it a challenge for readers to quickly browse through and decide about the importance and relevance of content. 
Interpreting large-volumes of data is particularly demanding for users with handheld devices in the social media and micro-blogging sphere. Various approaches address this challenge through text summarization, content ranking and personalized recommendation. We describe a family of techniques that help users understand text by automatically annotating text with pictures, referred to as text picturing. The objective is to find a set of pictures that cover the main concepts in a textual snippet. We provide an overview of text picturing, its constituent steps such as knowledge extraction, mapping, scene rendering, as well as application areas. We give a picturing-related literature overview, and list use-cases that offer IT professionals insight into how picturing techniques can be successfully incorporated into real world applications. <s> BIB003
|
The reviewed text-to-picture systems treat the problem of mapping natural language descriptions to a visual representation as an image retrieval and ranking problem BIB001 . The authors in BIB002 see the problem from another perspective, namely as a knowledge extraction and translation problem. In general, these systems extract concepts from the input text, match them against the image annotations, and retrieve and rank a subset of the images based on some predefined similarity measures. The retrieved images with the highest ranks are then displayed. The literature shows many attempts at illustrating text with pictures to support understanding and communication, variously described as translating text to a picture, text picturing, natural language visualization, etc. Hence, the common features of most text-to-picture systems and approaches include the following BIB003 , which are also illustrated in Fig. 10 .
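The extract-match-rank pipeline described above can be condensed into a minimal sketch. The stopword list, similarity measure (Jaccard), and example annotations are assumptions for illustration, not any particular system's implementation.

```python
# Minimal sketch of the common retrieve-and-rank pipeline: extract
# concepts from the text, match them against image annotations with a
# simple similarity measure, and return images ranked by score.

STOPWORDS = {"a", "an", "the", "is", "on", "in", "of"}

def concepts(text):
    return {w for w in text.lower().split() if w not in STOPWORDS}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def rank(text, annotations):
    """annotations: {image_id: annotation text} -> image ids, best first."""
    q = concepts(text)
    return sorted(annotations,
                  key=lambda i: jaccard(q, concepts(annotations[i])),
                  reverse=True)

imgs = {"dog.png": "a dog on grass", "car.png": "a red car"}
print(rank("the dog runs on the grass", imgs))  # ['dog.png', 'car.png']
```

Real systems differ mainly in how each stage is made smarter: concept extraction via parsing instead of stopword filtering, and similarity via semantic distance (e.g., over WordNet) instead of raw set overlap.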
|
Text-to-picture tools, systems, and approaches: a survey <s> Knowledge Match <s> Natural language is an easy and effective medium for describing visual ideas and mental images. Thus, we foresee the emergence of language-based 3D scene generation systems to let ordinary users quickly create 3D scenes without having to learn special software, acquire artistic skills, or even touch a desktop window-oriented interface. WordsEye is such a system for automatically converting text into representative 3D scenes. WordsEye relies on a large database of 3D models and poses to depict entities and actions. Every 3D model can have associated shape displacements, spatial tags, and functional properties to be used in the depiction process. We describe the linguistic analysis and depiction techniques used by WordsEye along with some general strategies by which more abstract concepts are made depictable. <s> BIB001 </s> Text-to-picture tools, systems, and approaches: a survey <s> Knowledge Match <s> A natural language interface exploits the conceptual simplicity and naturalness of the language to create a high-level user-friendly communication channel between humans and machines. One of the promising applications of such interfaces is generating visual interpretations of semantic content of a given natural language that can be then visualized either as a static scene or a dynamic animation. This survey discusses requirements and challenges of developing such systems and reports 26 graphical systems that exploit natural language interfaces and addresses both artificial intelligence and visualization aspects. This work serves as a frame of reference to researchers and to enable further advances in the field. <s> BIB002
|
• Match of text and picture metadata
• Rendering single image or collage of images

Fig. 10 Text and image processing pipeline

features fused with semantic feature associations instead of text and image annotations. Second, the systems have evolved in their produced output such that the early systems provide only one representative picture, whereas successor systems provide a set of images ordered based on the temporal flow of the input. Table 1 below gives an overall comparison focusing on the NLP, NLU, input, and output modalities of the reviewed real and functioning text-to-picture systems only. The plus/minus (±) signs indicate the support each system has for the features listed in the table header. As the table indicates, WordsEye is the only system that has a good NLU component; however, the allowed input language for describing the scenes is stilted BIB002 . The other systems, which have more enriched input and output interfaces, have weaker NLP, and all completely lack NLU. Many other features are shown in the following tables for a comparison between the reviewed text-to-picture approaches and systems. We focus on the text analysis models used by these systems, categorizing them into two groups: systems following shallow semantic analysis and systems following deep semantic analysis. As shown in Table 2 , systems that use shallow semantic analysis models typically provide naïve semantic parsing such as semantic role labeling, whereas systems that use deep semantic analysis or linguistic approaches investigate deeper semantic parsing such as dependency parsing and semantic parsing (see Table 3 ). First of all, we summarize the technical details, text processing, and rendering features of the reviewed prior work with the following criteria: text resources determine the lexical resources (e.g., WordNet) and help to add related context to the input; image resources determine the visual resources (e.g., Google images or Flickr) and enable automated image retrieval. 
Table 1 Support for NLP, NLU, input, and output modalities (±):
Utkus: + − + −
WordsEye BIB001 : + + − +
CONFUCIUS: + − − +

Table 2 Comparison of text-to-picture systems following shallow semantic analysis
|
Text-to-picture tools, systems, and approaches: a survey <s> Remarks and findings obtained <s> Natural language is an easy and effective medium for describing visual ideas and mental images. Thus, we foresee the emergence of language-based 3D scene generation systems to let ordinary users quickly create 3D scenes without having to learn special software, acquire artistic skills, or even touch a desktop window-oriented interface. WordsEye is such a system for automatically converting text into representative 3D scenes. WordsEye relies on a large database of 3D models and poses to depict entities and actions. Every 3D model can have associated shape displacements, spatial tags, and functional properties to be used in the depiction process. We describe the linguistic analysis and depiction techniques used by WordsEye along with some general strategies by which more abstract concepts are made depictable. <s> BIB001 </s> Text-to-picture tools, systems, and approaches: a survey <s> Remarks and findings obtained <s> We present an unsupervised approach to automated story picturing. Semantic keywords are extracted from the story, an annotated image database is searched. Thereafter, a novel image ranking scheme automatically determines the importance of each image. Both lexical annotations and visual content play a role in determining the ranks. Annotations are processed using the Wordnet. A mutual reinforcement-based rank is calculated for each image. We have implemented the methods in our Story Picturing Engine (SPE) system. Experiments on large-scale image databases are reported. A user study has been performed and statistical analysis of the results has been presented. 
<s> BIB002 </s> Text-to-picture tools, systems, and approaches: a survey <s> Remarks and findings obtained <s> Our proposed software system, SceneMaker, aims to facilitate the production of plays, films or animations by automatically interpreting natural language film scripts and generating multimodal, animated scenes from them. During the generation of the story content, SceneMaker will give particular attention to emotional aspects and their reflection in fluency and manner of actions, body posture, facial expressions, speech, scene composition, timing, lighting, music and camera work. Related literature and software on Natural Language Processing, in particular textual affect sensing, affective embodied agents, visualisation of 3D scenes and digital cinematography are reviewed. In relation to other work, SceneMaker will present a genre-specific text-to-animation methodology which combines all relevant expressive modalities. In conclusion, SceneMaker will enhance the communication of creative ideas providing quick pre-visualisations of scenes. <s> BIB003 </s> Text-to-picture tools, systems, and approaches: a survey <s> Remarks and findings obtained <s> Textbooks have a direct bearing on the quality of education imparted to the students. Therefore, it is of paramount importance that the educational content of textbooks should provide rich learning experience to the students. Recent studies on understanding learning behavior suggest that the incorporation of digital visual material can greatly enhance learning. However, textbooks used in many developing regions are largely text-oriented and lack good visual material. We propose techniques for finding images from the web that are most relevant for augmenting a section of the textbook, while respecting the constraint that the same image is not repeated in different sections of the same chapter. 
We devise a rigorous formulation of the image assignment problem and present a polynomial time algorithm for solving the problem optimally. We also present two image mining algorithms that utilize orthogonal signals and hence obtain different sets of relevant images. Finally, we provide an ensembling algorithm for combining the assignments. To empirically evaluate our techniques, we use a corpus of high school textbooks in use in India. Our user study utilizing the Amazon Mechanical Turk platform indicates that the proposed techniques are able to obtain images that can help increase the understanding of the textbook material. <s> BIB004 </s> Text-to-picture tools, systems, and approaches: a survey <s> Remarks and findings obtained <s> We outline the design of a visualizer, named Vishit, for texts in the Hindi language. The Hindi language is lingua franca in many states of India where people speak different languages. The visualized text serves as a universal language where seamless communication is needed by many people who speak different languages and have different cultures. Vishit consists of the following three major processing steps: language processing, knowledge base creation and scene generation. Initial results from the Vishit prototype are encouraging. <s> BIB005 </s> Text-to-picture tools, systems, and approaches: a survey <s> Remarks and findings obtained <s> Instant messaging service is an important aspect of social media and sprung up in last decades. Traditional instant messaging service transfers information mainly based on textual message, while the visual message is ignored to a great extent. Such instant messaging service is thus far from satisfactory in all-around information communication. In this paper, we propose a novel visual assisted instant messaging scheme named Chat with illustration (CWI), which presents users visual messages associated with textual message automatically. 
When users start their chat, the system first identifies meaningful keywords from dialogue content and analyzes grammatical and logical relations. Then CWI explores keyword-based image search on a hierarchically clustering image database which is built offline. Finally, according to grammatical and logical relations, CWI assembles these images properly and presents an optimal visual message. With the combination of textual and visual message, users could get a more interesting and vivid communication experience. Especially for different native language speakers, CWI can help them cross language barrier to some degree. In addition, a visual dialogue summarization is also proposed, which help users recall the past dialogue. The in-depth user studies demonstrate the effectiveness of our visual assisted instant messaging scheme. <s> BIB006 </s> Text-to-picture tools, systems, and approaches: a survey <s> Remarks and findings obtained <s> A natural language interface exploits the conceptual simplicity and naturalness of the language to create a high-level user-friendly communication channel between humans and machines. One of the promising applications of such interfaces is generating visual interpretations of semantic content of a given natural language that can be then visualized either as a static scene or a dynamic animation. This survey discusses requirements and challenges of developing such systems and reports 26 graphical systems that exploit natural language interfaces and addresses both artificial intelligence and visualization aspects. This work serves as a frame of reference to researchers and to enable further advances in the field. <s> BIB007 </s> Text-to-picture tools, systems, and approaches: a survey <s> Remarks and findings obtained <s> The vast array of information available on the Web makes it a challenge for readers to quickly browse through and decide about the importance and relevance of content. 
Interpreting large-volumes of data is particularly demanding for users with handheld devices in the social media and micro-blogging sphere. Various approaches address this challenge through text summarization, content ranking and personalized recommendation. We describe a family of techniques that help users understand text by automatically annotating text with pictures, referred to as text picturing. The objective is to find a set of pictures that cover the main concepts in a textual snippet. We provide an overview of text picturing, its constituent steps such as knowledge extraction, mapping, scene rendering, as well as application areas. We give a picturing-related literature overview, and list use-cases that offer IT professionals insight into how picturing techniques can be successfully incorporated into real world applications. <s> BIB008 </s> Text-to-picture tools, systems, and approaches: a survey <s> Remarks and findings obtained <s> We describe the investigation of automatic annotation of text with pictures, where knowledge extraction uses dependency parsing. Annotation of text with pictures, a form of knowledge visualization, can assist understanding. The problem statement is, given a corpus of images and a short passage of text, extract knowledge (or concepts), and then display that knowledge in pictures along with the text to help with understanding. A proposed solution framework includes a component to extract document concepts, a component to match document concepts with picture metadata, and a component to produce an amalgamated output of text and pictures. A proof-of-concept application based on the proposed framework provides encouraging results <s> BIB009 </s> Text-to-picture tools, systems, and approaches: a survey <s> Remarks and findings obtained <s> Multimedia learning is the process of building mental representation from words associated with images. 
Due to the intuitiveness and vividness of visual illustration, many texts to picture systems have been proposed. However, we observe some common limitations in the existing systems, such as the retrieved pictures may not be suitable for educational purposes. Also, finding pedagogic illustrations still requires manual work, which is difficult and time-consuming. The commonly used systems based on the best keyword selection and the best sentence selection may suffer from loss of information. In this paper, we present an Arabic multimedia text-to-picture mobile learning system that is based on conceptual graph matching. Using a knowledge base, a conceptual graph is built from the text accompanied with the pictures in the multimedia repository as well as for the text entered by the user. Based on the matching scores of both conceptual graphs, matched pictures are assigned relative rankings. The proposed system demonstrated its effectiveness in the domain of Arabic stories, however, it can be easily shifted to any educational domain to yield pedagogical illustrations for organizational or institutional needs. Comparisons with the current state-of-the-art systems, based on the best keyword selection and the best sentence selection techniques, have demonstrated significant improvements in the performance. In addition, to facilitate educational needs, conceptual graph visualization and visual illustrative assessment modules are also developed. The conceptual graph visualization enables learners to discover relationships between words, and the visual illustrative assessment allows the system to automatically assess the performance of a learner. The profound user studies demonstrated the efficiency of the proposed multimedia learning system. <s> BIB010
|
Many reviews such as BIB007 BIB008 BIB009 , and research works such as on text-to-picture systems highlight the following issues: 1. Natural language understanding: Most of the systems face many technical difficulties in understanding natural language. Therefore, they restrict the form of the input text to overcome these difficulties (e.g., one sentence in simple English is allowed as the input for a text-to-picture synthesizer) . Other approaches restrict the conceptual domain to a specific domain (e.g., Vishit BIB005 restricts the conceptual domain to the domain of animals in different environments). 2. Natural language processing: Most of the systems focus on the information retrieval task and do not elaborate on the language processing aspects, including morphological, syntactic, and semantic analyses. However, in terms of language understanding and richness of the model repository, WordsEye BIB001 outperforms all reviewed systems. 3. Syntax analysis: Half of the systems use the bag-of-words representation model that treats a given text as a set of words and frequencies and disregards the syntax and word order. The rest of the reviewed systems utilize POS tagging. However, most systems do not analyze all words; some focus on just one or two parts of speech (e.g., the approach in BIB004 considers only nouns and adjectives). 8. Rule-based: Few systems use rule-based methodology; however, current data-driven systems do not outperform the rule-based systems BIB007 . This is probably because the data-driven systems have only been used for feasibility studies, whereas a few rule-based systems such as WordsEye are commercialized and supported by the required resources for crafting as many rules as possible. 9. Input/Output: In terms of inputs, only a few systems allow for a general, unrestricted natural language (e.g., WordsEye). On the other hand, systems have evolved in terms of output. 
The early systems provided the users with only one representative picture, as described in BIB002 , whereas later systems have provided users with a set of images based on their relevance and have also provided an appropriate layout. More sophisticated outputs in the form of 3D animations with sound effects and displayed emotions are also available, as described in BIB003 . 10. External text resources: Most of the systems used the WordNet lexicon as a typical text knowledge source in earlier works. However, a large proportion of the general-domain systems that require common-sense knowledge are not equipped with any knowledge resources. This fact highlights another fundamental problem of the current systems. They simply ignore the knowledge resources, meaning that they cannot infer in unpredictable situations and cannot be adaptive. 11. External image resource: Most of the systems rely on third-party image collections such as Flickr, while only a few rely on their own offline image resource with excessive preprocessing stages that include backgrounds and frames (e.g., CWI BIB006 relies on making excessive preparations of image resources). The visualization within this system is restricted to available images within that resource. 12. Image annotation: Most of the systems exploit the surrounding text of the images and the text appearing within HTML tags. Some of the systems apply an automatic annotation by collecting both images and text and then using the co-occurring text around the images to annotate them. On the other hand, there are other techniques attempting to extract the text within the image (e.g., Illustrate It! BIB010 transforms the image into a binary format and employs the library Tess4J 6 for optical character recognition to transform the textual content in the image into characters that are exploited for matching relevant images). 13. 
Image retrieval process: Most of the systems carry out this process by extracting concepts from the input text and then matching them against the image annotations, after which a subset of images is retrieved and ranked for a given concept based on some predefined similarity measures. In some systems, the retrieved images with the highest rank are then illustrated based on an image layout algorithm. 14. Image layout: Most of the systems devote significant effort to image selection, image clustering, and image layout (e.g., CWI BIB006 applies several image layout templates to cover grammatical relationships in a dialogue). 15. Semantic Web: Resources of the Semantic Web are not used, except for ontologies. 16. Interactivity: Most of the systems are not interactive because they lack a solid mechanism to harvest the information from user interactions and feedback. 17. Adaptivity: Few systems are adaptive, and most of these systems also ignore a priori knowledge provided by experts or other resources. Hence, the literature shows that successful text-to-picture systems have good language understanding components, but also have fewer input channels and less intuitive visual layouts in terms of output. In contrast, multimodal systems have richer input/output interfaces and better graphics quality, but they suffer from weaker NLP, as some of them simply ignore NLP completely, as mentioned in . Overall, we have identified three main problems with the current systems. The first problem is associated with NLU, since these systems cannot capture the deep semantics embedded within natural language descriptions. The second problem is related to visualization, which is, in turn, restricted to available images. The third problem is rooted in the fact that the current systems lack the resources (e.g., lexicons) and techniques needed to manage and integrate open-source datasets in real time.
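The retrieval-and-ranking pipeline described above (extract concepts from the text, match them against per-image annotations, rank by a similarity measure) can be sketched as follows. This is a minimal illustration, not the method of any particular reviewed system: the stopword list, the toy annotation index, and the choice of Jaccard overlap as the similarity measure are all illustrative assumptions.

```python
# Sketch of a concept-to-image retrieval step, as commonly described for
# text-to-picture systems: extract concept words from the input text,
# match them against per-image annotation sets, and rank the images by
# a similarity score (Jaccard overlap here, as an illustrative choice).

def extract_concepts(text, stopwords=frozenset({"a", "an", "the", "on", "in"})):
    """Naive concept extraction: lowercase content words (a stand-in for
    the bag-of-words / POS-tagging front ends used by the reviewed systems)."""
    return {w for w in text.lower().split() if w not in stopwords}

def jaccard(a, b):
    """Jaccard similarity between two sets of words."""
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(text, annotated_images, top_k=2):
    """Rank annotated images against the concepts of the input text
    and return the top_k matching image names."""
    concepts = extract_concepts(text)
    scored = [(jaccard(concepts, set(tags)), name)
              for name, tags in annotated_images.items()]
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_k] if score > 0]

# Hypothetical annotation index (image name -> annotation keywords).
index = {
    "img_dog_park.jpg": ["dog", "park", "grass"],
    "img_cat_sofa.jpg": ["cat", "sofa"],
    "img_dog_beach.jpg": ["dog", "beach", "sea"],
}
print(retrieve("a dog in the park", index))
```

Real systems differ mainly in the two components this sketch stubs out: the concept extractor (full NLP pipelines rather than stopword filtering) and the similarity measure (often WordNet-based semantic distances rather than raw word overlap).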
|
Text-to-picture tools, systems, and approaches: a survey <s> Effectiveness of the survey <s> We present an unsupervised approach to automated story picturing. Semantic keywords are extracted from the story, an annotated image database is searched. Thereafter, a novel image ranking scheme automatically determines the importance of each image. Both lexical annotations and visual content play a role in determining the ranks. Annotations are processed using the Wordnet. A mutual reinforcement-based rank is calculated for each image. We have implemented the methods in our Story Picturing Engine (SPE) system. Experiments on large-scale image databases are reported. A user study has been performed and statistical analysis of the results has been presented. <s> BIB001 </s> Text-to-picture tools, systems, and approaches: a survey <s> Effectiveness of the survey <s> A natural language interface exploits the conceptual simplicity and naturalness of the language to create a high-level user-friendly communication channel between humans and machines. One of the promising applications of such interfaces is generating visual interpretations of semantic content of a given natural language that can be then visualized either as a static scene or a dynamic animation. This survey discusses requirements and challenges of developing such systems and reports 26 graphical systems that exploit natural language interfaces and addresses both artificial intelligence and visualization aspects. This work serves as a frame of reference to researchers and to enable further advances in the field. <s> BIB002 </s> Text-to-picture tools, systems, and approaches: a survey <s> Effectiveness of the survey <s> The vast array of information available on the Web makes it a challenge for readers to quickly browse through and decide about the importance and relevance of content. 
Interpreting large volumes of data is particularly demanding for users with handheld devices in the social media and micro-blogging sphere. Various approaches address this challenge through text summarization, content ranking and personalized recommendation. We describe a family of techniques that help users understand text by automatically annotating text with pictures, referred to as text picturing. The objective is to find a set of pictures that cover the main concepts in a textual snippet. We provide an overview of text picturing, its constituent steps such as knowledge extraction, mapping, scene rendering, as well as application areas. We give a picturing-related literature overview, and list use-cases that offer IT professionals insight into how picturing techniques can be successfully incorporated into real world applications. <s> BIB003 </s> Text-to-picture tools, systems, and approaches: a survey <s> Effectiveness of the survey <s> Increasingly sophisticated methods for data processing demand knowledge on the semantic relationship between language and vision. New fields of research like Explainable AI demand to step away from black-boxed approaches and understanding how the underlying semantics of data sets and AI models work. Advancements in Psycholinguistics suggest that there is a relationship from language perception to how language production and sentence creation work. In this paper, a method to measure the visual variety of concepts is proposed to quantify the semantic gap between vision and language. For this, an image corpus is recomposed using ImageNet and Web data. Web-based metrics for measuring the popularity of sub-concepts are used as a weighting to ensure that the image composition in a dataset is as natural as possible. Using clustering methods, a score describing the visual variety of each concept is determined. A crowd-sourced survey is conducted to create ground-truth values applicable for this research. 
The evaluations show that the recomposed image corpus largely improves the measured variety compared to previous datasets. The results are promising and give additional knowledge about the relationship of language and vision. <s> BIB004
|
To the best of our knowledge, this survey is one of the few reviews BIB002 BIB003 of text-to-picture systems and approaches associated with illustrating natural language. This survey has been carried out to assess the feasibility and expected outcome of illustrating the Arabic language as a proof of concept. This work has presented the main problems faced by text-to-picture systems with respect to NLP, NLU, and many other requirements. For each reviewed system, we elaborated on the system's inputs and outputs, design methodology, language processes, and knowledge resources, as well as discussing the advantages and disadvantages. Many other features are shown in Table 2 and Table 3 for a clear comparison between the reviewed text-to-picture approaches and systems. We focused on the NLP and analysis models used by these systems, and thus categorized them into two groups. We concluded that systems following deep semantic analysis have higher accuracy compared to those following shallow semantic analysis. We have also shown some systems that have enriched input and output interfaces, but weaker NLP and NLU, and therefore weaker accuracy. This not only reflects the current technical difficulties in understanding natural language, but also showcases the semantic gap BIB004 between human perception and computer vision; i.e., the gap between humans perceiving their surroundings and the computer analyzing datasets. Furthermore, the survey showed that there is no open dataset available for the purpose of illustrating natural language, or at least for common language concepts in general. Thus, in order to overcome the semantic gap, it is important to have a deep understanding of how a language's vocabulary and its visual representations connect. 
Whereas some text-to-picture systems rely on many filtering algorithms and techniques in order to obtain appropriate materials from Web image searches, other systems create their own multimedia datasets, which reveals the excessive manual effort behind these systems. In terms of input/output modalities, early systems provided only one representative image (e.g. the story picturing engine BIB001 ), whereas recent systems provide a set of images (e.g. Word2Image ). In terms of spatial and temporal relationships, none of the reviewed text-to-picture systems was able to address them; this is probably because these relationships can only be visualized through dynamic systems (e.g., animation). It should be noted that some of the reviewed systems are no longer available or enhanced (e.g., the story picturing engine BIB001 ). Ultimately, we have concluded that text-to-picture conversion systems will not significantly improve until machine vision and language understanding methods are improved, as argued in BIB002 .
|
Asset Mapping as a Tool for Identifying Resources in Community Health: A Methodological Overview <s> Usefulness of an Asset Map <s> Abstract Background. Childhood obesity is an epidemic. Addressing this problem will require the input of many sectors and change in many behaviors. The “community” must be part of the solution, and the solution must be constructed on existing assets that lend strength to positive environmental change. Objective. To catalyze an established asset-based community partnership to support efforts to reduce television viewing time by developing and providing alternative activities as part of a broader, 3-year study to reduce childhood obesity among preschool-aged children in rural, upstate New York. Method. Asset mapping was utilized to compile an inventory of individual and community strengths upon which a partnership could be established. Facilitated focus group sessions were conducted to better understand childcare environmental policies and practices, and to guide changes conducive to health and fitness. Planning meetings and targeted outreach brought key stakeholders together for a community-participatory initiative to support positive environmental change. Results. The partnership planned and initiated an array of after-school and weekend community activities for preschool-aged children and their families in the weeks preceding, during, and following a designated ‘TV Turn-off’ week in April, 2004 and March, 2005. Conclusion. Methods of asset-based community development are an effective way to engage community participation in public health initiatives. <s> BIB001 </s> Asset Mapping as a Tool for Identifying Resources in Community Health: A Methodological Overview <s> Usefulness of an Asset Map <s> This literature review is a discussion of asset-based approaches to community engagement. 
Following a literature search, we identified several asset mapping approaches: Asset-Based Community Development (ABCD); Participatory Inquiry into Religious Health Assets, Networks and Agency (PIRHANA); Community Health Assets Mapping for Partnerships (CHAMP); the Sustainable Livelihoods Approach (SLA); Planning for Real® and approaches using Geographic Information Systems (GIS). These approaches are framed by assumptions about ‘assets’, ‘needs’, and ‘community’ and their associated community engagement methods that may be influenced by dynamics related to conflict, competition and language. We conclude that asset mapping approaches derive their value from their capacities to support partnership building, consensus creation, and community agency and control. <s> BIB002
|
Asset mapping, in general, refers to the comprehensive compilation of a list of assets and the further optional step of then locating and listing the assets on a map. As noted in both , an asset map can be a "useful tool for assessing health-related needs, disparities, and inequities within communities." Just as the World Health Organization (WHO, 2018) shifted their definition of health from "the absence of disease or infirmity" to "the state of complete physical, mental, and social well-being", asset maps can help shift the culture of community health from that of assessing deficiencies to an approach that takes other factors into account as well. Through including other factors when addressing health, communities have "an opportunity to mobilize existing strengths and resources." BIB002 In the long run, this ability can help "build social capital to catalyze change" BIB001 to improve overall community and population health. The steps outlined in this article are adapted from the UCLA Centre for Health Policy Research's guideline for creating an asset map (UCLA Centre for Health Policy Research, 2018) . This was chosen as a guide because of UCLA's well-respected public health program and the simplicity and pragmatism with which the guide is presented. This paper will assist readers in discovering why and how an asset map can be valuable to projects and communities, identifying examples of the assets to be mapped and who will be mapping them, presenting a step-by-step process guide, and lastly applying the guide to construct an asset map.
|
Asset Mapping as a Tool for Identifying Resources in Community Health: A Methodological Overview <s> Examples of Asset Mapping in Health and Wellness <s> BACKGROUND ::: The Hispanic/Latino population in Forsyth County, North Carolina, is growing quickly and experiencing significant disparities in access to care and health outcomes. Assessing community perceptions and utilization of health care resources in order to improve health equity among Hispanics/Latinos at both the county and state levels is critical. ::: ::: ::: METHODS ::: Our community engagement process was guided by the Community Health Assets Mapping Partnerships (CHAMP) approach, which helps identify gaps in health care availability and areas for immediate action to improve access to and quality of health care. Specifically, we invited and encouraged the Hispanic/Latino population to participate in 4 different workshops conducted in Spanish or English. Participants were identified as either health care providers, defined as anyone who provides health care or a related service, or health care seekers, defined as anyone who utilizes such services. ::: ::: ::: RESULTS ::: The most commonly cited challenges to access to care were cost of health care, documentation status, lack of public transportation, racism, lack of care, lack of respect, and education/language. These data were utilized to drive continued engagement with the Hispanic community, and action steps were outlined. ::: ::: ::: LIMITATIONS ::: While participation in the workshops was acceptable, greater representation of health care seekers and community providers is needed. ::: ::: ::: CONCLUSIONS ::: This process is fundamental to multilevel initiatives under way to develop trust and improve relationships between the Hispanic/Latino community and local health care entities in Forsyth County. 
Follow-through on recommended action steps will continue to further identify disparities, close gaps in care, and potentially impact local and state policies with regard to improving the health status of the Hispanic/Latino community. <s> BIB001 </s> Asset Mapping as a Tool for Identifying Resources in Community Health: A Methodological Overview <s> Examples of Asset Mapping in Health and Wellness <s> Asset-based approaches seek to identify and mobilise the personal, social and organisational resources available to communities. Asset mapping is a recognised method of gathering an inventory of neighbourhood assets and is underpinned by a fundamentally different logic to traditional needs assessments. The aim of this paper is to explore how asset mapping might be used as a tool for health improvement. It reports on a qualitative evaluation of a pilot asset mapping project carried out in two economically disadvantaged neighbourhoods in Sheffield, UK. The project involved community health champions working with two community organisations to identify assets linked to the health and wellbeing of their neighbourhoods. The evaluation was undertaken in 2012 after mapping activities had been completed. A qualitative design, using theory of change methodology, was used to explore assumptions between activities, mechanisms and outcomes. Semi structured interviews were undertaken with a purposive sample of 11 stakeholders including champions, community staff and strategic partners. Thematic analysis was used and themes were identified on the process of asset mapping, the role of champions and the early outcomes for neighbourhoods and services. Findings showed that asset mapping was developmental and understandings grew as participatory activities were planned and implemented. The role of the champions was limited by numbers involved, nonetheless meaningful engagement occurred with residents which led to personal and social resources being identified. 
Most early outcomes were focused on the lead community organisations. There was less evidence of results feeding into wider planning processes because of the requirements for more quantifiable information. The paper discusses the importance of relational aspects of asset mapping both within communities and between communities and services. The conclusions are that it is insufficient to switch from the logic of needs to assets without building asset mapping as part of a broader planning process. <s> BIB002
|
A methodological concept can be better understood through examples of it being applied in different contexts BIB001 BIB002 . For this reason, this section provides a quick overview of three different areas where asset maps have proven useful: health research, community engagement, and community partnerships. This is followed by a more in-depth look at a possible asset map compilation of health-related services available for the refugee population in Calgary.
|
Asset Mapping as a Tool for Identifying Resources in Community Health: A Methodological Overview <s> Community Engagement <s> As library systems across Canada begin to grapple with the implications community-led service planning has on program and service development, new tools are being developed to assist library staff. Asset mapping is one community entry tool which allows library staff to access community members (or organizational representatives) and gather in-depth information impacting services and program identification, development, and implementation – either across a library system or within a branch catchment. By using this tool, through face-to-face conversations with service providers, library staff find out about existing community assets (such as programs and services different organizations offer) and begin to develop relationships with community members receiving services. Asset mapping provides libraries with information to identify priority services that complement existing community resources. The information collected extends beyond a directory and is used to develop and deliver services relevant to the needs of community. ::: ::: This paper, based on a presentation given at the 2011 Canadian Library Association conference, discusses asset mapping as a first step to actively engaging community. As a first step in the engagement process, it specifically focuses on the process of asset mapping organizations that provide services to immigrants. Asset mapping is a powerful tool which can be implemented by librarians with any group of interest in order to understand community identified information needs, determine existing community strengths and assets, and to help understand the library's role in developing service and program responses to these needs. 
<s> BIB001 </s> Asset Mapping as a Tool for Identifying Resources in Community Health: A Methodological Overview <s> Community Engagement <s> Asset mapping has emerged as a promising tool for mobilizing and sustaining positive changes related to community health and wellbeing. In contrast to approaches that focus on communities' needs or deficits, asset mapping harnesses community resources in order to foster transformation and growth. In this article, the authors analyze asset mapping workshops, which focused on access to food and safe places to be active, that were conducted in two North Carolina (USA) study communities. The authors highlight the results of the workshops and show how they demonstrate the underlying values expressed by participants. Community members differ in what they value within existing community structures and what their priorities are in determining the direction of future efforts. This article argues that an understanding of why organizations are named as exemplary in their improvement of access to healthy foods or places to be active allows community members and leaders to connect assets in ways that are rooted in community values and the realities of existing community and social structures. <s> BIB002
|
Organizations and institutions can also use asset maps to assess and increase the engagement of the communities in which they operate. One study identified the underlying values that connected certain organizations in their ability to increase access to food and safe places to be active; it did so by assessing asset mapping workshops that asked community members to identify exemplary organizations with a beneficial impact in these areas BIB002 . A second paper discussed the experience of Halifax Public Libraries in utilizing asset maps to better engage with the immigrant community in Halifax BIB001 . Upon realizing that service providers were a valuable resource for better understanding the immigrant community, the library created an asset list of the various providers that served the immigrant community and made an effort to build relationships with each resource.
|
Asset Mapping as a Tool for Identifying Resources in Community Health: A Methodological Overview <s> Step 4: Decide on what assets to Include <s> Abstract Background. Childhood obesity is an epidemic. Addressing this problem will require the input of many sectors and change in many behaviors. The “community” must be part of the solution, and the solution must be constructed on existing assets that lend strength to positive environmental change. Objective. To catalyze an established asset-based community partnership to support efforts to reduce television viewing time by developing and providing alternative activities as part of a broader, 3-year study to reduce childhood obesity among preschool-aged children in rural, upstate New York. Method. Asset mapping was utilized to compile an inventory of individual and community strengths upon which a partnership could be established. Facilitated focus group sessions were conducted to better understand childcare environmental policies and practices, and to guide changes conducive to health and fitness. Planning meetings and targeted outreach brought key stakeholders together for a community-participatory initiative to support positive environmental change. Results. The partnership planned and initiated an array of after-school and weekend community activities for preschool-aged children and their families in the weeks preceding, during, and following a designated ‘TV Turn-off’ week in April, 2004 and March, 2005. Conclusion. Methods of asset-based community development are an effective way to engage community participation in public health initiatives. <s> BIB001 </s> Asset Mapping as a Tool for Identifying Resources in Community Health: A Methodological Overview <s> Step 4: Decide on what assets to Include <s> This literature review is a discussion of asset-based approaches to community engagement. 
Following a literature search, we identified several asset mapping approaches: Asset-Based Community Development (ABCD); Participatory Inquiry into Religious Health Assets, Networks and Agency (PIRHANA); Community Health Assets Mapping for Partnerships (CHAMP); the Sustainable Livelihoods Approach (SLA); Planning for Real® and approaches using Geographic Information Systems (GIS). These approaches are framed by assumptions about ‘assets’, ‘needs’, and ‘community’ and their associated community engagement methods that may be influenced by dynamics related to conflict, competition and language. We conclude that asset mapping approaches derive their value from their capacities to support partnership building, consensus creation, and community agency and control. <s> BIB002
|
The classical assets mentioned in the introduction and depicted in Figure 5 highlight different categories that can be explored for the identification of assets. This step helps to identify what types of assets are of interest, based on the purpose of the asset map. For example, if the purpose of the asset map is to identify ways for public libraries to better serve immigrants, then special focus can be placed on identifying other neighbourhood institutions [such as service providers] and local associations that regularly serve the immigrant community BIB001 . By understanding the importance of focusing on certain types of assets, more time and resources can be directed towards more fruitful asset types. Three different methods that can be used to focus on specific types of assets include the storytelling, heritage, and whole-assets approaches , among others BIB002 .
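As a minimal illustration of this step, an asset inventory can be represented as a categorized list that is then filtered down to the asset types chosen for the map's purpose. The categories, entries, and the library/immigrant scenario below are hypothetical examples used only to make the filtering concrete; they are not taken from any cited study.

```python
# Minimal sketch of Step 4: an asset inventory organized by classical
# asset categories, filtered to the types chosen for a given map's
# purpose (here, hypothetically, institutions and associations that
# serve the immigrant community).

from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    category: str              # e.g., "individual", "association", "institution"
    serves: set = field(default_factory=set)  # populations the asset serves

# Hypothetical inventory compiled in an earlier step.
inventory = [
    Asset("Public library ESL program", "institution", {"immigrants"}),
    Asset("Neighbourhood walking club", "association", {"seniors"}),
    Asset("Settlement services agency", "institution", {"immigrants", "refugees"}),
    Asset("Community garden society", "association", {"immigrants"}),
]

def select_assets(assets, categories, population):
    """Keep only assets of the chosen types that serve the target group."""
    return [a.name for a in assets
            if a.category in categories and population in a.serves]

print(select_assets(inventory, {"institution", "association"}, "immigrants"))
```

The design choice mirrors the point made above: narrowing the inventory to the asset types relevant to the map's purpose lets mapping effort go to the most fruitful categories instead of the whole inventory.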
|
Sharing delay information in service systems: a literature survey <s> Do customers appreciate delay announcements? <s> The effect of different temporal and nontemporal cues on individuals' time perception was observed using data on actual and perceived time in retail checkout lines. Findings suggest the importance of considering a time perception approach to consumer behavior. <s> BIB001 </s> Sharing delay information in service systems: a literature survey <s> Do customers appreciate delay announcements? <s> Delays in service are becoming increasingly common; yet their effects on service evaluations are relatively unknown. The author presents a model of the wait experience, which assesses the effects o... <s> BIB002 </s> Sharing delay information in service systems: a literature survey <s> Do customers appreciate delay announcements? <s> Time is a resource. As such, consumers have to make decisions regarding their use of time in the purchase and consumption of goods and services. Using prospect theory and mental accounting as theoretical frameworks, this article investigates whether consumers treat time like money when they make decisions. In a series of studies, we found that the value of consumers' time is not constant but depends on contextual characteristics of the decision situation. Our results also suggest that in deterministic situations, people make decisions involving time losses in a manner consistent with the convex loss function proposed by prospect theory. However, in decision making under conditions of risk, people seem to make risk-averse choices with respect to decisions in the domain of time in contrast to the risk-seeking behavior often found with respect to decisions involving losses of money. We discuss the nonfungibility of time as an explanation for the discrepancy between decisions involving time and those involving money. 
<s> BIB003 </s> Sharing delay information in service systems: a literature survey <s> Do customers appreciate delay announcements? <s> The authors conduct an experimental study to examine the impact of two types of waiting information—waiting-duration information and queuing information—on consumers’ reactions to waits of differen... <s> BIB004 </s> Sharing delay information in service systems: a literature survey <s> Do customers appreciate delay announcements? <s> Abstract One of the primary e-commerce challenges on the World Wide Web is when users experience intolerably long waits for a website's homepage to load. Zona Research, Inc. estimates that over $4 billion in lost revenue is due to slow downloads over the Internet. When the loading time of a homepage exceeds the maximum amount of time that a Web user is willing to wait, a Web user will either redirect the web-browser to an alternative (e.g., competitor's) website or quit using the Web; an opportunity, at the moment and perhaps forever, is lost to not only serve, influence, or interact with, a potential customer, but also to advance the growth of e-commerce. Given the important role of a homepage as a portal to a website or to a host of websites, it is critical that a homepage design consider not only appearance and functionality, but also loading time. The Internet industry has been devoting significant attention to solving the waiting time problem with approaches that are technical or operational in nature, with those most promising being extremely expensive and time consuming to employ (e.g., fiber optic cable). These approaches have not, up to this point in time, yielded the desired results. This research describes a complementary marketing approach to reducing the negative impact of the waiting time problem; one that is based on the psychological theorizing of “anchoring and adjustment,” with implications that would be relatively inexpensive to implement. 
In experiments where all Web users experienced the same actual wait for a homepage to load, those exposed to a shorter waiting time anchor, both perceived as shorter the waiting time for, and evaluated as higher the quality of, a homepage; and when the waiting time anchor was less than the actual waiting time, the perceived waiting time was less than the actual waiting time. In addition, those exposed to the smaller waiting time anchor were more likely to continue searching the associated website as opposed to searching a different website. <s> BIB005 </s> Sharing delay information in service systems: a literature survey <s> Do customers appreciate delay announcements? <s> The truth of this assertion cannot be denied: there can be few consumers of services in a modern society who have not felt, at one time or another, each of the emotions identified by Federal Express' copywriters. What is more, each of us who can recall such experiences can also attest to the fact that the waiting-line experience in a service facility significantly affects our overall perceptions of the quality of service provided. <s> BIB006 </s> Sharing delay information in service systems: a literature survey <s> Do customers appreciate delay announcements? <s> Purpose – The purpose of this paper is to propose and test a model which defines the psychological processes that mediate the relationship between perceived wait duration (PWD) and satisfaction. This model will provide a framework for evaluating the impact of situational and environmental variables in the servicescape on customer reaction to the wait experience.Design/methodology/approach – The approach included one field study and two laboratory experiments in which subjects participated in a service with a pre‐process wait and evaluated their experience on a survey.Findings – Perceived wasted time, perceived control, perceived boredom, and perceived neglect mediated the relationship between PWD and wait experience evaluation. 
When tested using filled versus unfilled wait time as the situational variable, the model showed that having something to do during the wait decreased perceived boredom, resulting in a more positive wait experience.Research limitations/implications – The services used in this paper... <s> BIB007
|
For motivation, we begin by summarizing some of the main findings on the psychological impact of delay announcements. An important maxim in service science is that customers do not like the uncertainty associated with waiting. This finding has been confirmed with airline delays BIB002 , banks , and websites BIB005 . Aversion to the uncertainty in waiting is also underlined as one of the axioms in Maister BIB006 . More generally, Leclerc et al. BIB003 provide empirical evidence (via an experimental study) that waiting may be viewed as a cost by delayed individuals. Delay announcements are useful because they are a means of reducing that undesirable uncertainty. Another psychological benefit of delay announcements relates to the distinction between perceived time and actual time BIB001 . To be specific, the relationship between the perception of time and the evaluation of the waiting experience is mediated by several factors, including the perceived control over time BIB007 . Delay announcements are beneficial because they enable customers to have increased control over their waits. For example, if the announced waiting time is sufficiently long, a customer may elect to perform other tasks while waiting. Thus, customer waits may be perceived to be shorter. Even in settings where the delay information has no impact on the perceived duration of the wait, it typically has an impact on both the acceptability of the wait and the affective response to waiting BIB004 . Moreover, the announcements are usually helpful because they provide customers with a sense of progress during their waiting experiences .
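For concreteness, one simple way such an announcement can be generated is a queue-length-based estimate. The sketch below is a textbook illustration under stated assumptions, not a method from the cited studies: it assumes a first-come-first-served system with s identical servers, each with exponential service times at rate mu, so that an arrival who finds n customers waiting ahead must wait for n + 1 service completions by the server pool, giving an expected delay of (n + 1) / (s * mu).

```python
# Hedged sketch: a naive delay announcement for an FCFS system with s
# identical busy servers, each with service rate mu (services per minute).
# A customer who finds n customers already waiting must wait for n + 1
# service completions by the pool, which occur at rate s * mu, so the
# expected delay is (n + 1) / (s * mu).

def announced_delay(n_waiting, servers, mu):
    """Expected waiting time (in the same time unit as 1/mu) quoted to
    an arriving customer who finds n_waiting customers ahead in queue."""
    if servers <= 0 or mu <= 0:
        raise ValueError("servers and mu must be positive")
    return (n_waiting + 1) / (servers * mu)

# Example: 6 customers waiting, 2 agents, each serving 0.5 customers/min.
print(announced_delay(6, 2, 0.5))  # -> 7.0 minutes
```

Even this crude estimate illustrates the psychological points above: it converts an uncertain wait into a concrete figure the customer can plan around, which is what gives the announcement its value.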
|
Sharing delay information in service systems: a literature survey <s> Focus and aim <s> SOME DISCUSSION has arisen recently as to whether the imposition of an "entrance fee" on arriving customers who wish to be serviced by a station and hence join a waiting line is a rational measure. Not much of this discussion has appeared in print; indeed this author is aware of only three short communications, representing an exchange of arguments between Leeman [1, 2] and Saaty [3]. The ideas advanced there were of qualitative character and no attempt was made to quantify the arguments. The problem under consideration is obviously analogous to one that arises in connection with the control of vehicular traffic congestion on a road network. It has been argued by traffic economists that the individual car driver, on making an optimal routing choice for himself, does not optimize the system at large. The purpose of this communication is to demonstrate that, indeed, analogous conclusions can be drawn for queueing models if two basic conditions are satisfied: <s> BIB001 </s> Sharing delay information in service systems: a literature survey <s> Focus and aim <s> The relationship between Pareto optimal (θs) and revenue maximizing (θr) tolls is examined for queuing models that permit balking. When customers have the same value for waiting time, θs = θr provided the entrepreneur can impose a simple two-part tariff. With heterogeneous values for waiting time, θr can be greater than, equal to, or less than θs. Expanding the number of servers and charging multi-part tariffs are shown to be alternative methods for segmenting the market, and the welfare implications of these two strategies are explored. <s> BIB002 </s> Sharing delay information in service systems: a literature survey <s> Focus and aim <s> We consider a memoryless two-line system with threshold jockeying.
Upon arrival each customer decides whether to purchase the information about which line is shorter, or randomly selects one of the lines. Since the decision of a customer is affected by the decision of the others, we are interested in Nash-equilibrium policies. Indeed, we show explicitly how to find these policies. We are also interested in the externalities imposed by an informed customer on the others. We derive an explicit expression for these externalities in the case that jockeying takes place as soon as the lines differ by three. Some of the results may seem to be counterintuitive. For example, when the threshold is three, the value of information may increase with the portion of informed customers <s> BIB003 </s> Sharing delay information in service systems: a literature survey <s> Focus and aim <s> Preface. 1. Introduction. 2. Observable Queues. 3. Unobservable Queues. 4. Priorities. 5. Reneging and Jockeying. 6. Schedules and Retrials. 7. Competition Among Servers. 8. Service Rate Decisions. Index. <s> BIB004 </s> Sharing delay information in service systems: a literature survey <s> Focus and aim <s> A classic example that illustrates how observed customer behavior impacts other customers' decisions is the selection of a restaurant whose quality is uncertain. Customers often choose the busier restaurant, inferring that other customers in that restaurant know something that they do not. In an environment with random arrival and service times, customer behavior is reflected in the lengths of the queues that form at the individual servers. Therefore, queue lengths could signal two factors---potentially higher arrivals to the server or potentially slower service at the server. In this paper, we focus on both factors when customers' waiting costs are negligible. This allows us to understand how information externalities due to congestion impact customers' service choice behavior. 
In our model, based on private information about both the service-quality and queue-length information, customers decide which queue to join. When the service rates are the same and known, we confirm that it may be rational to ignore private information and purchase from the service provider with the longer queue when only one additional customer is present in the longer queue. We find that, due to the information externalities contained in queue lengths, there exist cycles during which one service firm is thriving whereas the other is not. Which service provider is thriving depends on luck; i.e., it is determined by the private signal of the customer arriving when both service providers are idle. These phenomena continue to hold when each service facility has multiple servers, or when a facility may go out of business when it cannot attract customers for a certain amount of time. Finally, we find that when the service rates are unknown but are negatively correlated with service values, our results are strengthened; long queues are now doubly informative. The market share of the high-quality firm is higher when there is service rate uncertainty, and it increases as the service rate decreases. When the service rates are positively correlated with unknown service values, long queues become less informative and customers might even join shorter queues. <s> BIB005 </s> Sharing delay information in service systems: a literature survey <s> Focus and aim <s> Classical models of customer decision making in unobservable queues assume acquiring queue length information is too costly. However, due to recent advancements in communication technology, various services now make this kind of information accessible to customers at a reasonable cost. In our model, which reflects this new opportunity, customers choose among three options: join the queue, balk, or inspect the queue length before deciding whether to join. Inspection is associated with a cost.
We compute the equilibrium in this model and prove its existence and uniqueness. Based on two normalized parameters—congestion and service valuation—we map all possible input parameter sets into three scenarios. Each scenario is characterized by a different impact of inspection cost on equilibrium and revenue-maximization queue disclosure policy: fully observable (when inspection cost is very low), fully unobservable (when inspection cost is too high), or observable by demand (when inspection cost is at an intermediat... <s> BIB006 </s> Sharing delay information in service systems: a literature survey <s> Focus and aim <s> The first-in, first-out (FIFO) queue discipline respects the order of arrival but is not efficient when customers have heterogeneous waiting costs. Priority queues, in which customers with higher waiting costs are served before customers with lower waiting costs, are more efficient but usually involve undesirable queue-jumping behaviors that violate bumped customers' property rights over their waiting spots. To have the best of both worlds, we propose time trading mechanisms, in which customers who are privately informed about their waiting costs mutually agree on the ordering in the queue by trading positions. If a customer ever moves back in the queue, she will receive an appropriate monetary compensation. Customers can always decide not to participate in trading and retain their positions as if they are being served FIFO. We design optimal mechanisms for the social planner, the service provider, and an intermediary who might mediate the trading platform. Both the social planner's and the service provider's optimal mechanisms involve a flat admission fee and an auction that implements strict priority. If a revenue-maximizing intermediary operates the trading platform, it should charge a trade participation fee and implement an auction with some restrictions on customer trade. Therefore, customers are not strictly prioritized. 
However, relative to a FIFO system, the intermediary delivers value to the social planner by improving efficiency, and to the service provider by increasing its revenue. <s> BIB007
|
In this survey, we restrict attention to papers where the firm decides on whether and how to communicate delay information to its customers. In particular, customers cannot search for this information, nor can they acquire it themselves, for example, as in Hassin and Haviv BIB003 , Hassin and Roet-Green BIB006 , and Yang et al. BIB007 . We restrict attention to sharing waiting-time information, and do not include papers which consider alternative forms of shared information, such as information on the service quality or the service rate; see, for example, Hassin and Veeraraghavan and Debo BIB005 . Also, because the queueing-theoretic literature which studies properties of waiting times is vast, we restrict attention to papers that relate specifically to delay announcements. The first mathematical model of a queueing system with rational customers is Naor BIB001 , where the queue is assumed to be observable to customers; the first unobservable model is studied in Edelson and Hildebrand BIB002 . Numerous extensions to both models have been considered in the queueing-games literature, and the majority of those papers are relevant, albeit indirectly, to the problem of sharing delay information in queueing systems; see Hassin and Haviv BIB004 and Hassin for comprehensive surveys. Of those papers, we consider only the ones which compare, in a broad sense, the observable and unobservable models. Essentially, this amounts to quantifying the value of sharing delay information. In this survey paper, our objectives are: (i) to classify and systematically review the relevant papers; (ii) to identify the main challenges entailed in the different approaches to the problem; (iii) to synthesize some key findings of the literature; and (iv) to identify gaps in the literature and formulate promising directions for future research.
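The comparison between the observable model of Naor and the unobservable model of Edelson and Hildebrand can be made concrete numerically. The sketch below is a minimal illustration, not taken from any of the surveyed papers: it assumes an M/M/1 queue with arrival rate lam, service rate mu, a service reward R, a linear waiting cost C per unit of sojourn time, and risk-neutral customers, and computes the equilibrium throughput under each information regime.

```python
import math

def observable_throughput(lam, mu, R, C):
    # Naor's observable M/M/1: an arriving customer who sees n customers in
    # the system joins iff n <= n_e - 1, where n_e = floor(R * mu / C), so the
    # system behaves as an M/M/1/n_e loss queue.
    n_e = math.floor(R * mu / C)
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        p_full = 1.0 / (n_e + 1)  # uniform stationary distribution when rho = 1
    else:
        p_full = rho ** n_e * (1 - rho) / (1 - rho ** (n_e + 1))
    return lam * (1 - p_full), n_e  # admitted-arrival rate, joining threshold

def unobservable_throughput(lam, mu, R, C):
    # Edelson-Hildebrand unobservable M/M/1: customers join with probability q.
    # At an interior equilibrium the marginal customer is indifferent:
    # R = C / (mu - lam * q), i.e. q = (mu - C / R) / lam, clamped to [0, 1].
    q = max(0.0, min(1.0, (mu - C / R) / lam))
    return lam * q, q  # effective arrival rate, equilibrium joining probability

if __name__ == "__main__":
    lam, mu, R, C = 1.0, 1.2, 4.0, 2.0  # illustrative parameters
    thr_obs, n_e = observable_throughput(lam, mu, R, C)
    thr_unobs, q = unobservable_throughput(lam, mu, R, C)
    print(f"observable: threshold n_e={n_e}, throughput={thr_obs:.4f}")
    print(f"unobservable: joining prob q={q:.2f}, throughput={thr_unobs:.4f}")
```

With these particular parameters the observable regime admits slightly more throughput than the unobservable one; as the survey emphasizes, which regime dominates (and under which objective) depends on the parameters.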
|
Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> The authors conduct an experimental study to examine the impact of two types of waiting information—waiting-duration information and queuing information—on consumers’ reactions to waits of differen... <s> BIB001 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> This paper investigates the possibility of predicting each customer's waiting time in queue before starting service in a multiserver service system with the first-come first-served service discipline, such as a telephone call center. A predicted waiting-time distribution or an appropriate summary statistic such as the mean or the 90th percentile may be communicated to the customer upon arrival and possibly thereafter in order to improve customer satisfaction. The predicted waiting-time distribution may also be used by the service provider to better manage the service system, e.g., to help decide when to add additional service agents. The possibility of making reliable predictions is enhanced by exploiting information about system state, including the number of customers in the system ahead of the current customer. Additional information beyond the number of customers in the system may be obtained by classifying customers and the service agents to which they are assigned. For nonexponential service times, the elapsed service times of customers in service can often be used to advantage to compute conditional-remaining-service-time distributions. Approximations are proposed to convert the distributions of remaining service times into the distribution of the desired customer waiting time. The analysis reveals the advantage from exploiting additional information. 
<s> BIB002 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> It takes time to process purchases and as a result a queue of customers may form. The pricing and capacity (service rate) decision of a monopolist who must take this into account are characterized. We find that an increase in the average number of customers arriving in the market either has no effect on the price, or else causes the firm to reduce the price in the short run. In the long run the firm will increase capacity and raise the price. When customer preferences are linear, the equilibrium is socially efficient. When preferences are not linear, the equilibrium will not normally be socially efficient. <s> BIB003 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> The widespread adoption of Enterprise Resource Planning (ERP) systems has, among many other benefits, increased the ability of a firm to share operational data with customers. In this paper we analyze the factors that determine whether or not sharing a specific type of information, namely state-dependent lead time information, can benefit a firm. We develop a stochastic model of a custom-production environment, in which customers are handled on a first-come first-served basis but have differing tolerances for waiting. The firm has the option to share different amounts of information about the lead time a potential customer may incur. Although the information differs across scenarios, the reliability of that information in terms of the probability that a stated lead time is met is equal in the eyes of the customers. We derive conditions under which sharing more information with customers improves the firm's profits and the customers' experiences. We show that it is not always the case that sharing information improves the lot of the firm. 
We show that when customers' tolerances for waiting are more heterogeneous then the benefit to the firm from sharing lead time information increases. Our conclusion is that management should only authorize sharing detailed lead time information, be it through information system integration or frontline sales people, after a careful analysis of a customer's sensitivity to delay. <s> BIB004 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> Information about delays can enhance service quality in many industries. Delay information can take many forms, with different degrees of precision. Different levels of information have different effects on customers and therefore on the overall system. To explore these effects, we consider a queue with balking under three levels of delay information: no information, partial information (the system occupancy), and full information (the exact waiting time). We assume Poisson arrivals, independent exponential service times, and a single server. Customers decide whether to stay or balk based on their expected waiting costs, conditional on the information provided. We show how to compute the key performance measures in the three systems, obtaining closed-form solutions for special cases. We then compare the three systems. We identify some important cases where more accurate delay information improves performance. In other cases, however, information can actually hurt the provider or the customers. <s> BIB005 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> We consider a single server Markovian queue with setup times. Whenever this system becomes empty, the server is turned off. Whenever a customer arrives to an empty system, the server begins an exponential setup time to start service again. 
We assume that arriving customers decide whether to enter the system or balk based on a natural reward-cost structure, which incorporates their desire for service as well as their unwillingness to wait. We examine customer behavior under various levels of information regarding the system state. Specifically, before making the decision, a customer may or may not know the state of the server and/or the number of present customers. We derive equilibrium strategies for the customers under the various levels of information and analyze the stationary behavior of the system under these strategies. We also illustrate further effects of the information level on the equilibrium behavior via numerical experiments. <s> BIB006 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> This article generalizes the models in Guo and Zipkin, who focus on exponential service times, to systems with phase-type service times. Each arriving customer decides whether to stay or balk based on his expected waiting cost, conditional on the information provided. We show how to compute the throughput and customers' average utility in each case. We then obtain some analytical and numerical results to assess the effect of more or less information. We also show that service-time variability degrades the system's performance. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008
The insights from this analytic model are supported by simulation results that show that large gains can be made with low levels of flexibility. The potential implications of these results for two motivating examples are discussed. <s> BIB008 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> We use heavy-traffic limits and computer simulation to study the performance of alternative real-time delay estimators in the overloaded GI/GI/s+GI multiserver queueing model, allowing customer abandonment. These delay estimates may be used to make delay announcements in call centers and related service systems. We characterize performance by the expected mean squared error in steady state. We exploit established approximations for performance measures with a nonexponential abandonment-time distribution to obtain new delay estimators that effectively cope with nonexponential abandonment-time distributions. <s> BIB009 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> We develop new improved real-time delay estimators, based on recent customer delay history, in many-server service systems with time-varying arrivals, both with and without customer abandonment. These delay estimators may be used to make delay announcements. We model the arrival process by a nonhomogeneous Poisson process, which has a deterministic time-varying arrival-rate function. Our estimators eectively cope with time-varying arrivals <s> BIB010 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> Information systems have been widely adopted to support service processes in various domains, e.g., in the telecommunication, finance, and health sectors. 
Recently, work on process mining showed how management of these processes, and engineering of supporting systems, can be guided by models extracted from the event logs that are recorded during process operation. In this work, we establish a queueing perspective in operational process mining. We propose to consider queues as first-class citizens and use queueing theory as a basis for queue mining techniques. To demonstrate the value of queue mining, we revisit the specific operational problem of online delay prediction: using event data, we show that queue mining yields accurate online predictions of case delay. <s> BIB011 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> The problem of estimating delays experienced by customers with different priorities, and the determination of the appropriate delay announcement to these customers, in a multi-class call center with time varying parameters, abandonments, and retrials is considered. The system is approximately modeled as an M(t)/M/s(t) queue with priorities, thus ignoring some of the real features like abandonments and retrials. Two delay estimators are proposed and tested in a series of simulation experiments. Making use of actual state-dependent waiting time data from this call center, the delay announcements from the estimated delay distributions that minimize a newsvendor-like cost function are considered. The performance of these announcements is also compared to announcing the mean delay. We find that an Erlang distribution-based estimator performs well for a range of different under-announcement penalty to over-announcement penalty ratios. <s> BIB012 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> Information systems have been widely adopted to support service processes in various domains, e.g., in the telecommunication, finance, and health sectors. 
Information recorded by systems during the operation of these processes provides an angle for operational process analysis, commonly referred to as process mining. In this work, we establish a queueing perspective in process mining to address the online delay prediction problem, which refers to the time that the execution of an activity for a running instance of a service process is delayed due to queueing effects. We present predictors that treat queues as first-class citizens and either enhance existing regression-based techniques for process mining or are directly grounded in queueing theory. In particular, our predictors target multi-class service processes, in which requests are classified by a type that influences their processing. Further, we introduce queue mining techniques that derive the predictors from event logs recorded by an information system during process execution. Our evaluation based on large real-world datasets, from the telecommunications and financial sectors, shows that our techniques yield accurate online predictions of case delay and drastically improve over predictors neglecting the queueing perspective. <s> BIB013 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> This paper proposes the Q-Lasso method for wait time prediction, which combines statistical learning with fluid model estimators. In historical data from four remarkably different hospitals, Q-Lasso predicts the emergency department (ED) wait time for low-acuity patients with greater accuracy than rolling average methods currently used by hospitals, fluid model estimators from the service operations management literature, and quantile regression methods from the emergency medicine literature. Q-Lasso achieves greater accuracy largely by correcting errors of underestimation in which a patient waits for longer than predicted.
Implemented on the external website and in the triage room of the San Mateo Medical Center (SMMC), Q-Lasso achieves over 30% lower mean squared prediction error than would occur with the best rolling average method. The paper describes challenges and insights from the implementation at SMMC. <s> BIB014 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> We are interested in predicting the wait time of customers upon their arrival in some service system such as a call center or emergency service. We propose two new predictors that are very simple to implement and can be used in multiskill settings. They are based on the wait times of previous customers of the same class. The first one estimates the delay of a new customer by extrapolating the wait history (so far) of customers currently in queue, plus the last one that started service, and taking a weighted average. The second one takes a weighted average of the delays of the past customers of the same class that have found the same queue length when they arrived. In our simulation experiments, these new predictors are very competitive with the optimal ones for a simple queue, and for multiskill centers they perform better than other predictors of comparable simplicity. <s> BIB015 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> Motivated by the recent interest in making delay announcements in large service systems, such as call centers, we investigate the accuracy of announcing the waiting time of the last customer to enter service (LES). In practice, customers typically respond to delay announcements by either balking or by becoming more or less impatient, and their response alters system performance. We study the accuracy of the LES announcement in single-class, multiserver Markovian queueing models with announcement-dependent customer behavior.
We show that, interestingly, even in this stylized setting, the LES announcement may not always be accurate. This motivates the need to study its accuracy carefully and to determine conditions under which it is accurate. Since the direct analysis of the system with customer response is prohibitively difficult, we focus on many-server, heavy-traffic analysis instead. We consider the quality-and-efficiency-driven and efficiency-driven many-server, heavy-traffic regimes and prove, under b... <s> BIB016 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> We explore whether customers are loss averse in time and how the amount of delay information available may impact such reference-dependent behavior by conducting a field experiment at a call center. Our results show that customers exhibit loss averse regardless of the availability or accuracy of the delay information. While delay announcements may not alter the fact that customers are loss averse, they do seem to impact the reference points customers use when the announcements are accurate. However, when those announcements are not accurate, customers may completely disregard them. <s> BIB017 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> We study how to use delay announcements to manage customer expectations while allowing the firm to prioritize among customers with different sensitivities to time and value. We examine this problem by developing a framework which characterizes the strategic interaction between the firm and heterogeneous customers. When the firm has information about the state of the system, yet lacks information on customer types, delay announcements play a dual role: they inform customers about the state of the system, while they also have the potential to elicit information on customer types based on their response to the announcements. 
The tension between these two goals has implications to the type of information that can be shared credibly. To explore the value of the information on customer types, we also study a model where the firm can observe customer types. We show that having information on the customer type may improve or hurt the credibility of the firm. While the creation of credibility increases the firm's profit, the loss of credibility does not necessarily hurt its profit. <s> BIB018 </s> Sharing delay information in service systems: a literature survey <s> A bird's eye view: key insights <s> We investigate the impact of delay announcements on the coordination within hospital networks using a combination of empirical observations and numerical experiments. We offer empirical evidence th... <s> BIB019
|
In what follows, we synthesize some key findings of the literature, i.e., we address objective (iii) above; detailed descriptions of each of the papers referenced below are relegated to later sections of this survey.

Heterogeneity can be exploited through the announcements. In a setting where delay information is shared with customers, one general insight is that alternative levels of heterogeneity can be effectively managed, through the provision of delay announcements, to lead to superior outcomes. In that sense, the announcements may be viewed as a type of pricing tool which segments the customer population in an appropriate way, for example, as in priority pricing . For one example, with a homogeneous customer population, the manager can benefit from "creating" heterogeneity by controlling the breadth of shared real-time congestion information; indeed, having both informed and uninformed customers can lead to improved throughput, social welfare, or operational performance . For another example, heterogeneity in customers' tolerances for waiting can be effectively managed through the provision of delay announcements to lead to increased throughput and social welfare BIB004 BIB005 . For yet another example, unobservable heterogeneity in customer types (reward from service and waiting cost) can be managed by the announcements to lead to increased profits BIB018 . For a last example, heterogeneity in the service capacities of two competing service providers makes sharing real-time delay information beneficial, for both market share and operational performance, for the low-capacity firm .

More information is not always better. One may have different objectives in mind when assessing the value of providing delay information, and those objectives may be impacted by that information in different ways. While the value of information provision is usually context-dependent, one general principle is that providing more information need not always lead to improved performance and may even be detrimental. From a human psychology angle, customers do not always prefer more granular information BIB014 BIB001 . For both social welfare and throughput, less granular delay information may be beneficial ( BIB006 BIB003 BIB005 BIB007 , etc.). Moreover, non-verifiable and non-quantifiable information may improve both the firm's profit and the expected utility of customers . Finally, under certain conditions, providing delay information may make the system more volatile and can lead to longer delays on average BIB019 BIB008 .

Of course, providing delay announcements helps in many cases. In particular, another general insight is that providing real-time delay information usually yields the greatest benefit, for example, for profit, social welfare, and throughput, when the system experiences heavy congestion ( BIB003 BIB008 , etc.). Also, from an accuracy perspective, various delay predictors can be proved to have superior performance under such high-congestion conditions as well, particularly when the system is large BIB009 .

There is no single "best" announcement. There is no universal best way to predict waiting times, and the accuracy of a specific announcement depends on both the amount of state information available and the specific modeling context BIB002 . Thus, there is a need to consider several such contexts and to study performance under each specific setting. There are also different measures of performance, ranging from the average error, for example, using the mean squared error (MSE), to penalizing under- or overestimation BIB012 . In broad terms, under the MSE criterion, and conditional on some system-state information, for example, the queue length, the conditional expectation of the waiting time given that information is the most accurate prediction. While calculating conditional expectations is possible under certain conditions, it is, generally, a difficult task. Moreover, the resulting conditional expected values tend to perform poorly when the specific modeling assumptions under which they were derived fail to hold . Thus, one needs to consider alternative, and simpler, ways to predict delays, for example, by exploiting the recent history of delays in the system . Such delay-history-based predictions can perform remarkably well, for example, in large, heavily congested systems with or without customer abandonment, even when customers respond to the announcements BIB016 . There is also some empirical evidence substantiating their good performance in practice BIB011 BIB013 . However, they do not perform well in other settings, such as when the system is small or lightly loaded BIB015 BIB017 , or under time-varying conditions BIB010 . The main takeaway is this: while the literature does not give a conclusive answer as to what type of announcement to use under all circumstances, it does provide valuable insights on the appropriateness of various announcements in different settings.
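The contrast between a model-based conditional expectation and a simple delay-history predictor such as LES can be illustrated in simulation. The sketch below is an assumption-laden toy, not an experiment from any of the surveyed papers: it simulates an M/M/1 FIFO queue via the Lindley recursion and compares, under the MSE criterion, the queue-length predictor E[W | n in system] = n/mu (exact for M/M/1 by memorylessness) against the LES predictor, which announces the realized delay of the last customer to have entered service (taken as 0 before anyone has started service).

```python
import random
from collections import deque

def simulate_mm1(lam, mu, n_customers, seed=2024):
    # Simulate waiting times in an M/M/1 FIFO queue (Lindley recursion) and
    # record, for each arrival, two announcements: the queue-length (QL)
    # prediction n / mu and the last-to-enter-service (LES) prediction.
    rng = random.Random(seed)
    t = prev_arrival = prev_wait = prev_service = 0.0
    in_system = deque()     # departure times of customers still present (FIFO order)
    waits, starts = [], []  # realized waits and service-start times, by arrival order
    ql_pred, les_pred = [], []
    les_idx, last_les = 0, 0.0
    for i in range(n_customers):
        t += rng.expovariate(lam)
        # Lindley recursion for the waiting time in queue
        w = 0.0 if i == 0 else max(0.0, prev_wait + prev_service - (t - prev_arrival))
        # drop customers who departed before this arrival, then count the rest
        while in_system and in_system[0] <= t:
            in_system.popleft()
        ql_pred.append(len(in_system) / mu)  # exact E[W | n in system] for M/M/1
        # LES: wait of the most recent customer to have entered service by time t
        while les_idx < len(starts) and starts[les_idx] <= t:
            last_les = waits[les_idx]
            les_idx += 1
        les_pred.append(last_les)
        s = rng.expovariate(mu)
        waits.append(w)
        starts.append(t + w)
        in_system.append(t + w + s)
        prev_arrival, prev_wait, prev_service = t, w, s
    return waits, ql_pred, les_pred

def mse(actual, predicted):
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

if __name__ == "__main__":
    waits, ql, les = simulate_mm1(lam=0.9, mu=1.0, n_customers=20000)
    mean_w = sum(waits) / len(waits)
    print(f"MSE, constant mean predictor: {mse(waits, [mean_w] * len(waits)):.2f}")
    print(f"MSE, queue-length predictor : {mse(waits, ql):.2f}")
    print(f"MSE, LES predictor          : {mse(waits, les):.2f}")
```

In this heavily loaded single-server example both state-based predictors capture most of the variability in the realized waits; the survey's broader point is that such rankings are setting-dependent, and delay-history predictors like LES can degrade in small, lightly loaded, or time-varying systems.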
|
Sharing delay information in service systems: a literature survey <s> Data methods and queueing-theoretic methods are complementary. <s> Motivated by practices in customer contact centers, we consider a system that offers two modes of service: real-time and postponed with a delay guarantee. Customers are informed of anticipated delays and select their preferred option of service. The resulting system is a multiclass, multiserver queueing system with state-dependent arrival rates. We propose an estimation scheme for the anticipated real-time delay that is asymptotically correct, and a routing policy that is asymptotically optimal in the sense that it minimizes real-time delay subject to the deadline of the postponed service mode. We also show that our proposed state-dependent scheme performs better than a system in which customers make decisions based on steady-state waiting-time information. Our results are derived using an asymptotic analysis based on "many-server" limits for systems with state-dependent parameters. <s> BIB001 </s> Sharing delay information in service systems: a literature survey <s> Data methods and queueing-theoretic methods are complementary. <s> Information systems have been widely adopted to support service processes in various domains, e.g., in the telecommunication, finance, and health sectors. Recently, work on process mining showed how management of these processes, and engineering of supporting systems, can be guided by models extracted from the event logs that are recorded during process operation. In this work, we establish a queueing perspective in operational process mining. We propose to consider queues as first-class citizens and use queueing theory as a basis for queue mining techniques. To demonstrate the value of queue mining, we revisit the specific operational problem of online delay prediction: using event data, we show that queue mining yields accurate online predictions of case delay. 
<s> BIB002 </s> Sharing delay information in service systems: a literature survey <s> Data methods and queueing-theoretic methods are complementary. <s> Information systems have been widely adopted to support service processes in various domains, e.g., in the telecommunication, finance, and health sectors. Information recorded by systems during the operation of these processes provides an angle for operational process analysis, commonly referred to as process mining. In this work, we establish a queueing perspective in process mining to address the online delay prediction problem, which refers to the time that the execution of an activity for a running instance of a service process is delayed due to queueing effects. We present predictors that treat queues as first-class citizens and either enhance existing regression-based techniques for process mining or are directly grounded in queueing theory. In particular, our predictors target multi-class service processes, in which requests are classified by a type that influences their processing. Further, we introduce queue mining techniques that derive the predictors from event logs recorded by an information system during process execution. Our evaluation based on large real-world datasets, from the telecommunications and financial sectors, shows that our techniques yield accurate online predictions of case delay and drastically improve over predictors neglecting the queueing perspective. <s> BIB003 </s> Sharing delay information in service systems: a literature survey <s> Data methods and queueing-theoretic methods are complementary. <s> This paper proposes the Q-Lasso method for wait time prediction, which combines statistical learning with fluid model estimators. 
In historical data from four remarkably different hospitals, Q-Lasso predicts the emergency department (ED) wait time for low-acuity patients with greater accuracy than rolling average methods currently used by hospitals, fluid model estimators from the service operations management literature, and quantile regression methods from the emergency medicine literature. Q-Lasso achieves greater accuracy largely by correcting errors of underestimation in which a patient waits for longer than predicted. Implemented on the external website and in the triage room of the San Mateo Medical Center (SMMC), Q-Lasso achieves over 30% lower mean squared prediction error than would occur with the best rolling average method. The paper describes challenges and insights from the implementation at SMMC. <s> BIB004 </s> Sharing delay information in service systems: a literature survey <s> Data methods and queueing-theoretic methods are complementary. <s> We undertake an empirical study of the impact of delay announcements on callers’ abandonment behavior and the performance of a call center with two priority classes. A Cox regression analysis reveals that in this call center, callers’ abandonment behavior is affected by the announcement messages heard. To account for this, we formulate a structural estimation model of callers’ (endogenous) abandonment decisions. In this model, callers are forward-looking utility maximizers and make their abandonment decisions by solving an optimal stopping problem. Each caller receives a reward from service and incurs a linear cost of waiting. The reward and per-period waiting cost constitute the structural parameters that we estimate from the data of callers’ abandonment decisions as well as the announcement messages heard. The call center performance is modeled by a Markovian approximation.
The main methodological contribution is the definition of an equilibrium in steady state as one where callers’ expectation of their waiting time, which affects their (rational) abandonment behavior, matches their actual waiting time in the call center, as well as the characterization of such an equilibrium as the solution of a set of nonlinear equations. A counterfactual analysis shows that callers react to longer delay announcements by abandoning earlier, that less patient callers as characterized by their reward and cost parameters react more to delay announcements, and that congestion in the call center at the time of the call affects caller reactions to delay announcements. <s> BIB005 </s> Sharing delay information in service systems: a literature survey <s> Data methods and queueing-theoretic methods are complementary. <s> In this paper, we explore the impact of delay announcements using an empirical approach by analyzing the data from a medium-sized call center. We first explore the question of whether delay announcements impact customers’ behavior using a nonparametric approach. The answer to this question appears to be ambiguous. We thus turn to investigate the fundamental mechanism by which delay announcements impact customer behavior, by constructing a dynamic structural model. In contrast to the implicit assumption made in the literature that announcements do not directly impact customers’ waiting costs, our key insights show that delay announcements not only impact customers’ beliefs about the system but also directly impact customers’ waiting costs. In particular, customers’ per-unit waiting cost decreases with the offered waiting times associated with the announcements. The results of our counterfactual analysis show that it may not be necessary to provide announcements with very fine granularity. This paper was accepted by Yossi Aviv, operations management . <s> BIB006
|
The recent proliferation of empirical studies, in the context of delay announcements, prompts one to evaluate the alternative methods that are used to address that problem. In broad terms, the literature ranges from analytical work, typically substantiated by simulation-based results ( BIB001 , etc.), to empirical work in the context of a well-defined structural model ( BIB005 BIB006 , etc.), to work which relies, for the most part, on data-mining methods BIB004 BIB002 BIB003 . Each body of work is important in its own right, and it is crucial to emphasize the complementarity of those different approaches. Indeed, while relying on queueing models is instrumental to gain insight into performance and, importantly, allows for a mathematical framework through which controlling that performance is made possible, queueing-theoretic methods typically lack robustness in that they remain intimately tied to the specific technical assumptions under which the analysis is derived. Empirical studies in the context of a well-defined structural model of customer utility have been instrumental in both validating existing models of customer response to the announcements, and extending those models as well. Grounded in both empirical evidence and theoretical analysis, they enable a better management of delay announcements in practice. Data-mining methods are clearly superior in terms of accuracy. Thus, if accuracy is the sole objective in mind, then there seems to be little value in going beyond them. However, data-mining techniques are limited in that they are "black-box" techniques that do not, in general, further our understanding about the dynamics of the system. Recently, the combination of those two frameworks (queueing and data-based) has been advocated in several papers BIB004 BIB002 BIB003 .
Indeed, the delay predictors in those papers are inspired by both queueing-theoretic methods and data-mining techniques and are shown to yield superior performance with real-life data sets. In the same spirit, Bassamboo and Ibrahim propose a correlation-based approach to quantify the accuracy of delay announcements across different queueing models. That approach enables an easier assessment of that accuracy with real-life data, which circumvents the need to fit entire queueing models to data in order to gain insight into performance.
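In that spirit, a minimal way to gauge announcement accuracy from data, without fitting a full queueing model, is to correlate announced and realized delays over a data set; the helper below is a plain Pearson correlation for illustration, not the specific statistic of Bassamboo and Ibrahim.

```python
from math import sqrt

def pearson(x, y):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def announcement_accuracy(announced, realized):
    """Correlation between announced and realized delays; values close
    to 1 indicate the announcement tracks the actual delays closely."""
    return pearson(announced, realized)
```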
|
Sharing delay information in service systems: a literature survey <s> Customers as queued entities <s> Praise for the Third Edition: "This is one of the best books available. Its excellent organizational structure allows quick reference to specific models and its clear presentation . . . solidifies the understanding of the concepts being presented."IIE Transactions on Operations EngineeringThoroughly revised and expanded to reflect the latest developments in the field, Fundamentals of Queueing Theory, Fourth Edition continues to present the basic statistical principles that are necessary to analyze the probabilistic nature of queues. Rather than presenting a narrow focus on the subject, this update illustrates the wide-reaching, fundamental concepts in queueing theory and its applications to diverse areas such as computer science, engineering, business, and operations research.This update takes a numerical approach to understanding and making probable estimations relating to queues, with a comprehensive outline of simple and more advanced queueing models. Newly featured topics of the Fourth Edition include:Retrial queuesApproximations for queueing networksNumerical inversion of transformsDetermining the appropriate number of servers to balance quality and cost of serviceEach chapter provides a self-contained presentation of key concepts and formulae, allowing readers to work with each section independently, while a summary table at the end of the book outlines the types of queues that have been discussed and their results. In addition, two new appendices have been added, discussing transforms and generating functions as well as the fundamentals of differential and difference equations. 
New examples are now included along with problems that incorporate QtsPlus software, which is freely available via the book's related Web site.With its accessible style and wealth of real-world examples, Fundamentals of Queueing Theory, Fourth Edition is an ideal book for courses on queueing theory at the upper-undergraduate and graduate levels. It is also a valuable resource for researchers and practitioners who analyze congestion in the fields of telecommunications, transportation, aviation, and management science. <s> BIB001 </s> Sharing delay information in service systems: a literature survey <s> Customers as queued entities <s> Experiencing Statistical Regularity.- Random Walks in Applications.- The Framework for Stochastic-Process Limits.- A Panorama of Stochastic-Process Limits.- Heavy-Traffic Limits for Fluid Queues.- Unmatched Jumps in the Limit Process.- More Stochastic-Process Limits.- Fluid Queues with On-Off Sources.- Single-Server Queues.- Multi-Server Queues.- More on the Mathematical Framework.- The Space D Useful Functions.- Queueing Networks.- The Spaces E and F.- Appendices. <s> BIB002
|
We begin by surveying papers that treat customers as queued entities that do not react to the announcements that they receive. For the most part, this branch of the literature focuses on studying ways of accurately predicting future waiting times. This is important for two main reasons: (i) from a practical perspective, systematically making inaccurate announcements may lead to customer distrust in those announcements and, ultimately, customer dissatisfaction with the service provided; and (ii) from an analytical perspective, studying waiting times in queueing systems allows for the derivation of structural results which are useful for our general understanding of those models. In broad terms, two types of methods are typically used for predicting waiting times: queueing-theoretic and data-based. For queueing-theoretic methods ( § 2.2), the focus is on systematically considering alternative queueing models, and studying the accuracy of various real-time delay predictors in those models. The predictors may exploit different types of information about the state of the system at the time of the announcement, for example, the queue length or the history of recent delays. Relying on data-based methods for delay prediction ( § 2.3) is relatively recent, and it usually allows for superior predictive power. For background on the analysis of queueing systems and their approximations, we refer the reader to, for example, Gross BIB001 , Billingsley , and Whitt BIB002 . For a primer on data-mining methods, we refer the reader to Tan et al. .
|
Sharing delay information in service systems: a literature survey <s> Snapshot of the main challenges <s> SOME DISCUSSION has arisen recently as to whether the imposition of an "entrance fee" on arriving customers who wish to be serviced by a station and hence join a waiting line is a rational measure. Not much of this discussion has appeared in print; indeed this author is aware of only three short communications, representing an exchange of arguments between Leeman [1, 2] and Saaty [3]. The ideas advanced there were of qualitative character and no attempt was made to quantify the arguments. The problem under consideration is obviously analogous to one that arises in connection with the control of vehicular traffic congestion on a road network. It has been argued by traffic economists that the individual car driver, on making an optimal routing choice for himself, does not optimize the system at large. The purpose of this communication is to demonstrate that, indeed, analogous conclusions can be drawn for queueing models if two basic conditions are satisfied: <s> BIB001 </s> Sharing delay information in service systems: a literature survey <s> Snapshot of the main challenges <s> The relationship between Pareto optimal (Πs) and revenue maximizing (Πr) tolls is examined for queuing models that permit balking. When customers have the same value for waiting time, Πs = Πr provided the entrepreneur can impose a simple two-part tariff. With heterogeneous values for waiting time, Πr can be greater than, equal to, or less than Πs. Expanding the number of servers and charging multi-part tariffs are shown to be alternative methods for segmenting the market, and the welfare implications of these two strategies are explored. <s> BIB002 </s> Sharing delay information in service systems: a literature survey <s> Snapshot of the main challenges <s> Two different kinds of heavy-traffic limit theorems have been proved for s-server queues.
The first kind involves a sequence of queueing systems having a fixed number of servers with an associated sequence of traffic intensities that converges to the critical value of one from below. The second kind, which is often not thought of as heavy traffic, involves a sequence of queueing systems in which the associated sequences of arrival rates and numbers of servers go to infinity while the service time distributions and the traffic intensities remain fixed, with the traffic intensities being less than the critical value of one. In each case the sequence of random variables depicting the steady-state number of customers waiting or being served diverges to infinity but converges to a nondegenerate limit after appropriate normalization. However, in an important respect neither procedure adequately represents a typical queueing system in practice because in the (heavy-traffic) limit an arriving customer is either almost certain to be delayed (first procedure) or almost certain not to be delayed (second procedure). Hence, we consider a sequence of (GI/M/s) systems in which the traffic intensities converge to one from below, the arrival rates and the numbers of servers go to infinity, but the steady-state probabilities that all servers are busy are held fixed. The limits in this case are hybrids of the limits in the other two cases. Numerical comparisons indicate that the resulting approximation is better than the earlier ones for many-server systems operating at typically encountered loads. <s> BIB003 </s> Sharing delay information in service systems: a literature survey <s> Snapshot of the main challenges <s> Using a heavy traffic limit theorem for open queueing networks, we find the correct diffusion approximation (D.A.) for sojourn times in Jackson networks with single server stations. The D.A. for sojourn times is a function of the D.A. for the queue length process, which is reflected Brownian motion on the nonnegative orthant.
<s> BIB004 </s> Sharing delay information in service systems: a literature survey <s> Snapshot of the main challenges <s> The most common model to support workforce management of telephone call centers is the M/M/N/B model, in particular its special cases M/M/N (Erlang C, which models out busy signals) and M/M/N/N (Erlang B, disallowing waiting). All of these models lack a central prevalent feature, namely, that impatient customers might decide to leave (abandon) before their service begins. In this paper, we analyze the simplest abandonment model, in which customers' patience is exponentially distributed and the system's waiting capacity is unlimited (M/M/N + M). Such a model is both rich and analyzable enough to provide information that is practically important for call-center managers. We first outline a method for exact analysis of the M/M/N + M model that, while numerically tractable, is not very insightful. We then proceed with an asymptotic analysis of the M/M/N + M model, in a regime that is appropriate for large call centers (many agents, high efficiency, high service level). Guided by the asymptotic behavior, we derive approximations for performance measures and propose "rules of thumb" for the design of large call centers. We thus add support to the growing acknowledgment that insights from diffusion approximations are directly applicable to management practice. <s> BIB005 </s> Sharing delay information in service systems: a literature survey <s> Snapshot of the main challenges <s> To provide useful practical insight into the performance of service-oriented (non-revenue-generating) call centers, which often provide low-to-moderate quality of service, this paper investigates the efficiency-driven (ED), many-server heavy-traffic limiting regime for queues with abandonments.
Attention is focused on the M/M/s/r + M model, having a Poisson arrival process, exponential service times, s servers, r extra waiting spaces, exponential abandon times (the final + M), and the first-come-first-served service discipline. Both the number of servers and the arrival rate are allowed to increase, while the individual service and abandonment rates are held fixed. The key is how the two limits are related: In the now common quality-and-efficiency-driven (QED) or Halfin-Whitt limiting regime, the probability of initially being delayed approaches a limit strictly between 0 and 1, while the probability of eventually being served (not abandoning) approaches 1. In contrast, in the ED limiting regime, the probability of eventually being served approaches a limit strictly between 0 and 1, while the probability of initially being delayed approaches 1. To obtain the ED regime, it suffices to let the arrival rate and the number of servers increase with the traffic intensity ρ held fixed with ρ > 1 (so that the arrival rate exceeds the maximum possible service rate). The ED regime can be realistic because with the abandonments, the delays need not be extraordinarily large. When the ED approximations are appropriate, they are appealing because they are remarkably simple. <s> BIB006
|
Usually, the modeling framework adopted in this line of literature is a G/GI/s + GI multi-server queueing system, which has a general stationary arrival process, independent and identically distributed (IID) service times with a general distribution, s homogeneous servers working in parallel, a first-come-first-served discipline, unlimited waiting space and IID times for waiting customers to abandon, again with a general distribution. For tractability, the all-Markovian M/M/s + M model is typically considered instead. To measure accuracy, one must first decide on an appropriate measure. Typically, average measures of accuracy are used, for example, the mean-squared error (MSE), which incorporates both the variance of the estimator and its bias. Under the MSE criterion, the conditional expectation of the waiting time, given some state information, is the most accurate prediction (there is no bias in this case). However, calculating such expected values is generally hard, and there is usually a need to resort to alternative predictions. The relative MSE, which is equal to the MSE divided by the expected waiting time, is useful for a relative measure of accuracy. One can also rely on accuracy measures which penalize overestimation and underestimation, for example, by using a newsvendor-like objective where different costs are assigned to each. Assessing the predictive power of alternative estimators is usually done through a combination of analytical and numerical methods. On the one hand, deriving closed-form expressions for prediction errors allows for an understanding of the dependence of those errors on alternative model parameters; on the other hand, detailed simulation studies allow for the extension of theoretical results to realistic settings which are not amenable to direct analysis.
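The accuracy criteria just described can be written down directly; in the sketch below, the newsvendor-style cost ratio (underestimation twice as costly as overestimation) is an arbitrary illustrative choice.

```python
def mse(actual, pred):
    """Mean-squared error of a sequence of delay predictions."""
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def relative_mse(actual, pred):
    """MSE divided by the average realized delay, giving a relative measure."""
    return mse(actual, pred) / (sum(actual) / len(actual))

def newsvendor_loss(actual, pred, c_under=2.0, c_over=1.0):
    """Asymmetric accuracy measure: each unit of underestimation
    (announcement below the realized delay) costs c_under, and each
    unit of overestimation costs c_over."""
    return sum(c_under * (a - p) if a > p else c_over * (p - a)
               for a, p in zip(actual, pred)) / len(actual)
```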
To illustrate the complexity in doing direct analysis, let us consider an announcement which is equal to the delay of the Last customer to have Entered Service (LES) at the time of arrival of the new delayed customer. In what follows, we deliberately keep our exposition at a high level to convey key intuition. The LES announcement is accurate if the (stochastic) state of the system that the LES customer encounters upon arrival, for example, the queue length, is "not too different" from the state that the new customer, to whom the announcement is made, encounters. In other words, we need to determine whether the time scale at which the state of the system changes is "much larger" than the magnitude of the LES delay; if so, then the state of the system would not change considerably during the LES delay, i.e., the LES delay should be an accurate prediction. Because doing direct analysis is prohibitively difficult, there is a need to resort to approximations. This is often possible by relying on a many-server heavy-traffic framework, where results on asymptotic accuracy are derived. There is no single way of defining asymptotic accuracy; usually, a properly scaled sequence of differences between wait-time estimators and corresponding delays is shown to converge to 0, for example, in a distributional sense. Importantly, one must first decide on an appropriate asymptotic regime. To describe large systems, which are usually of primary interest, one alternative is to consider the quality-and-efficiency-driven (QED) or Halfin-Whitt regime BIB005 BIB003 , which strikes a balance between service quality and operational efficiency. To describe a system where waiting times are long, one can focus on the Efficiency-Driven (ED) regime instead BIB006 . 
Analysis in the QED regime is simplified for two main reasons: (i) the system exhibits economies of scale so that, asymptotically, waiting times are negligible, and (ii) a snapshot principle BIB004 holds, under certain conditions, so that the state of the system changes negligibly during the waiting time of a delayed customer. When the system is overloaded, fluid-model approximations and ED diffusion-scale refinements perform remarkably well and are typically used to establish asymptotic accuracy. With endogenous customer response, studying implications for different objectives is not easy. A first-order issue is to decide on an appropriate objective. For example, an engineer may care about throughput, whereas an economist may care about social welfare. Moreover, different objectives may be affected by the delay information in similar ways, but not always. For example, a naive view may assume that an increase in throughput, i.e., the number of served customers, must correspond to an increase in waiting times. However, this need not be the case. Indeed, real-time delay information usually allows for a better matching between supply and demand, so that we may concurrently have increased throughput and shorter waiting times. Moreover, such results tend to be intimately tied to the specific modeling assumptions made. To illustrate the complexity in this line of research, we consider the basic question: How does revealing information about the queue length, i.e., providing delay information, affect throughput and social welfare? For throughput: In the observable case, we know that revealing information would incite customers to join when the queue length is short, and deter them from joining when the queue length is long. In the unobservable case, where customers make their joining decisions based on the expected waiting time, they would join more if the system is, overall, not highly congested.
Now, let us compare the observable and unobservable cases: It is not clear what the aggregate effect on throughput should be. Revealing the queue length may induce more customers to join, but if the system is, overall, lightly congested, then it may also deflect some customers who encounter an "exceptionally" long queue. The reverse argument holds when the system is heavily congested. Thus, it seems that no general statement can be made, and that the load in the system should play a role. For social welfare: We know from both Naor BIB001 and Edelson and Hilderbrand BIB002 that customers create negative externalities on other customers, and that they may join both observable and unobservable queues when it is not socially optimal for them to do so. Thus, it is not clear what the aggregate impact of revealing queue-length information on social welfare would be. In general, more complex issues should be considered, such as the granularity of the delay information (going beyond the reveal/do-not-reveal dichotomy above), as well as the timing and breadth of the shared information. The literature that we survey next addresses such issues. Studying the accuracy of delay announcements, when customers respond to these announcements, is challenging. Indeed, changes in customer impatience affect system dynamics and, in turn, the future announcements made. For example, if customers abandon faster because of high announcements, then future waiting times, and future announcements which depend on those waiting times, should be shorter. Thus, studying the accuracy of the announcements involves characterizing an equilibrium in the system. At a high level, an equilibrium must correspond to the long-run performance of the system, where the average announced delay coincides with the average experienced delay.
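To see concretely why the load should matter, one can compare throughput in the two classic single-server benchmarks; the sketch below uses the standard M/M/1/N blocking formula for the observable case and an Edelson-and-Hilderbrand-style indifference condition for the unobservable case, with the threshold N and the service value R treated as illustrative primitives.

```python
def observable_throughput(lam, mu, N):
    """Throughput when customers see the queue and balk at N in system:
    lam * (1 - blocking probability) in an M/M/1/N queue."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        p_full = 1.0 / (N + 1)
    else:
        p_full = (1.0 - rho) * rho ** N / (1.0 - rho ** (N + 1))
    return lam * (1.0 - p_full)

def unobservable_throughput(lam, mu, R):
    """Equilibrium throughput when the queue is unobservable and a customer
    joins only if the expected sojourn time is at most R (service value in
    time units): the joining probability q solves 1/(mu - q*lam) = R,
    clipped to [0, 1]."""
    q = (mu - 1.0 / R) / lam
    q = max(0.0, min(1.0, q))
    return q * lam
```

Varying lam against mu in these two functions shows that neither regime dominates the other uniformly, which is the aggregate-effect ambiguity discussed above.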
First, it is not clear whether such an equilibrium exists, or whether it is unique; indeed, there may be multiple equilibria, and the system may exhibit oscillations between those equilibria. Second, even when a unique equilibrium exists, it is not clear how to specify that the announcement and the corresponding delay, which are both random variables, coincide in that equilibrium; for example, this could be in expectation, in distribution, or asymptotically when scaled in an appropriate way. Third, it is not clear how stochastic fluctuations around the equilibrium affect the system's performance and the accuracy of the announcements. Even under Markovian assumptions, explicit analysis of the underlying birth-and-death process is analytically complex. This is so because the transition rates of the birth-and-death chain would all depend on the announcements. Therefore, analysis is typically done in an asymptotic heavy-traffic regime instead. However, establishing asymptotic accuracy is not easy, primarily because the underlying stochastic processes, for example, the queue-length process, may not even converge. Even if the underlying processes do converge, the analysis is complicated by the state-dependent nature of the arrival and abandonment rates in the system, due to the announcements.
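To illustrate the equilibrium idea in the simplest possible setting, the sketch below iterates a fluid-model fixed point for an overloaded M/M/s + M queue in which the announced delay w itself raises customer impatience, theta(w) = theta0 + alpha * w; the linear reaction and all parameter values are illustrative assumptions, not a model from the surveyed papers.

```python
from math import log

def equilibrium_announcement(lam, s, mu, theta0, alpha,
                             tol=1e-10, max_iter=1000):
    """Fixed-point iteration for an overloaded M/M/s + M fluid model where
    the announced delay w raises the abandonment rate to theta0 + alpha*w.
    The fluid wait of served customers solves lam * exp(-theta * w) = s * mu,
    i.e. w = log(lam / (s * mu)) / theta; an equilibrium announcement is a
    w that reproduces itself under this map."""
    assert lam > s * mu, "fluid formula assumes an overloaded system"
    c = log(lam / (s * mu))
    w = c / theta0                      # start from the no-reaction wait
    for _ in range(max_iter):
        w_new = c / (theta0 + alpha * w)
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w
```

The fixed point satisfies w * (theta0 + alpha * w) = log(lam / (s * mu)), so one can verify the iteration against the positive root of that quadratic.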
|
Sharing delay information in service systems: a literature survey <s> Queueing methods for delay prediction <s> This paper investigates the possibility of predicting each customer's waiting time in queue before starting service in a multiserver service system with the first-come first-served service discipline, such as a telephone call center. A predicted waiting-time distribution or an appropriate summary statistic such as the mean or the 90th percentile may be communicated to the customer upon arrival and possibly thereafter in order to improve customer satisfaction. The predicted waiting-time distribution may also be used by the service provider to better manage the service system, e.g., to help decide when to add additional service agents. The possibility of making reliable predictions is enhanced by exploiting information about system state, including the number of customers in the system ahead of the current customer. Additional information beyond the number of customers in the system may be obtained by classifying customers and the service agents to which they are assigned. For nonexponential service times, the elapsed service times of customers in service can often be used to advantage to compute conditional-remaining-service-time distributions. Approximations are proposed to convert the distributions of remaining service times into the distribution of the desired customer waiting time. The analysis reveals the advantage from exploiting additional information. <s> BIB001 </s> Sharing delay information in service systems: a literature survey <s> Queueing methods for delay prediction <s> We use heavy-traffic limits and computer simulation to study the performance of alternative real-time delay estimators in the overloaded GI/GI/s+GI multiserver queueing model, allowing customer abandonment. These delay estimates may be used to make delay announcements in call centers and related service systems. 
We characterize performance by the expected mean squared error in steady state. We exploit established approximations for performance measures with a nonexponential abandonment-time distribution to obtain new delay estimators that effectively cope with nonexponential abandonment-time distributions. <s> BIB002 </s> Sharing delay information in service systems: a literature survey <s> Queueing methods for delay prediction <s> We develop new improved real-time delay estimators, based on recent customer delay history, in many-server service systems with time-varying arrivals, both with and without customer abandonment. These delay estimators may be used to make delay announcements. We model the arrival process by a nonhomogeneous Poisson process, which has a deterministic time-varying arrival-rate function. Our estimators eectively cope with time-varying arrivals <s> BIB003 </s> Sharing delay information in service systems: a literature survey <s> Queueing methods for delay prediction <s> We develop new, improved real-time delay predictors for many-server service systems with a time-varying arrival rate, a time-varying number of servers, and customer abandonment. We develop four new predictors, two of which exploit an established deterministic fluid approximation for a many-server queueing model with those features. These delay predictors can be used to make delay announcements. We use computer simulation to show that the proposed predictors outperform previous predictors. <s> BIB004 </s> Sharing delay information in service systems: a literature survey <s> Queueing methods for delay prediction <s> The problem of estimating delays experienced by customers with different priorities, and the determination of the appropriate delay announcement to these customers, in a multi-class call center with time varying parameters, abandonments, and retrials is considered. 
The system is approximately modeled as an M(t)/M/s(t) queue with priorities, thus ignoring some of the real features like abandonments and retrials. Two delay estimators are proposed and tested in a series of simulation experiments. Making use of actual state-dependent waiting time data from this call center, the delay announcements from the estimated delay distributions that minimize a newsvendor-like cost function are considered. The performance of these announcements is also compared to announcing the mean delay. We find that an Erlang distribution-based estimator performs well for a range of different under-announcement penalty to over-announcement penalty ratios. <s> BIB005 </s> Sharing delay information in service systems: a literature survey <s> Queueing methods for delay prediction <s> We are interested in predicting the wait time of customers upon their arrival in some service system such as a call center or emergency service. We propose two new predictors that are very simple to implement and can be used in multiskill settings. They are based on the wait times of previous customers of the same class. The first one estimates the delay of a new customer by extrapolating the wait history (so far) of customers currently in queue, plus the last one that started service, and taking a weighted average. The second one takes a weighted average of the delays of the past customers of the same class that have found the same queue length when they arrived. In our simulation experiments, these new predictors are very competitive with the optimal ones for a simple queue, and for multiskill centers they perform better than other predictors of comparable simplicity. <s> BIB006
|
Because there is no universal "most accurate" predictor, i.e., one which performs well in all queueing contexts, Whitt BIB001 systematically explores alternative ways of predicting waiting times in a multi-server queueing model with multiple classes, under certain distributional assumptions, by exploiting various levels of system-state information. The types of information considered involve the queue length, individual customer abandonment and service rates, remaining service times of customers in service, etc. Full cumulative distribution functions of customer waiting times are estimated in each case, through either exact analysis or approximations. Following up on Whitt BIB001 , in a series of papers Ibrahim and Whitt BIB002 BIB003 BIB004 investigate the asymptotic accuracy of alternative real-time delay announcements, based on either the queue length or the history of delays, in queueing systems with several realistic features, such as time-varying arrivals and general distributional assumptions. The predictors that Ibrahim and Whitt consider are all single-number estimates, for example, the mean of the wait-time distribution conditional on the queue length seen, or the delay of the last customer to have entered service (LES). For the most part, they consider the MSE criterion for accuracy and rely on a many-server heavy-traffic framework to: (i) derive approximations for MSE-minimizing conditional expected wait-time values, given system-state information, which serve as new announcements, and (ii) quantify the accuracy of the various announcements considered. They substantiate their theoretical results with an extensive simulation study and formulate general insights on the usefulness and limitations of each type of delay prediction. Ibrahim and Whitt focus solely on single-class systems. The performance of LES in multi-class systems is considered numerically in Thiongane et al. 
BIB006 : The authors use simulation to explore the accuracy of the LES predictor in the context of a Markovian multi-server, multi-class system with abandonment. They explore the accuracy of LES-based announcements, including the weighted average of LES predictions, as well as predictors exploiting both the queue length and the LES delay. Bassamboo and Ibrahim study the performance of LES with multiple classes as well and provide theoretical support to some of the numerical observations in Thiongane et al. BIB006 . Nakibly also considers a multi-class context and allows for heterogeneous, class-dependent service rates. She considers both exact and approximate methods. For example, in a two-server queueing system with a non-preemptive priority discipline and exponential class-dependent service times, she describes the waiting-time distribution using difference equations and a matrix geometric method. She also considers an iterative algorithm to approximate that distribution in more complex models with multiple priorities and many servers. For an alternative measure of accuracy, Jouini et al. BIB005 consider a newsvendor problem cost function instead, which allows penalization of overestimation and underestimation of delays using different cost parameters. They consider a multi-class queue with a priority service discipline and time-varying arrival rates. They empirically validate their theoretical results using data from a network of real-life call centers. In such a network, determining the number of servers available at every time epoch is difficult to do. In a system with both time-variations and an unknown number of servers, they propose simple approximations for wait-time moments. They consider approximating the corresponding wait-time distributions by using Erlang and Normal distributions with those matched moments and find optimal announcements from these distributions. 
Finally, they take their results to data: they quantify the performance of their predictors, along with that of a benchmark mean-delay predictor, and find that the Erlang-based predictions perform best.
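The contrast between queue-length-based and history-based announcements discussed above can be made concrete with a small simulation. The sketch below is our own illustration (function names and parameter values are made up, not taken from the surveyed papers): it simulates an FCFS M/M/s queue and compares the mean squared error of the queue-length (QL) announcement (n + 1)/(sμ), where n is the number waiting ahead, with that of the LES announcement.

```python
import random

def simulate_mms(lam, mu, s, n_cust, seed=7):
    """Simulate an FCFS M/M/s queue (no abandonment).

    For each arriving customer, record (actual wait, LES prediction,
    QL prediction), where LES is the delay of the last customer to have
    entered service and QL announces (n + 1) / (s * mu) when n customers
    are waiting ahead.
    """
    rng = random.Random(seed)
    free_at = [0.0] * s      # time at which each server next frees up
    history = []             # (service start, wait) per customer, in arrival order
    records = []
    t, ptr = 0.0, 0          # ptr: first customer in history not yet in service
    for _ in range(n_cust):
        t += rng.expovariate(lam)
        # service starts are nondecreasing in FCFS, so ptr only moves forward
        while ptr < len(history) and history[ptr][0] <= t:
            ptr += 1
        n_waiting = len(history) - ptr
        ql_pred = (n_waiting + 1) / (s * mu)
        les_pred = history[ptr - 1][1] if ptr > 0 else 0.0
        k = free_at.index(min(free_at))
        start = max(t, free_at[k])
        free_at[k] = start + rng.expovariate(mu)
        history.append((start, start - t))
        records.append((start - t, les_pred, ql_pred))
    return records

# Heavily loaded example: rho = lam / (s * mu) = 0.9
recs = simulate_mms(lam=1.8, mu=1.0, s=2, n_cust=20000)
mse_les = sum((w - p) ** 2 for w, p, _ in recs) / len(recs)
mse_ql = sum((w - p) ** 2 for w, _, p in recs) / len(recs)
```

In this Markovian example the QL announcement is the conditional mean wait given the observed state, so it should attain the lower MSE; LES remains attractive precisely when the queue length or the service rate is not observable.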
|
Sharing delay information in service systems: a literature survey <s> Data-based methods for delay prediction <s> Using a heavy traffic limit theorem for open queueing networks, we find the correct diffusion approximation (D.A.) for sojourn times in Jackson networks with single server stations. The D.A. for sojourn times is a function of the D.A. for the queue length process, which is reflected Brownian motion on the nonnegative orthant. <s> BIB001 </s> Sharing delay information in service systems: a literature survey <s> Data-based methods for delay prediction <s> Information systems have been widely adopted to support service processes in various domains, e.g., in the telecommunication, finance, and health sectors. Recently, work on process mining showed how management of these processes, and engineering of supporting systems, can be guided by models extracted from the event logs that are recorded during process operation. In this work, we establish a queueing perspective in operational process mining. We propose to consider queues as first-class citizens and use queueing theory as a basis for queue mining techniques. To demonstrate the value of queue mining, we revisit the specific operational problem of online delay prediction: using event data, we show that queue mining yields accurate online predictions of case delay. <s> BIB002 </s> Sharing delay information in service systems: a literature survey <s> Data-based methods for delay prediction <s> An unsuitable patient flow as well as prolonged waiting lists in the emergency room of a maternity unit, regarding gynecology and obstetrics care, can affect the mother and child's health, leading to adverse events and consequences regarding their safety and satisfaction. Predicting the patients' waiting time in the emergency room is a means to avoid this problem. 
This study aims to predict the pre-triage waiting time in the emergency care of gynecology and obstetrics of Centro Materno Infantil do Norte (CMIN), the maternal and perinatal care unit of Centro Hospitalar of Oporto, situated in the north of Portugal. Data mining techniques were induced using information collected from the information systems and technologies available in CMIN. The models developed presented good results reaching accuracy and specificity values of approximately 74% and 94%, respectively. Additionally, the number of patients and triage professionals working in the emergency room, as well as some temporal variables were identified as direct enhancers to the pre-triage waiting time. The implementation of the attained knowledge in the decision support system and business intelligence platform, deployed in CMIN, leads to the optimization of the patient flow through the emergency room and improves the quality of services. <s> BIB003 </s> Sharing delay information in service systems: a literature survey <s> Data-based methods for delay prediction <s> Information systems have been widely adopted to support service processes in various domains, e.g., in the telecommunication, finance, and health sectors. Information recorded by systems during the operation of these processes provides an angle for operational process analysis, commonly referred to as process mining. In this work, we establish a queueing perspective in process mining to address the online delay prediction problem, which refers to the time that the execution of an activity for a running instance of a service process is delayed due to queueing effects. We present predictors that treat queues as first-class citizens and either enhance existing regression-based techniques for process mining or are directly grounded in queueing theory. In particular, our predictors target multi-class service processes, in which requests are classified by a type that influences their processing. 
Further, we introduce queue mining techniques that derive the predictors from event logs recorded by an information system during process execution. Our evaluation based on large real-world datasets, from the telecommunications and financial sectors, shows that our techniques yield accurate online predictions of case delay and drastically improve over predictors neglecting the queueing perspective. <s> BIB004 </s> Sharing delay information in service systems: a literature survey <s> Data-based methods for delay prediction <s> This paper proposes the Q-Lasso method for wait time prediction, which combines statistical learning with fluid model estimators. In historical data from four remarkably different hospitals, Q-Lasso predicts the emergency department (ED) wait time for low-acuity patients with greater accuracy than rolling average methods currently used by hospitals, fluid model estimators from the service operations management literature, and quantile regression methods from the emergency medicine literature. Q-Lasso achieves greater accuracy largely by correcting errors of underestimation in which a patient waits for longer than predicted. Implemented on the external website and in the triage room of the San Mateo Medical Center (SMMC), Q-Lasso achieves over 30% lower mean squared prediction error than would occur with the best rolling average method. The paper describes challenges and insights from the implementation at SMMC. <s> BIB005
|
There has been recent interest in using data-mining techniques for delay prediction in service systems. There are several papers which focus solely on data-mining methods for wait-time prediction, for example, in healthcare settings BIB003 , or transportation systems. In contrast, we focus here on papers which emphasize the importance of combining both queueing-theoretic and data-mining methods. Senderovich et al. BIB002 BIB004 introduce a novel framework which combines process-mining techniques, machine-learning algorithms, and queueing-theoretic results to predict waiting times in service queues. Single-class systems are considered in Senderovich et al. BIB002 , and multi-class systems in Senderovich et al. BIB004 . The authors consider various predictors, including delay-history-based predictors, such as LES. Such predictors are termed "snapshot" predictors because their asymptotic accuracy in certain queueing contexts is substantiated by Reiman's snapshot principle BIB001 . They also consider two average predictors, one which averages over the entire history of delays, and another which clusters waits according to k loads, using k-means clustering. In general, snapshot predictors are found to be accurate in single-class settings, consistently outperforming average predictors. In a multi-class setting, snapshot predictors and regression-based methods yield good performance. Senderovich et al. BIB002 BIB004 are based on the analysis of call center data. Senderovich et al. focus on a healthcare setting instead. In particular, the authors rely on predictors which combine patient information, for example, previous visits and other related information, with real-time congestion measures, such as the current number of patients and recent lengths of stay. The proposed prediction method is shown to have superior performance. Ang et al. BIB005 also consider a healthcare setting and use data sets from four hospitals. 
They, too, emphasize a message similar to Senderovich et al. BIB002 BIB004 : Combining queueing-theoretic results with data-mining techniques leads to superior predictive performance. They introduce a novel estimation method, Q-Lasso, which is inspired by both queueing theory and the Lasso method of statistical learning. In particular, they consider a queue-length-based predictor, which is equal to the ratio of the queue length to the processing rate, as a covariate in the Q-Lasso method. The authors find that the Q-Lasso method consistently outperforms other prediction methods such as rolling-average methods. The authors also implement their method in a hospital and discuss related implementation challenges.
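To illustrate the flavor of the Q-Lasso idea without reproducing it, the sketch below fits an L1-penalized regression by coordinate descent on synthetic data in which the wait is driven by a queue-length-to-processing-rate covariate, alongside an uninformative covariate that the penalty should zero out. The data-generating process, penalty level, and function names are our own assumptions, not those of Ang et al. BIB005 .

```python
import random

def lasso_cd(X, y, penalty, n_iter=200):
    """Coordinate descent for (1/2n) * ||y - X b||^2 + penalty * ||b||_1."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of column j with the partial residual
            rho = sum(
                X[i][j] * (y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j))
                for i in range(n)
            ) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            # soft-thresholding update
            if rho > penalty:
                b[j] = (rho - penalty) / z
            elif rho < -penalty:
                b[j] = (rho + penalty) / z
            else:
                b[j] = 0.0
    return b

# Synthetic data: wait = 0.8 * (queue length / processing rate) + noise,
# plus a covariate that carries no information about the wait.
rng = random.Random(0)
mu_rate, s = 1.0, 2
X, y = [], []
for _ in range(500):
    q = rng.randrange(0, 20)
    ql_covariate = q / (s * mu_rate)
    junk = rng.gauss(0.0, 1.0)
    X.append([ql_covariate, junk])
    y.append(0.8 * ql_covariate + rng.gauss(0.0, 0.5))
beta = lasso_cd(X, y, penalty=0.1)
```

The L1 penalty recovers a coefficient near 0.8 on the queueing covariate and sets the irrelevant coefficient exactly to zero, which is the variable-selection behavior that makes the queue-length covariate's contribution visible among many candidate features.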
|
Sharing delay information in service systems: a literature survey <s> Preliminaries: the classical framework <s> SOME DISCUSSION has arisen recently as to whether the imposition of an "entrance fee" on arriving customers who wish to be serviced by a station and hence join a waiting line is a rational measure. Not much of this discussion has appeared in print; indeed this author is aware of only three short communications, representing an exchange of arguments between Leeman [1, 2] and Saaty [3]. The ideas advanced there were of qualitative character and no attempt was made to quantify the arguments. The problem under consideration is obviously analogous to one that arises in connection with the control of vehicular traffic congestion on a road network. It has been argued2 by traffic economists that the individual car driver on making an optimal routing choice for himself-does not optimize the system at large. The purpose of this communication is to demonstrate that, indeed, analogous conclusions can be drawn for queueing models if two basic conditions are satisfied: <s> BIB001 </s> Sharing delay information in service systems: a literature survey <s> Preliminaries: the classical framework <s> The relationship between Pareto optimal (0s) and revenue maximizing (Or) tolls is examined for queuing models that permit balking. When customers have the same value for waiting time, Q, =Or provided the entrepreneur can impose a simple two-part tariff. With heterogeneous values for waiting time, Or can be greater than, equal to, or less than H,. Expanding the number of servers and charging multi-part tariffs are shown to be alternative methods for segmenting the market, and the welfare implications of these two strategies are explored. <s> BIB002
|
The classical queueing system is an M/M/1 model. The first-come-first-served (FCFS) discipline is considered, and there is unlimited waiting space. Arrivals are according to a Poisson arrival process with rate λ, service times are independent and identically distributed (i.i.d.) with rate μ, and there is a single server. Customers are delay sensitive, and we let C denote the waiting cost per time unit for a customer (which is assumed to be paid when the customer enters service). Customers also receive a reward R from service. In Naor BIB001 , a customer inspects, upon arrival, the queue length (number of customers in the system) and decides whether to join or balk. An individual joins a queue of size i if, and only if, her expected utility R − C(i+1)/μ ≥ 0. The equilibrium joining strategy, i.e., individual optimizing strategy, is a threshold-based strategy where customers who observe n customers in queue upon arrival join if, and only if, n + 1 ≤ n_e, where n_e ≡ ⌊Rμ/C⌋. The social benefit, per unit of time, assuming a threshold joining strategy with threshold n is given by λ(1 − p_n)R − Cq, where p_n is the stationary probability of finding n in the system, given a maximum queue length of n, and q is the expected queue length. A pure threshold socially optimal strategy exists, and Naor BIB001 shows that the social benefit attains its maximum at a value n_s ≤ n_e. Rooted in this classical result, a general theme in the queueing-games literature is that the selfish behavior of utility-maximizing customers leads to sub-optimal equilibrium solutions compared to the socially optimal solution. The aim is then to investigate ways of restoring the imbalance. In Naor's framework, by imposing an appropriate admission fee, i.e., a static, queue-length independent price θ, customers can be motivated to adopt the threshold n_s instead of n_e. The toll may also be set from a revenue maximizer's objective, i.e., to maximize λ(1 − p_n)θ. 
In this case, the fee levied by the manager is too high, i.e., n_r ≤ n_s ≤ n_e, where n_r is the corresponding equilibrium threshold. Edelson and Hildebrand BIB002 consider the basic unobservable model, where customers do not observe the queue length upon arrival, and make joining decisions based on the expected waiting time. Customers may either join the queue, not join, or adopt a mixed strategy where they join with probability q. It is found that a unique equilibrium strategy exists, and that it is based on the value of R: If R is "low," then no customer joins; if R is intermediate, then customers adopt a mixed strategy with joining probability q_e solving R = C/(μ − λq_e), i.e., q_e = (μ − C/R)/λ; and, if R is large, then everyone joins. The social benefit function attains its maximum at a value q_soc such that q_soc ≤ q_e. Thus, as in the observable case, individual optimization leads to queues that are longer than socially desired, but the gap can be corrected by imposing an appropriate admission fee. We note that the objectives of a profit maximizer and the social planner coincide.
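Naor's two thresholds are easy to compute numerically. The sketch below is a hypothetical illustration (the function name and parameter values are ours): it evaluates the social benefit λ(1 − p_n)R − C·E[L], with E[L] the expected number in the system, over candidate thresholds n in the induced M/M/1/n queue, and returns both the individually optimal n_e and the socially optimal n_s.

```python
import math

def naor_thresholds(lam, mu, R, C):
    """Return (n_e, n_s) for Naor's observable M/M/1 model.

    n_e: individually optimal threshold, floor(R * mu / C).
    n_s: socially optimal threshold, maximizing
         lam * (1 - p_n) * R - C * E[L] over the M/M/1/n queue.
    """
    n_e = math.floor(R * mu / C)
    rho = lam / mu

    def social_benefit(n):
        weights = [rho ** i for i in range(n + 1)]   # unnormalized M/M/1/n probabilities
        total = sum(weights)
        p = [w / total for w in weights]
        mean_in_system = sum(i * p[i] for i in range(n + 1))
        return lam * (1 - p[n]) * R - C * mean_in_system

    n_s = max(range(1, n_e + 1), key=social_benefit)
    return n_e, n_s

n_e, n_s = naor_thresholds(lam=1.0, mu=1.2, R=10.0, C=1.0)
```

Enumerating thresholds up to n_e suffices because joining beyond n_e already yields negative individual utility, so it cannot be socially beneficial; the computed n_s never exceeds n_e, in line with Naor's result.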
|
Sharing delay information in service systems: a literature survey <s> To reveal or not to reveal? Observable versus unobservable queues <s> It takes time to process purchases and as a result a queue of customers may form. The pricing and capacity (service rate) decision of a monopolist who must take this into account are characterized. We find that an increase in the average number of customers arriving in the market either has no effect on the price, or else causes the firm to reduce the price in the short run. In the long run the firm will increase capacity and raise the price. When customer preferences are linear, the equilibrium is socially efficient. When preferences are not linear, the equilibrium will not normally be socially efficient. <s> BIB001 </s> Sharing delay information in service systems: a literature survey <s> To reveal or not to reveal? Observable versus unobservable queues <s> We consider an M/M/1 queueing system in which the queue length may or may not be observable by a customer upon entering the system. The “observable” and “unobservable” models are compared with respect to system properties and performance measures under two different types of optimal customer behavior, which we refer to as “selfishly optimal” and “socially optimal”. We consider average customer throughput rates and show that, under both types of optimal customer behavior, the equality of effective queue-joining rates between the observable and unobservable systems results in differences with respect to other performance measures such as mean busy periods and waiting times. We also show that the equality of selfishly optimal queue-joining rates between the two types of system precludes the equality of socially optimal joining rates, and vice versa. <s> BIB002 </s> Sharing delay information in service systems: a literature survey <s> To reveal or not to reveal? 
Observable versus unobservable queues <s> We investigate the impact of delay announcements on the coordination within hospital networks using a combination of empirical observations and numerical experiments. We offer empirical evidence th... <s> BIB003
|
We begin by surveying papers which compare the observable and unobservable systems, i.e., address whether or not to reveal queue-length information. Social welfare, revenue maximization, and throughput. Hassin studied the impact of information suppression from both the social planner's and revenue maximizer's perspectives. In both cases, two quantities play a central role: (i) the potential arrival rate, λ, and (ii) the value of service, relative to the cost of waiting, ν_s ≡ Rμ/C. Hassin compares profits, under profit-maximizing admission fees, in the observable and unobservable cases. He finds that if customers are "very" sensitive to delay, ν_s ≤ 2, i.e., C ≥ Rμ/2, then it is optimal to reveal the queue length for all λ > 0. However, if customers are not very delay sensitive, ν_s > 2, then there exists a threshold, Λ_R, such that it is only optimal to reveal the queue length for λ > Λ_R. The intuition behind these results is as follows: When λ is large, many customers would opt to balk based on average wait-time information, which is high because λ is high. In this case, disclosing the queue-length information encourages more customers to join in low-congestion states. While it is true that it also discourages customers from joining highly congested states, the key is that these customers would have balked anyway in the unobservable case; thus, revealing information helps the firm. We now turn to the social welfare results. First, we note that the problem would be straightforward if a social welfare maximizing fee can be imposed. In this case, revealing delay information can only help the social planner since, in the observable case, a customer would enter only when it is socially desirable to do so, but this is not the case in the unobservable model. The more challenging case is when pricing cannot be socially controlled, for example, because price regulation is not desirable, but information suppression can be socially controlled. 
Under the assumption of a revenue-maximizing toll, the values of ν_s and λ play similar roles, but the threshold on λ, Λ_S, is different and it is shown that Λ_S < Λ_R. Thus, a social planner may want to reveal the queue length when it is not optimal for a revenue maximizer to do so, i.e., for Λ_S < λ < Λ_R. However, it is never optimal to suppress information when a revenue maximizer voluntarily chooses to reveal it, i.e., for λ > Λ_R. Chen and Frank BIB001 study how information suppression impacts throughput. Intuitions similar to the ones in Hassin continue to apply, so we will be brief. In particular, for a fixed admission fee, the role played by the system's load is prominent. On the one hand, if the arrival rate is low, in particular λ < Λ*, then customers may be turned away by real-time queue-length information, while they would have joined with (low) average wait-time information. This implies that λ_O < λ_U, i.e., the effective joining rate is smaller in the observable system than in the unobservable system. On the other hand, if the arrival rate is high, in particular λ > Λ*, then λ_O > λ_U. Shone et al. BIB002 take a different view and focus on the situation where the decision of a service provider to reveal the queue-length information does not affect throughput. Shone et al. BIB002 rule out the possibility of optimizing the admission fee. They compare the observable and unobservable systems in terms of joining rates, both individually optimal (selfish) and socially optimal (altruistic), in addition to various other system performance measures. The authors derive necessary and sufficient conditions for the equality of equilibrium selfish and altruistic joining rates between the observable and unobservable systems and show that both equalities cannot simultaneously hold. Shone et al. BIB002 also observe that the decision of whether or not to reveal the queue length depends strongly on ν_s, as was observed in Chen and Frank BIB001 . 
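The low-load/high-load throughput comparison can be reproduced numerically. In the sketch below (parameter values and function names are our own illustrative choices), the observable system with a fixed fee θ behaves as an M/M/1/n queue with n = ⌊(R − θ)μ/C⌋, while in the unobservable system the equilibrium joining rate is min(λ, μ − C/(R − θ)).

```python
import math

def observable_rate(lam, mu, R, C, theta):
    """Effective joining rate when the queue length is revealed."""
    n = math.floor((R - theta) * mu / C)     # customers join iff fewer than n in system
    rho = lam / mu
    weights = [rho ** i for i in range(n + 1)]
    p_full = weights[n] / sum(weights)       # blocking probability in M/M/1/n
    return lam * (1 - p_full)

def unobservable_rate(lam, mu, R, C, theta):
    """Equilibrium joining rate when only the mean wait is known."""
    cap = mu - C / (R - theta)               # arrival rate at which joining utility is zero
    return min(lam, max(0.0, cap))

params = dict(mu=1.0, R=6.0, C=1.0, theta=1.0)   # here n = 5 and cap = 0.8
low = (observable_rate(lam=0.3, **params), unobservable_rate(lam=0.3, **params))
high = (observable_rate(lam=2.0, **params), unobservable_rate(lam=2.0, **params))
```

With these numbers the pattern from Chen and Frank BIB001 appears directly: at λ = 0.3 the observable rate falls just below the unobservable one (some arrivals see a long queue and balk), while at λ = 2.0 the observable rate exceeds the cap μ − C/(R − θ) = 0.8 that binds the unobservable system.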
A network of providers. The papers above focus on a setting with a single provider. Singh et al. consider a competitive environment with two service providers instead. These providers may choose to disclose different levels of information, either real-time or historical. The paper studies the first mover's benefit, i.e., the benefit to the first provider to announce real-time information. It considers two parallel M/M/1 queues, in a multiperiod setting, where one provider announces the real-time queue length, and the other provider announces the expected delay of the previous period. For a performance comparison, the authors consider the market share and the expected delay, and customers join the lower-delay alternative. The authors find that the benefit of being the first mover depends on the service capacity. In particular, for the lower-capacity provider, being the initiator in announcing real-time information increases the market share and reduces delays. However, the same does not hold for the higher-capacity provider, where results are mixed. The authors also find that social welfare always increases whenever the first mover benefits in terms of market share and delay. Dong et al. BIB003 also consider a network setting with multiple providers, but they focus on a network of hospitals instead. In particular, they study, in the context of an empirical investigation, the impact of delay announcements on coordination in the network. Coordination is measured through the correlations of delays between hospitals: There is synchronization if those correlations are positive. This observation is rooted in a queueing-theoretic result which establishes that the join-the-shortest-queue (JSQ) discipline synchronizes queues in the system. Indeed, if customers check the delay information, then it is reasonable to assume that they would join the shortest queue, which would then lead to synchronization. 
Thus, exploring the impact of delay information reduces to studying correlations between the waiting times at adjacent hospitals. By relying on data on real-life announcements and patient response (measured through online searches), the authors investigate whether the announcements do indeed impact the behavior of patients. They provide empirical evidence that this is indeed the case. They also conduct an extensive numerical study to investigate how the sensitivity of customers to delay, the load of the system, and the heterogeneity between hospitals impact the synchronization level in the system. They show that using average wait predictors may lead to oscillations in the system, where customers systematically flock to one of the two queues; this numerical observation is studied in Pender et al.
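The JSQ-synchronization claim can be checked with a toy continuous-time simulation. The setup below is our own (two exponential single-server queues sharing one Poisson arrival stream; names and parameters are illustrative): it estimates the time-weighted correlation of the two queue lengths under JSQ routing and under uniformly random routing.

```python
import random

def queue_length_correlation(lam, mu, jsq, horizon=5000.0, seed=3):
    """Time-weighted correlation of two queue lengths in a pair of
    exponential single-server queues fed by one Poisson arrival stream."""
    rng = random.Random(seed)
    q = [0, 0]
    t = 0.0
    s1 = s2 = s11 = s22 = s12 = 0.0
    while t < horizon:
        rates = [lam, mu if q[0] > 0 else 0.0, mu if q[1] > 0 else 0.0]
        total = sum(rates)
        dt = rng.expovariate(total)
        # accumulate time-weighted moments of (q0, q1)
        s1 += q[0] * dt; s2 += q[1] * dt
        s11 += q[0] ** 2 * dt; s22 += q[1] ** 2 * dt
        s12 += q[0] * q[1] * dt
        t += dt
        u = rng.random() * total
        if u < rates[0]:                      # arrival
            if jsq:
                i = 0 if q[0] < q[1] else (1 if q[1] < q[0] else rng.randrange(2))
            else:
                i = rng.randrange(2)          # uniformly random routing
            q[i] += 1
        elif u < rates[0] + rates[1]:         # departure from queue 0
            q[0] -= 1
        else:                                 # departure from queue 1
            q[1] -= 1
    m1, m2 = s1 / t, s2 / t
    v1, v2 = s11 / t - m1 ** 2, s22 / t - m2 ** 2
    return (s12 / t - m1 * m2) / (v1 * v2) ** 0.5

corr_jsq = queue_length_correlation(lam=1.6, mu=1.0, jsq=True)
corr_rnd = queue_length_correlation(lam=1.6, mu=1.0, jsq=False)
```

Under JSQ the two lengths stay within a few customers of each other, so their correlation is strongly positive, whereas random splitting produces two essentially independent M/M/1 queues with correlation near zero.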
|
Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> SOME DISCUSSION has arisen recently as to whether the imposition of an "entrance fee" on arriving customers who wish to be serviced by a station and hence join a waiting line is a rational measure. Not much of this discussion has appeared in print; indeed this author is aware of only three short communications, representing an exchange of arguments between Leeman [1, 2] and Saaty [3]. The ideas advanced there were of qualitative character and no attempt was made to quantify the arguments. The problem under consideration is obviously analogous to one that arises in connection with the control of vehicular traffic congestion on a road network. It has been argued by traffic economists that the individual car driver, on making an optimal routing choice for himself, does not optimize the system at large. The purpose of this communication is to demonstrate that, indeed, analogous conclusions can be drawn for queueing models if two basic conditions are satisfied: <s> BIB001 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> The relationship between Pareto optimal (θ_s) and revenue maximizing (θ_r) tolls is examined for queuing models that permit balking. When customers have the same value for waiting time, θ_s = θ_r provided the entrepreneur can impose a simple two-part tariff. With heterogeneous values for waiting time, θ_r can be greater than, equal to, or less than θ_s. Expanding the number of servers and charging multi-part tariffs are shown to be alternative methods for segmenting the market, and the welfare implications of these two strategies are explored. 
<s> BIB002 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> We consider the problem of quoting customer lead times in a manufacturing environment under a variety of modeling assumptions. First, we examine the case where capacity is infinite. For this case, we derive a closed-form expression for the optimal lead time quote. Second, we consider the case where capacity is finite and the firm processes jobs in first-come-first-served (FCFS) order. We prove the optimality of different forms of control limit policies for the situations where the lead time is dictated by the market and where firms are able to compete on the basis of lead time. Finally, we consider the case where the firm may choose to schedule jobs in other than FCFS order and give conditions under which the optimal due-date-quoting/order-scheduling policy will process jobs in earliest due date (EDD) order. <s> BIB003 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> It takes time to process purchases and as a result a queue of customers may form. The pricing and capacity (service rate) decision of a monopolist who must take this into account are characterized. We find that an increase in the average number of customers arriving in the market either has no effect on the price, or else causes the firm to reduce the price in the short run. In the long run the firm will increase capacity and raise the price. When customer preferences are linear, the equilibrium is socially efficient. When preferences are not linear, the equilibrium will not normally be socially efficient. 
<s> BIB004 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> The widespread adoption of Enterprise Resource Planning (ERP) systems has, among many other benefits, increased the ability of a firm to share operational data with customers. In this paper we analyze the factors that determine whether or not sharing a specific type of information, namely state-dependent lead time information, can benefit a firm. We develop a stochastic model of a custom-production environment, in which customers are handled on a first-come first-served basis but have differing tolerances for waiting. The firm has the option to share different amounts of information about the lead time a potential customer may incur. Although the information differs across scenarios, the reliability of that information in terms of the probability that a stated lead time is met is equal in the eyes of the customers. We derive conditions under which sharing more information with customers improves the firm's profits and the customers' experiences. We show that it is not always the case that sharing information improves the lot of the firm. We show that when customers' tolerances for waiting are more heterogeneous then the benefit to the firm from sharing lead time information increases. Our conclusion is that management should only authorize sharing detailed lead time information, be it through information system integration or frontline sales people, after a careful analysis of a customer's sensitivity to delay. <s> BIB005 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> Information about delays can enhance service quality in many industries. Delay information can take many forms, with different degrees of precision. Different levels of information have different effects on customers and therefore on the overall system. 
To explore these effects, we consider a queue with balking under three levels of delay information: no information, partial information (the system occupancy), and full information (the exact waiting time). We assume Poisson arrivals, independent exponential service times, and a single server. Customers decide whether to stay or balk based on their expected waiting costs, conditional on the information provided. We show how to compute the key performance measures in the three systems, obtaining closed-form solutions for special cases. We then compare the three systems. We identify some important cases where more accurate delay information improves performance. In other cases, however, information can actually hurt the provider or the customers. <s> BIB006 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> We consider a single server Markovian queue with setup times. Whenever this system becomes empty, the server is turned off. Whenever a customer arrives to an empty system, the server begins an exponential setup time to start service again. We assume that arriving customers decide whether to enter the system or balk based on a natural reward-cost structure, which incorporates their desire for service as well as their unwillingness to wait. We examine customer behavior under various levels of information regarding the system state. Specifically, before making the decision, a customer may or may not know the state of the server and/or the number of present customers. We derive equilibrium strategies for the customers under the various levels of information and analyze the stationary behavior of the system under these strategies. We also illustrate further effects of the information level on the equilibrium behavior via numerical experiments.
<s> BIB007 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> This article generalizes the models in Guo and Zipkin, who focus on exponential service times, to systems with phase-type service times. Each arriving customer decides whether to stay or balk based on his expected waiting cost, conditional on the information provided. We show how to compute the throughput and customers' average utility in each case. We then obtain some analytical and numerical results to assess the effect of more or less information. We also show that service-time variability degrades the system's performance. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008 <s> BIB008 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> We consider the Markovian single-server queue that alternates between on and off periods. Upon arriving, the customers observe the queue length and decide whether to join or balk. We derive equilibrium threshold balking strategies in two cases, according to the information for the server's state. <s> BIB009 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> We consider the single server Markovian queue and we assume that arriving customers decide whether to enter the system or balk based on a natural reward-cost structure, which incorporates their desire for service as well as their unwillingness to wait. We suppose that the waiting space of the system is partitioned in compartments of fixed capacity for a customers. Before making his decision, a customer may or may not know the compartment in which he will enter and/or the position within the compartment in which he will enter. Thus, denoting by n the number of customers found by an arriving customer, he may or may not know ⌈n/a⌉+1 and/or (n mod a)+1. We examine customers' behavior under the various levels of information regarding the system state and we identify equilibrium threshold strategies. We also study the corresponding social and profit maximization problems. <s> BIB010 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> We consider two balking queue models with different types of information about delays. Potential customers arrive according to a Poisson process, and they decide whether to stay or balk based on the available delay information. In the first model, an arriving customer learns a rough range of the current queue length. In the second model, each customer's service time is the sum of a geometric number of i.i.d. exponential phases, and an arriving customer learns the total number of phases remaining in the system. For each information model, we compare two systems, identical except that one has more precise information. In many cases, better information increases throughput and thus benefits the service provider. But this is not always so. The effect depends on the shape of the distribution describing customers' sensitivities to delays. We also study the effects of information on performance as seen by customers. Again, more information is often good for customers, but not always. <s> BIB011 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> Congestion and its uncertainty are big factors affecting customers' decision to join a queue or balk. In a queueing system, congestion itself results from the aggregate joining behavior of other customers. Therefore, the property of the whole group of arriving customers affects the equilibrium behavior of the queue.
In this paper, we assume each individual customer has a utility function which includes a basic cost function, common to all customers, and a customer-specific weight measuring sensitivity to delay. We investigate the impacts on the average customer utility and the throughput of the queueing system of different cost functions and weight distributions. Specifically, we compare systems where these parameters are related by various stochastic orders, under different information scenarios. We also explore the relationship between customer characteristics and the value of information. <s> BIB012 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> Abstract We consider simple parallel queueing models in which a proportion of arriving customers are flexible, i.e. they are willing to receive service at any one of some subset of the parallel servers. For the case of two parallel servers, we show that as the servers become fully utilized, the maximum improvement in mean waiting times is achieved for arbitrarily small levels of flexibility. The insights from this analytic model are supported by simulation results that show that large gains can be made with low levels of flexibility. The potential implications of these results for two motivating examples are discussed. <s> BIB013 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> The equilibrium threshold balking strategies are investigated for the fully observable and partially observable single-server queues with server breakdowns and delayed repairs. Upon arriving, the customers observe the queue length and status of the server and decide whether to join or balk the queue based on this information, along with the waiting cost and the reward after finishing their service.
By using queueing theory and cost analysis, we obtain the stationary distribution of queue size of the queueing systems under consideration and provide algorithms in order to identify the equilibrium strategies for the fully and partially observable model. Finally, the equilibrium threshold balking strategies are derived for the fully observable system and partially observable system respectively, both with server breakdowns and delayed repairs. <s> BIB014 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> Abstract In many service systems arising in OR/MS applications, the servers may be temporarily unavailable, a fact that affects the sojourn time of a customer and his willingness to join. Several studies that explore the balking behavior of customers in Markovian models with vacations have recently appeared in the literature. In the present paper, we study the balking behavior of customers in the single-server queue with generally distributed service and vacation times. Arriving customers decide whether to enter the system or balk, based on a linear reward–cost structure that incorporates their desire for service, as well as their unwillingness to wait. We identify equilibrium strategies and socially optimal strategies under two distinct information assumptions. Specifically, in a first case, the customers make individual decisions without knowing the system state. In a second case, they are informed about the server’s current status. We examine the influence of the information level on the customers’ strategic response and we compare the resulting equilibrium and socially optimal strategies. <s> BIB015 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> Many service providers use delay announcements to inform customers of anticipated delays. 
However, this information is usually not provided immediately but after a short period of time (spent either waiting or occupied by the system). The focus of this paper is on the impact of this postponement on the ability of the firm to influence customer behavior by communicating nonverifiable congestion information to its customers, as well as on the profits and utilities for the firm and the customers, respectively. We show that this postponement can actually help the firm create credibility and augment the resulting equilibrium. However, in other settings this delay can also detract from the resulting equilibrium. Furthermore, we show that whenever credibility is created it improves not only the profit for the firm but also the customers' overall utility under certain settings. <s> BIB016 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> We consider both cooperative as well as non-cooperative admission into an M/M/1 queue. The only information available is a signal that says whether the queue size is smaller than some L or not. We first compute the globally optimal and the Nash equilibrium stationary policy as a function of L. We compare the performance to that of full information on the queue size. We identify the L that optimizes the equilibrium performance. <s> BIB017 </s> Sharing delay information in service systems: a literature survey <s> Granularity, timing, and breadth of the delay information <s> We investigate the impact of delay announcements on the coordination within hospital networks using a combination of empirical observations and numerical experiments. We offer empirical evidence th... <s> BIB018
|
The papers above consider either full revelation or no revelation of real-time system-state information. However, there are other considerations, such as the timing, granularity, and breadth of the shared delay information. We now survey papers which study decisions pertaining to those characteristics. "Discrete" information: High and low announcements. The idea that full information may not be necessary, and that a discrete high-low type of announcement may suffice, follows immediately from Naor BIB001 . Indeed, in the observable case, customers follow a threshold-type joining decision; this indicates that knowing only whether or not the queue length exceeds a threshold, L, should suffice. Because this information structure is much simpler, there is interest in studying it. We note that setting L = 0 corresponds to the unobservable model in Edelson and Hildebrand BIB002 . Altman and Jimenez BIB017 consider high-low announcements when there is no pricing decision. First, the authors assume that the value of L is fixed (not necessarily at optimum). In the social planner problem, they optimize the probabilities of accepting an arrival if the queue length is below or above L. Next, they consider the individual optimization problem where utility-maximizing customers make their joining decisions, and investigate the ensuing equilibrium. In both problems, the optimal admission strategy has the form of either accepting all arrivals when the queue length is below L, or rejecting all arrivals when it is above L. The authors also show that imposing a socially optimal L value in the individual optimization problem does not lead to the socially optimal outcome. Hassin and Koshman consider a setting similar to that of Altman and Jimenez BIB017 , albeit with pricing decisions. In particular, customers are charged p_L when the queue length is below L, and p_H otherwise.
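To make the threshold logic above concrete, the following sketch (illustrative only; the reward, cost, and rate values are my own choices) computes Naor's individually optimal joining threshold and the socially optimal one in an M/M/1 queue. The gap between the two thresholds is precisely what admission control at a level L, or high-low announcements around L, can exploit.

```python
# Naor's M/M/1 model: service reward R, waiting cost C per unit time in
# system, service rate mu, arrival rate lam. A customer who joins with n
# others present expects to stay (n+1)/mu, so the equilibrium threshold is
# n_e = floor(R*mu/C): join iff the observed queue length n is below n_e.
def mm1_threshold_welfare(lam, mu, R, C, n):
    """Social welfare per unit time when customers join iff fewer than n are present."""
    rho = lam / mu
    weights = [rho**k for k in range(n + 1)]          # M/M/1/n stationary weights
    Z = sum(weights)
    pi = [w / Z for w in weights]
    throughput = lam * (1 - pi[n])                    # arrivals finding < n join
    mean_in_system = sum(k * p for k, p in enumerate(pi))
    return throughput * R - C * mean_in_system

lam, mu, R, C = 0.8, 1.0, 10.0, 2.0
n_e = int(R * mu / C)                                 # individually optimal threshold
n_star = max(range(1, n_e + 1),
             key=lambda n: mm1_threshold_welfare(lam, mu, R, C, n))
# Naor's classic result: the socially optimal threshold n_star never
# exceeds the self-interested equilibrium threshold n_e.
```

With these numbers the equilibrium threshold is 5 while the social optimum is considerably lower, illustrating why a planner may want to cut admissions well before individual customers would balk.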
Hassin and Koshman demonstrate how to obtain the maximum value of social welfare in Naor's model by using their coarse dynamic pricing scheme. The above two-signal strategy arises at equilibrium in Allon et al. . In this paper, the authors relax two fundamental assumptions: (i) that the firm is truth-telling in revealing information, and (ii) that the information shared is quantifiable and verifiable by customers. As such, they allow for a richer information set which also includes intentional vagueness: A firm is intentionally vague when it provides the same announcement in different states of the system. They show that even though the information provided to customers is non-verifiable, it can improve the profits of the firm and the expected utility of customers. The incentives of the firm and its customers are neither perfectly misaligned (they both prefer shorter waits), nor perfectly aligned (the firm benefits from higher throughput, whereas the customers do not). This misalignment between the firm and its customers plays a key role in the analysis: Depending on its level, different equilibria emerge. Of particular interest are equilibria with influential cheap talk, i.e., ones where the firm can induce distinct customer actions based on different unverifiable messages. Different levels of information. We now turn to the literature investigating the problem of finding the "best" type of delay information to share. Duenyas and Hopp BIB003 investigate that problem in a manufacturing setting. Each customer who places an order generates a reward for the firm, and there is a penalty for being late (per unit time exceeding the quoted lead time). In response to a quoted lead time, a, each customer places an order with probability p(a). Duenyas and Hopp BIB003 derive an optimal quote which maximizes the expected profit (revenue minus penalty cost), under both infinite (G/G/∞) and finite (G/G/1) capacity settings. 
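As a rough numerical illustration of the infinite-capacity quoting problem (a sketch under my own parametric assumptions, not the model in Duenyas and Hopp: exponential service at rate mu, acceptance probability p(a) = exp(-gamma*a), and a lateness penalty c per unit of time the sojourn exceeds the quote), the profit-maximizing quote trades off the acceptance probability against the expected lateness penalty:

```python
import math

def expected_profit(a, r=5.0, c=4.0, mu=1.0, gamma=0.3):
    """Quote a lead time a: order placed w.p. exp(-gamma*a); for S ~ Exp(mu),
    the expected lateness is E[(S - a)^+] = exp(-mu*a)/mu."""
    return math.exp(-gamma * a) * (r - (c / mu) * math.exp(-mu * a))

# Grid search for the optimal quote. For this parametric form the first-order
# condition gives the closed form a* = -ln(gamma*r*mu / (c*(gamma + mu))) / mu.
grid = [k * 0.001 for k in range(10000)]
a_opt = max(grid, key=expected_profit)
```

Quoting a = 0 maximizes acceptance but incurs the full lateness penalty; a very long quote is reliable but deters orders, so the optimum is interior.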
In the infinite-capacity case, the optimal quote does not depend on the current backlog in the system. In the finite-capacity alternative, the optimal lead-time quoting policy is state-dependent and increasing in the state, i.e., the higher the congestion, the higher the lead-time quote. Specifically, a profit-maximizing firm should give granular, state-dependent information rather than rely on a coarse information-sharing scheme. In their model, Duenyas and Hopp BIB003 trade the reliability of the quoted delay for maximizing throughput: While there is a penalty for being late, the firm is not, otherwise, restricted in the quote that it provides, i.e., it is not constrained to being reliable. In contrast, Dobson and Pinker BIB005 consider a similar problem but assume that the firm must provide reliable quotes: The state-dependent lead-time quote provided, l_i, depends on the number i of customers in the system and is a fractile from the conditional wait-time distribution which must be met (100τ)% of the time. In other words, letting W_i denote the conditional steady-state waiting time, we must have that P(W_i ≤ l_i) = τ. The proportion of customers who join the system, in response to l_i, is given by α(l_i, τ). Dobson and Pinker BIB005 compare alternative scenarios, S_k, which reflect different levels of information granularity: For scenario S_k, customers are provided with a state-dependent announcement l_i for i < k, and with a static announcement for i ≥ k. Increasing k amounts to increasing the granularity of the delay information. The authors derive a sufficient condition under which sharing more information increases throughput, and emphasize that this need not always be the case.
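The fractile condition P(W_i ≤ l_i) = τ is straightforward to compute in the single-server exponential case: a customer who joins behind i others waits an Erlang(i+1, μ) time. A minimal sketch (function names and parameters are my own, for illustration):

```python
import math

def erlang_cdf(t, stages, mu):
    """P(Erlang(stages, mu) <= t) = 1 - exp(-mu*t) * sum_{k<stages} (mu*t)^k / k!."""
    return 1.0 - math.exp(-mu * t) * sum((mu * t) ** k / math.factorial(k)
                                         for k in range(stages))

def lead_time_quote(i, mu, tau, hi=1000.0, tol=1e-9):
    """Smallest l_i with P(W_i <= l_i) >= tau, found by bisection; the joining
    customer sees i others, so the sojourn has i+1 exponential stages."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if erlang_cdf(mid, i + 1, mu) < tau:
            lo = mid
        else:
            hi = mid
    return hi

mu, tau = 1.0, 0.9
quotes = [lead_time_quote(i, mu, tau) for i in range(5)]
# The reliable quote grows with congestion: more customers ahead, longer l_i.
```

For i = 0 the quote is just the τ-quantile of an exponential service time, -ln(1-τ)/μ, which gives a quick sanity check on the bisection.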
Importantly, they demonstrate that higher throughput may also be associated with lower expected waiting times, and less variable waits, because the delay information deters customers from joining highly congested states, and encourages customers to join low-congestion states. They also highlight the importance of customer heterogeneity, i.e., the extent to which different information granularity leads to different demand rates: The greater the heterogeneity, the higher the throughput, i.e., the higher the value that can be derived from quoting lead times. The role played by customer heterogeneity is also central in the work of Guo and Zipkin. Guo and Zipkin BIB006 consider three levels of information: (1) no information, (2) partial information, i.e., queue length upon arrival, and (3) full information, i.e., exact waiting time. For performance measures, they consider throughput and the expected customer utility. Customers are assumed to be heterogeneous in their delay costs. Specifically, each arriving customer has a cost type, θ, which is drawn from a continuous and bounded distribution, H, and density function, h. There is also a basic cost function, c(w), associated with a wait w. Thus, the cost incurred by a θ-customer who is delayed for w is equal to θc(w). Different levels of delay information incite more or fewer customers to join. The information provided also segments customers depending on their delay sensitivity: A customer who joins under one type of information may balk under another type. Guo and Zipkin BIB006 demonstrate that both system throughput and customer utility, under different information levels, are impacted by the shape of the customer delay distribution. Depending on that distribution, they characterize conditions under which information helps either the customers or the service provider. The main takeaway is that more information may or may not be beneficial, depending on the distribution of customer delay sensitivity.
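The partial-information (queue-length) case can be sketched under one concrete choice of primitives (my own, for illustration: linear basic cost c(w) = w, service reward r, and θ ~ Uniform(0, b)). A θ-customer who sees n in system expects to wait (n+1)/μ and joins iff θ(n+1)/μ ≤ r, i.e., iff θ ≤ rμ/(n+1), so the effective arrival rate is state-dependent and the queue is a birth-death chain:

```python
# Illustrative sketch in the spirit of Guo and Zipkin's partial-information
# model; the uniform type distribution and all parameter values are mine.
def partial_info_throughput(lam, mu, r, b, n_max=200):
    # State-dependent joining probability H(r*mu/(n+1)) with theta ~ Uniform(0, b).
    join = [min(r * mu / ((n + 1) * b), 1.0) for n in range(n_max)]
    # Birth-death stationary weights: pi_{n+1} / pi_n = lam * join[n] / mu.
    weights = [1.0]
    for n in range(n_max - 1):
        weights.append(weights[-1] * lam * join[n] / mu)
    Z = sum(weights)
    pi = [w / Z for w in weights]
    return sum(lam * join[n] * pi[n] for n in range(n_max))

tp = partial_info_throughput(lam=1.2, mu=1.0, r=4.0, b=8.0)
```

Because the joining probability decays in n, the chain is stable even though λ > μ here; repeating the computation for other type distributions H is how one explores the paper's main point that the value of information hinges on the shape of H.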
In subsequent papers, the above results are generalized to systems with phase-type service times BIB008 , different levels of information BIB011 , and alternative cost functions BIB012 . In a series of papers, Burnetas and Economou BIB007 , Economou and Kanta BIB009 BIB010 , and Economou et al. BIB015 , the authors quantify the impact of state information on system dynamics under various assumptions. Burnetas and Economou BIB007 consider an M/M/1 queue with setup times. In particular, when a new customer arrives to an empty system, the server requires an exponentially distributed time with rate θ before beginning service. At time t, the state of the system is described by the pair (N(t), I(t)), where N(t) is the number of customers in the system and I(t) = 0 or 1 is the state of the server (idle or busy, respectively). Customers may be exposed to different levels of information about the system, corresponding to four cases: (i) fully observable, where customers observe both N(t) and I(t); (ii) almost observable, where customers observe only N(t); (iii) almost unobservable, where customers observe only I(t); (iv) fully unobservable, where customers do not observe either I(t) or N(t). In all cases, customer equilibrium strategies are analyzed, as well as the stationary behavior in the system and the social benefit for all customers. Economou et al. BIB015 consider an extension of Burnetas and Economou BIB007 where both general service and general setup times are allowed. Economou and Kanta BIB010 assume that the waiting space is divided into compartments, to be served sequentially in increasing order, and joining customers may know either the compartment number (but not their position in the compartment that they join) or their position within a compartment (but not the compartment number). Both information levels correspond to partial information since customers do not fully observe the system state in either case.
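The two-dimensional state (N(t), I(t)) of the setup-time queue makes it a small continuous-time Markov chain. The sketch below (illustrative: in Burnetas and Economou the thresholds are equilibrium objects derived from the reward-cost structure, whereas here they are fixed exogenously) computes the stationary distribution under threshold joining, using a tiny Gaussian-elimination solver to keep the example self-contained:

```python
def solve_linear(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def setup_queue_stationary(lam, mu, theta, T0, T1):
    """M/M/1 with setup: join iff fewer than T0 present while the server is in
    setup (state i=0), fewer than T1 while it is busy (state i=1)."""
    N = max(T0, T1)
    states = [(0, 0)] + [(n, 0) for n in range(1, T0 + 1)] + \
             [(n, 1) for n in range(1, N + 1)]
    idx = {s: k for k, s in enumerate(states)}
    m = len(states)
    Q = [[0.0] * m for _ in range(m)]

    def add(a, b, rate):                     # transition a -> b at the given rate
        Q[idx[a]][idx[b]] += rate
        Q[idx[a]][idx[a]] -= rate

    add((0, 0), (1, 0), lam)                 # arrival to an empty system starts setup
    for n in range(1, T0 + 1):
        if n < T0:
            add((n, 0), (n + 1, 0), lam)     # arrivals during setup
        add((n, 0), (n, 1), theta)           # setup completes, service begins
    for n in range(1, N + 1):
        if n < T1:
            add((n, 1), (n + 1, 1), lam)     # arrivals while busy
        add((n, 1), (n - 1, 1) if n > 1 else (0, 0), mu)   # service completion

    # Balance equations pi * Q = 0, with one equation replaced by sum(pi) = 1.
    A = [[Q[i][j] for i in range(m)] for j in range(m)]    # transpose of Q
    A[-1] = [1.0] * m
    b = [0.0] * (m - 1) + [1.0]
    return dict(zip(states, solve_linear(A, b)))

pi = setup_queue_stationary(lam=0.5, mu=1.0, theta=2.0, T0=2, T1=4)
```

From the stationary distribution one can then read off throughput, mean queue length, or expected customer utility under each of the four information regimes.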
For a frame of reference, if a customer knows both the compartment index and the compartment position, then the model reduces to the model in Naor BIB001 , whereas if neither is known then the model reduces to the model in Edelson and Hildebrand BIB002 . Economou and Kanta BIB009 and Wang and Zhang BIB014 assume that the server may break down and require repair. The time to repair is considered to be equal to 0 in the former and is exponentially distributed in the latter. The authors in those two papers compare two levels of information: (i) fully observable, where customers know both the queue length and the state of the server, and (ii) partially observable, where customers know only the queue length. Both papers compare equilibrium threshold balking strategies in their contexts. Timing and breadth. The question of when to make a delay announcement, and the extent to which information should be shared, have also been investigated in the literature. He and Down BIB013 rely on both heavy-traffic analysis and simulation to study performance in a queueing system where only a fraction of customers are informed about waiting times. Specifically, they consider two customer classes and two server pools. Dedicated customers in each class can only be served by one of the two pools, for example, because of a language requirement. A fraction of customers is flexible and may choose one of the two server pools depending on which has the shortest queue. He and Down BIB013 focus on the expected waiting time for both classes and demonstrate that "a little flexibility goes a long way" in that delay information (the queue length) significantly improves performance even when a small proportion of customers are informed about waiting times. They also address the question of information updating by considering, numerically, a setting where the mean waiting time is updated periodically, and customers use the most recent update in making their joining decisions.
They show that there could be significant degradation in performance if the delay information is not updated frequently enough, and the system may experience oscillation behavior because customers herd toward one queue for a period of time. Hu et al. also address the question of the breadth of the information shared. They consider a setting where only a fraction of customers are informed about the queue length in the system. Informed customers make their joining decisions based on the observed queue length. Uninformed customers make their joining decisions based on the expected waiting time in the system. The fraction of informed customers is assumed to be exogenous. Informed customers join the system in accordance with the threshold joining policy in an observable queue, as in Naor BIB001 . Uninformed customers randomize their joining decisions. Uninformed customers indirectly influence informed customers by influencing the distribution of the queue length in the system. The authors find that, in systems which are not under very low loads, informing a fraction of customers about real-time delay information increases either the throughput or the social welfare. Their results depend on both the offered load in the system and the joining behavior of uninformed customers. Relating their results to Chen and Frank BIB004 : they find that when the offered load is low enough, throughput decreases with the information. Conversely, if the offered load is high enough, then throughput increases with the information. However, in the intermediate region for the offered load, throughput is maximal if only a fraction of customers are informed. Also, while the standard view, as in Hassin , is that social welfare is always improved by revealing the queue, the authors demonstrate that when the offered load is high enough, it is optimal to have only a fraction of informed customers, i.e., social welfare does not always increase by revealing the queue length to everyone.
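The informed-fraction model just described reduces to a birth-death chain. In Hu et al. the uninformed customers' joining probability arises in equilibrium; in this sketch (all parameter values are mine) it is fixed exogenously, which is enough to see the throughput effect of the informed fraction p under a high offered load:

```python
# M/M/1 where a fraction p of customers is informed (join iff queue length
# n < K, a Naor-style threshold) and the rest join with fixed probability q.
# State-dependent arrival rate: lam_n = lam * (p * 1{n < K} + (1 - p) * q).
def throughput_with_informed_fraction(lam, mu, p, q, K, n_max=400):
    rates = [lam * (p * (n < K) + (1 - p) * q) for n in range(n_max)]
    weights, w = [1.0], 1.0
    for n in range(n_max - 1):
        w *= rates[n] / mu                 # birth-death stationary weights
        weights.append(w)
    Z = sum(weights)
    return sum(r * wt for r, wt in zip(rates, weights)) / Z

lam, mu, q, K = 1.5, 1.0, 0.5, 3
tp_none = throughput_with_informed_fraction(lam, mu, 0.0, q, K)   # nobody informed
tp_all  = throughput_with_informed_fraction(lam, mu, 1.0, q, K)   # everybody informed
tp_half = throughput_with_informed_fraction(lam, mu, 0.5, q, K)
```

With this overloaded instance (λ > μ), revealing the queue to more customers raises throughput, consistent with the high-load case in the survey; at low loads the comparison can reverse.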
In short, the presence of uninformed customers improves throughput under low offered loads and increases social welfare under high offered loads. Despite its practical importance, the question of timing of the announcements remains understudied, with the vast majority of papers assuming that the announcement is given immediately upon arrival of the delayed customer. At a high level, the trade-off is as follows: Postponing the announcement allows the firm to make a more informed decision about whether or not to admit the customer. With more information at its disposal because of the delay in making the announcement, the firm should benefit. However, postponing the announcement also means potentially keeping customers longer in queue. Thus, it is not clear whether a firm would want to resort to this postponement. Allon and Bassamboo BIB016 address this question in the context of an unobservable M/M/N queue; the model specifics are, otherwise, similar to Allon et al. . The authors focus on identifying conditions under which influential cheap talk emerges in equilibrium. To model the system with postponed announcements, they consider a two-stage system. The first stage, which models, for example, a call center's IVR, is an infinite-server queue which is essentially a delay station. The second stage is an M/M/N queue: Upon entry to this M/M/N queue, the firm makes a non-verifiable cheap talk type of delay announcement. The authors characterize the optimal admission policy for the firm in the second stage and demonstrate that it is of a threshold type where the threshold depends on the number of customers in the first stage. They also characterize the set of possible equilibria in the delayed cheap talk game and compare these to the non-delayed game. They show that such a comparison is complex: The firm may or may not benefit, i.e., create credibility and impact customer behavior, from delaying the delay information. Pender et al. 
also consider the impact of delaying the delay announcements. Specifically, they study the oscillation behavior observed in both He and Down BIB013 and Dong et al. BIB018 . They use two deterministic fluid models to examine the effect of providing customers with delayed delay information. In particular, they consider two systems: System I consists of two infinite-server queues where arriving customers receive delayed information about the queue length. The delay in information is quantified by a deterministic parameter Δ. Customers choose which queue to join depending on the delayed delay information that they receive, in accordance with a multinomial logit customer choice model. By analyzing the dynamics of the resulting fluid model, the authors demonstrate that there is asynchronous behavior between the two queues if Δ is large enough, i.e., there are systematic oscillations and no stable equilibrium. System II also consists of two infinite-server queues, but the delay information is in the form of a time-average of the queue-length information in a window of length Δ instead. In this case as well, the authors demonstrate a similar asynchronous behavior between the two queues if the window over which the average is taken is long enough. Roet-Green and Hassin also consider a setting where customers learn delayed information about the queue length in the system but, contrary to Pender et al. , the delay in information is assumed to be random (exponentially distributed), corresponding to the travel time needed for a customer to join the queue after the delay information is received. In other words, customer joining decisions are not instantaneous. A customer joining strategy is a vector that assigns a probability of traveling to each possible queue length. 
Because the travel time is not negligible, a customer who had decided to join a system based on "old" queue-length information may decide to balk upon arrival to the system if the real-time queue length is too long. Thus, customer decisions are made at two successive epochs. The authors investigate the structure of a symmetric Nash equilibrium. They find that customers often adopt a double-threshold strategy: Customers travel when the queue length is short, balk or mix between balking and traveling when the queue length is at an intermediate length, and travel when the queue length is long. The intuition is that a customer who observes an intermediate queue assumes that previous customers must have observed short queues, and are now on their way. Thus, the system's congestion is likely to soon increase and, consequently, the customer decides to balk. The intuition is reversed when a customer observes a long queue: In this case, that customer assumes that previous customers must have observed an intermediate queue and balked. Thus, the congestion in the system is likely to soon decrease, and the customer decides to join the queue. The authors also demonstrate that social welfare may be higher under the no-information model than under the delayed information model. Hu and Wang consider a setting where customers share queue-length information with each other. Because information is shared at the arrival epoch of an arriving customer, it constitutes lagged information for a future customer who wishes to join the system based on this "historical" information. Customers decide to join or balk based on previous information, but do not update their decisions upon arrival to the system because they do not observe the queue length in the second stage, unlike in Roet-Green and Hassin . Indeed, they observe the queue length only upon entering the system.
The authors investigate how this shared information structure affects throughput, expected queue length, and social welfare in the system, and draw comparisons between the full-information and no-information models. They find that (i) throughput under shared information is less than that under full information; (ii) the expected queue length under shared information is less than that under full information; and (iii) social welfare may be lower or higher under shared information, depending on the offered load in the system.
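The oscillation mechanism in the delayed-information fluid model of Pender et al., discussed above, can be reproduced with a few lines of Euler integration. This is a sketch in the spirit of their first model; the logit choice form and all parameter values are my own choices, and the existence of a critical lag (below which imbalance dies out, above which it persists) follows from linearizing the dynamics of the queue-length difference:

```python
import math

# Two infinite-server fluid queues; customers split via a multinomial-logit
# rule applied to queue lengths observed with a lag of delta time units.
def simulate(delta, lam=10.0, mu=1.0, theta=1.0, T=40.0, dt=0.01):
    lag = max(1, int(round(delta / dt)))
    h1, h2 = [6.0] * (lag + 1), [4.0] * (lag + 1)   # asymmetric start
    diffs = []
    steps = int(T / dt)
    for _ in range(steps):
        d1, d2 = h1[-1 - lag], h2[-1 - lag]          # delta-old queue lengths
        e1, e2 = math.exp(-theta * d1), math.exp(-theta * d2)
        p1 = e1 / (e1 + e2)                          # logit choice on stale info
        q1 = h1[-1] + dt * (lam * p1 - mu * h1[-1])
        q2 = h2[-1] + dt * (lam * (1 - p1) - mu * h2[-1])
        h1.append(q1); h2.append(q2)
        diffs.append(q1 - q2)
    tail = diffs[int(0.8 * steps):]                  # late-time behavior only
    return max(abs(x) for x in tail)

osc_small = simulate(delta=0.05)   # short lag: the imbalance dies out
osc_large = simulate(delta=1.0)    # long lag: the queues keep oscillating
```

Intuitively, with a long lag customers keep chasing the queue that *was* shorter, overshooting in both directions, which is exactly the herding-driven oscillation reported in the fluid analysis.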
|
Sharing delay information in service systems: a literature survey <s> Joint optimization: announcements and other controls <s> Motivated by practices in customer contact centers, we consider a system that offers two modes of service: real-time and postponed with a delay guarantee. Customers are informed of anticipated delays and select their preferred option of service. The resulting system is a multiclass, multiserver queueing system with state-dependent arrival rates. We propose an estimation scheme for the anticipated real-time delay that is asymptotically correct, and a routing policy that is asymptotically optimal in the sense that it minimizes real-time delay subject to the deadline of the postponed service mode. We also show that our proposed state-dependent scheme performs better than a system in which customers make decisions based on steady-state waiting-time information. Our results are derived using an asymptotic analysis based on "many-server" limits for systems with state-dependent parameters. <s> BIB001 </s> Sharing delay information in service systems: a literature survey <s> Joint optimization: announcements and other controls <s> Organizations worldwide use contact centers as an important channel of communication and transaction with their customers. This paper describes a contact center with two channels, one for real-time telephone service, and another for a postponed call-back service offered with a guarantee on the maximum delay until a reply is received. Customers are sensitive to both real-time and call-back delay and their behavior is captured through a probabilistic choice model. The dynamics of the system are modeled as anM/M/N multiclass system. We rigorously justify that as the number of agents increases, the system's load approaches its maximum processing capacity. 
Based on this observation, we perform an asymptotic analysis in the many-server, heavy traffic regime to find an asymptotically optimal routing rule, characterize the unique equilibrium regime of the system, approximate the system performance, and finally, propose a staffing rule that picks the minimum number of agents that satisfies a set of operational constraints on the performance of the system. <s> BIB002 </s> Sharing delay information in service systems: a literature survey <s> Joint optimization: announcements and other controls <s> We study how to use delay announcements to manage customer expectations while allowing the firm to prioritize among customers with different sensitivities to time and value. We examine this problem by developing a framework which characterizes the strategic interaction between the firm and heterogeneous customers. When the firm has information about the state of the system, yet lacks information on customer types, delay announcements play a dual role: they inform customers about the state of the system, while they also have the potential to elicit information on customer types based on their response to the announcements. The tension between these two goals has implications to the type of information that can be shared credibly. To explore the value of the information on customer types, we also study a model where the firm can observe customer types. We show that having information on the customer type may improve or hurt the credibility of the firm. While the creation of credibility increases the firm's profit, the loss of credibility does not necessarily hurt its profit. <s> BIB003
|
Because delay announcements are levers of control in the system, it is natural to investigate how a manager may jointly optimize the announcements with other levers of control, such as staffing and scheduling decisions. Armony and Maglaras BIB001 BIB002 study joint routing and delay announcement decisions in the context of a call center which offers a call-back option to delayed customers. Specifically, callers are informed, upon arrival, of their predicted waiting time for real-time service, and a delay guarantee for postponed service. There is a continuum of delay-sensitive customer types; customers assign a utility to joining each queue and join the queue with the highest utility. The problem is how to provide accurate delay estimates and decide on an accompanying routing rule which guarantees that the postponed service is offered within the specified deadline. This problem is analytically difficult to solve, primarily because future arrivals from the postponed service may affect the waiting times of customers who are already in queue. Thus, the authors focus on the many-server heavy-traffic Halfin-Whitt regime instead. Under this regime, the authors show that using a local version of Little's law, i.e., announcing the queue length encountered upon arrival divided by the arrival rate, is asymptotically consistent (it becomes accurate in large systems) under a threshold-type routing rule which is asymptotically compliant (satisfying the delay guarantee constraint). Specifically, the manager gives priority to real-time service customers, so long as the queue length for the postponed service does not exceed a given threshold. While Armony and Maglaras BIB002 focuses on steady-state delay information, Armony and Maglaras BIB001 considers state-dependent delay information instead. 
In comparing the performance of the system with steady-state or state-dependent delay information, the authors show that state-dependent information increases resource utilization while improving the quality of service for real-time service. Yu et al. BIB003 also consider a setting where a profit-maximizing firm uses the announcements in conjunction with optimizing a routing rule, but where customer types are unobservable to the manager. Because customers are heterogeneous in both their delay costs and the values drawn from service, the firm may gain from customer segmentation through a priority service discipline. There is information asymmetry in the model: While the firm has private information about the congestion level in the system, customers have private information about their types. Since information on customer types is not observable by the firm, the announcements play a dual role: They inform customers about upcoming (expected) delays, and they are a means of eliciting information about customer types. In other words, the priority discipline used by the firm depends on the announcements given. The authors examine the ability of the firm to sustain an equilibrium with influential cheap talk in the above setting, and distinguish between two cases, depending on whether the two customer classes considered have homogeneous or heterogeneous holding costs. In the homogeneous case, they show that the firm can achieve its unconstrained first-best profit, where it has both full information and full control over customers, through the provision of delay announcements. In particular, a partial segmentation of the customer population may be sufficient to achieve maximal profit. Moreover, under certain conditions, not differentiating customers at all may be the profit-maximizing strategy. In the heterogeneous case, the firm can no longer achieve its first-best through the announcements. 
Nevertheless, it can improve its profits by giving priority to customers who receive the highest announcements. The authors also characterize babbling equilibria in the system, where no credible information is shared with customers so that the state of the system and the announcement given by the firm are independent; they also compare babbling equilibria to influential equilibria where the firm communicates credible information to customers. They find that providing credible delay information always increases the firm's profit, but may improve or hurt the expected total customer utility. Ibrahim also takes the view that the announcements can be used as a control tool which can be optimized jointly with other controls. In particular, the focus there is on a queueing system where the number of servers is random. This setting arises in sharing-economy applications, for example, because of the self-scheduling behavior of work-from-home call center agents. Because agents show up at random, there are congested periods in the system. Because of this congestion, the abandonment distribution plays an important role. In particular, it can be controlled, via delay announcements, to alleviate the cost of self-scheduling. The author studies how to control the announcements, along with other tools, namely the compensation offered to agents and the staffing level in the system, in order to minimize costs.
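The local Little's-law announcement and the threshold routing rule described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation; the function names and the threshold convention are ours.

```python
# Minimal sketch (not the authors' implementation) of the local Little's-law
# announcement and the threshold routing rule discussed above. Function
# names and the threshold convention are illustrative assumptions.

def les_announcement(queue_length: int, arrival_rate: float) -> float:
    """Local Little's law: predicted wait = queue length seen on arrival
    divided by the arrival rate (asymptotically consistent in large systems)."""
    return queue_length / arrival_rate

def next_class_to_serve(realtime_queue: int, postponed_queue: int,
                        threshold: int) -> str:
    """Give priority to real-time customers as long as the postponed-service
    queue does not exceed the threshold (the delay-guarantee safeguard)."""
    if postponed_queue > threshold:
        return "postponed"
    return "realtime" if realtime_queue > 0 else "postponed"
```

For example, a caller who sees 12 customers ahead in a system with arrival rate 3 per minute would be quoted a 4-minute wait.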
|
Sharing delay information in service systems: a literature survey <s> Empirical studies <s> An algorithm is developed to rapidly compute approximations for all the standard steady-state performance measures in the basic call-center queueing model M/GI/s/r+GI, which has a Poisson arrival process, independent and identically distributed (IID) service times with a general distribution, s servers, r extra waiting spaces and IID customer abandonment times with a general distribution. Empirical studies of call centers indicate that the service-time and abandon-time distributions often are not nearly exponential, so that it is important to go beyond the Markovian M/M/s/r+M special case, but the general service-time and abandon-time distributions make the realistic model very difficult to analyze directly. The proposed algorithm is based on an approximation by an appropriate Markovian M/M/s/r+M(n) queueing model, where M(n) denotes state-dependent abandonment rates. After making an additional approximation, steady-state waiting-time distributions are characterized via their Laplace transforms. Then the approximate distributions are computed by numerically inverting the transforms. Simulation experiments show that the approximation is quite accurate. The overall algorithm can be applied to determine desired staffing levels, e.g., the minimum number of servers needed to guarantee that, first, the abandonment rate is below any specified target value and, second, that the conditional probability that an arriving customer will be served within a specified deadline, given that the customer eventually will be served, is at least a specified target value. <s> BIB001 </s> Sharing delay information in service systems: a literature survey <s> Empirical studies <s> We model the decision-making process of callers in call centers as an optimal stopping problem. After each waiting period, a caller decides whether to abandon a call or continue to wait. 
The utility of a caller is modeled as a function of her waiting cost and reward for service. We use a random-coefficients model to capture the heterogeneity of the callers and estimate the cost and reward parameters of the callers using the data from individual calls made to an Israeli call center. We also conduct a series of counterfactual analyses that explore the effects of changes in service discipline on resulting waiting times and abandonment rates. Our analysis reveals that modeling endogenous caller behavior can be important when major changes such as a change in service discipline are implemented and that using a model with an exogenously specified abandonment distribution may be misleading. This paper was accepted by Assaf Zeevi, stochastic models and simulation. <s> BIB002 </s> Sharing delay information in service systems: a literature survey <s> Empirical studies <s> Credible queueing models of human services acknowledge human characteristics. A prevalent one is the ability of humans to abandon their wait, for example while waiting to be answered by a telephone agent, waiting for a physician's checkup at an emergency department, or waiting for the completion of an internet transaction. Abandonments can be very costly, to either the service provider (a forgone profit) or the customer (deteriorating health after leaving without being seen by a doctor), and often to both. Practically, models that ignore abandonment can lead to either over- or under-staffing; and in well-balanced systems (e.g., well-managed telephone call centers), the "fittest (needy) who survive" and reach service are rewarded with surprisingly short delays. Theoretically, the phenomenon of abandonment is interesting and challenging, in the context of Queueing Theory and Science as well as beyond (e.g., Psychology). Last, but not least, queueing models with abandonment are more robust and numerically stable, when compared against their abandonment-ignorant analogues. 
For our relatively narrow purpose here, abandonment of customers, while queueing for service, is the operational manifestation of customer patience, perhaps impatience, or (im)patience for short. This (im)patience is the focus of the present paper. It is characterized via the distribution of the time that a customer is willing to wait, and its dynamics are characterized by the hazard-rate of that distribution. We start with a framework for comprehending impatience, distinguishing the times that a customer expects to wait, is required to wait (offered wait), is willing to wait (patience time), actually waits and felt waiting. We describe statistical methods that are used to infer the (im)patience time and offered wait distributions. Then some useful queueing models, as well as their asymptotic approximations, are discussed. In the main part of the paper, we discuss several "data-based pictures" of impatience. Each "picture" is associated with an important phenomenon. Some theoretical and practical problems that arise from these phenomena, and existing models and methodologies that address these problems, are outlined. The problems discussed cover statistical estimation of impatience, behavior of overloaded systems, dependence between patience and service time, and validation of queueing models. We also illustrate how impatience changes across customers (e.g., VIP vs. regular customers), during waiting (e.g., in response to announcements) and through phases of service (e.g., after experiencing the answering machine over the phone). Our empirical analysis draws data from repositories at the Technion SEELab, and it utilizes SEEStat--its online Exploratory Data Analysis environment. SEEStat and most of our data are internet-accessible, which enables reproducibility of our research. 
<s> BIB003 </s> Sharing delay information in service systems: a literature survey <s> Empirical studies <s> We undertake an empirical study of the impact of delay announcements on callers’ abandonment behavior and the performance of a call center with two priority classes. A Cox regression analysis reveals that in this call center, callers’ abandonment behavior is affected by the announcement messages heard. To account for this, we formulate a structural estimation model of callers’ (endogenous) abandonment decisions. In this model, callers are forward-looking utility maximizers and make their abandonment decisions by solving an optimal stopping problem. Each caller receives a reward from service and incurs a linear cost of waiting. The reward and per-period waiting cost constitute the structural parameters that we estimate from the data of callers’ abandonment decisions as well as the announcement messages heard. The call center performance is modeled by a Markovian approximation. The main methodological contribution is the definition of an equilibrium in steady state as one where callers’ expectation of their waiting time, which affects their (rational) abandonment behavior, matches their actual waiting time in the call center, as well as the characterization of such an equilibrium as the solution of a set of nonlinear equations. A counterfactual analysis shows that callers react to longer delay announcements by abandoning earlier, that less patient callers as characterized by their reward and cost parameters react more to delay announcements, and that congestion in the call center at the time of the call affects caller reactions to delay announcements. <s> BIB004 </s> Sharing delay information in service systems: a literature survey <s> Empirical studies <s> In this paper, we explore the impact of delay announcements using an empirical approach by analyzing the data from a medium-sized call center. 
We first explore the question of whether delay announcements impact customers’ behavior using a nonparametric approach. The answer to this question appears to be ambiguous. We thus turn to investigate the fundamental mechanism by which delay announcements impact customer behavior, by constructing a dynamic structural model. In contrast to the implicit assumption made in the literature that announcements do not directly impact customers’ waiting costs, our key insights show that delay announcements not only impact customers’ beliefs about the system but also directly impact customers’ waiting costs. In particular, customers’ per-unit waiting cost decreases with the offered waiting times associated with the announcements. The results of our counterfactual analysis show that it may not be necessary to provide announcements with very fine granularity. This paper was accepted by Yossi Aviv, operations management. <s> BIB005 </s> Sharing delay information in service systems: a literature survey <s> Empirical studies <s> We explore whether customers are loss averse in time and how the amount of delay information available may impact such reference-dependent behavior by conducting a field experiment at a call center. Our results show that customers exhibit loss aversion regardless of the availability or accuracy of the delay information. While delay announcements may not alter the fact that customers are loss averse, they do seem to impact the reference points customers use when the announcements are accurate. However, when those announcements are not accurate, customers may completely disregard them. <s> BIB006
|
The literature above is analytical in nature. The recent availability of granular data, for example, at the call-by-call level in call centers, has made it possible to study changes in customer behavior in response to the announcements. We now recap the main results from those papers. Early empirical evidence which illustrates how customers update their patience times in response to delay announcements, in call centers, can be found in Mandelbaum and Zeltyn BIB003 and Feigin . Akşin et al. BIB004 undertake a more detailed empirical study to explore the impact of the announcements on customer behavior and, in turn, on system performance. The authors begin by providing empirical evidence, using a Cox regression analysis, substantiating the impact of the announcements on the abandonment behavior of (call center) customers. Their data set has two priority classes, and the announcements are equal to the queue position or the elapsed waiting time of the longest waiting customer; they are also made sequentially over time. The study reveals that both the composition and sequence of the announcements have an impact on customer abandonment behavior, and that customers who receive longer announcements, or see a deteriorating delay condition (increasing announcements during their wait), abandon earlier. The impact of the announcements is also affected by the priority class of the customer. In order to explore the operational impact of the announcements, the authors use a structural estimation approach: They model callers' abandonment decisions as in the optimal stopping time model introduced in Akşin et al. BIB002 . Specifically, time is divided into periods, and a customer makes a decision on whether or not to abandon at the beginning of each period. Customers are heterogeneous in both the rewards that they receive from service and their per-unit waiting costs (both of these are drawn from lognormal distributions). 
The announcements received impact the abandonment distribution of callers which, in turn, impacts their decisions on staying or reneging, sequentially over time. The parameters of that endogenous model for caller abandonment are estimated from data, for each priority class. In order to study the impact of the announcements, the authors assume a setting where customers receive only one announcement upon arrival. By relying on the approximation in Whitt BIB001 , they characterize the equilibrium that arises in the system in steady state, where the equilibrium is defined as one where the distribution of waiting times based on the optimal stopping time model coincides with the distribution of the waiting time using the approximation from Whitt BIB001 . Through a simulation study, Akşin et al. BIB004 then study the operational impact of the announcements. Their main conclusions are as follows: (i) delay information helps customers make better decisions in the sense that callers who receive a long (short) delay announcement abandon more and faster (less and slower); (ii) the impact of the announcements is strongest when the state of the system is congested; and (iii) the increased granularity of the wait-time announcement (exact queue length position vs. range for the number in queue) leads to a smoother change in caller behavior. Yu et al. BIB005 also adopt an empirical approach in studying the impact of delay announcements on customer patience. They begin by introducing the concepts of informative and influential announcements. An informative announcement is one that carries information about the current congestion level in the system, i.e., one where longer delays do indeed correspond to larger announcements. An influential announcement is one where the patience of customers changes in response to the announcements. 
By statistically comparing the survival distributions of customers, the authors find that the impact of the announcements is ambiguous: Some announcements are influential and/or informative, whereas others are not. This prompted the authors to undertake a deeper investigation into the dynamics of the performance impact of the announcements; they did so by relying on a structural estimation approach. The structural model is as follows: Customers may return multiple times and, at each return, receive multiple delay announcements during their wait. At each announcement epoch, the caller revisits their decision of staying until service or reneging. Customers are heterogeneous, but their heterogeneity is modeled through their cost-reward ratio rather than separately through their service rewards and waiting costs. The cost-reward ratios and variance of idiosyncratic shocks are then estimated from data. The authors consider two models: (i) a base model where customers update their beliefs about offered waits using the announcements received; (ii) a refined model where not only customer beliefs but also the waiting costs of customers are impacted by the announcements. The authors find that their second model explains the ambiguous impact on customer impatience observed earlier in their data analysis. In particular, they show that while the cost-reward ratio decreases in the offered wait associated with the announcements ("I waited so long already, so why not wait a little longer?"), the variance of the idiosyncratic shocks increases. This dual effect explains the nontrivial impact of the announcements on customer behavior. The authors then explore, through a simulation study, what managerial implications can be drawn from their analysis. 
In particular, they find that providing delay announcements leads to an increase in the surplus of customers (surplus is equal to reward minus waiting cost), and that less refined delay information (in the form of three signals on the congestion of the system) may lead to higher customer surplus than more granular information. Yu et al. BIB006 undertake a field experiment in an Israeli bank's call center to explore the loss aversion of customers in time, and its dependence on the delay information available. Specifically, customers who receive delay announcements typically form a reference point based on the announcement received. If the actual waiting time experienced is smaller than that reference point, then the time difference is considered a gain. If the actual waiting time experienced is larger, then the time difference is considered a loss. Loss aversion means that customers value lost time more than they value gained time. Customers are either provided with accurate, inaccurate, or no announcements. By using a structural model to infer the customers' value of time (the abandonment behavior is modeled through an optimal stopping time problem), the authors find that customers indeed exhibit loss aversion, and that this is independent of the correctness of the delay information given. (Loss aversion is measured through an increase in the per-unit waiting cost after the announcement.) However, the accuracy of the delay announcement does have an impact on the reference point formed. Specifically, with accurate information, the reference point coincides with the delay information given, whereas with inaccurate information, customers use the observed average delay as a reference point instead. This contradicts the standard viewpoint that firms should give an inaccurate but high announcement to make the customers "feel better about their waits." Indeed, the analysis suggests that customers may disregard such inaccurate announcements but retain their loss aversion. 
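A stylized version of these structural models, with a per-period stay-or-abandon decision, an announcement-dependent belief, and a loss-averse waiting cost, can be sketched as follows. The functional forms, the geometric service belief, and all parameter names are illustrative assumptions, not the estimated models from the papers above.

```python
# Stylized sketch of the per-period abandonment decision in the structural
# models above. The geometric service belief, the loss-aversion multiplier,
# and all parameter names are illustrative assumptions, not estimates.

def per_period_cost(base_cost: float, elapsed: float, announced: float,
                    loss_aversion: float = 2.0) -> float:
    """Waiting beyond the announced wait (the reference point) is felt as a
    loss and is weighted more heavily than time within it."""
    return base_cost * (loss_aversion if elapsed > announced else 1.0)

def stays(reward: float, base_cost: float, p_served: float,
          elapsed: float, announced: float) -> bool:
    """Optimal-stopping threshold under a stationary geometric belief: the
    caller is answered each period with probability p_served, so she keeps
    waiting iff the expected one-period gain covers the current waiting cost.
    Announcements enter through p_served (belief) and announced (reference)."""
    return p_served * reward >= per_period_cost(base_cost, elapsed, announced)
```

Under a stationary belief, the infinite-horizon stopping problem reduces to the one-period comparison above; a caller who is past the announced wait faces the inflated "loss" cost and therefore abandons at states she would otherwise have tolerated.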
In a related paper, Webb et al. rely on a proportional hazards model for the hazard rate of the abandonment distribution instead. The covariates used in that model include the gain and loss in time effects due to the announcements. In particular, the announcement creates a reference point which is the expectation of the wait time for service. The authors find that a model in which customers react to the announced value of the first announcement, and in which reference points are induced by the first two announcements, is the best fit to their data. They also find that customers are loss averse, that they fall for sunk cost effects, and that a higher announcement leads to more abandonment. Finally, they study implications for staffing decisions and find that firms who take behavioral implications of the announcements into account can significantly reduce their staffing levels.
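The proportional-hazards formulation with an announcement-induced reference point can be written compactly; the covariate definitions and coefficient values below are illustrative assumptions, not the estimates reported by Webb et al.

```python
import math

# Illustrative proportional-hazards abandonment model with gain/loss-in-time
# covariates induced by an announcement. Coefficient names and values are
# made up for illustration, not the estimates in the paper.

def abandonment_hazard(t: float, baseline, beta_loss: float,
                       beta_gain: float, announced: float) -> float:
    """h(t) = h0(t) * exp(beta_loss * loss + beta_gain * gain), where the
    announced wait acts as the reference point: time waited past it counts
    as a loss, time still ahead of it as a gain."""
    loss = max(t - announced, 0.0)
    gain = max(announced - t, 0.0)
    return baseline(t) * math.exp(beta_loss * loss + beta_gain * gain)
```

With beta_loss > 0, the abandonment hazard rises once the experienced wait exceeds the announced one, capturing loss aversion around the reference point.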
|
Sharing delay information in service systems: a literature survey <s> Accuracy and performance impact <s> An algorithm is developed to rapidly compute approximations for all the standard steady-state performance measures in the basic call-center queueing model M/GI/s/r+GI, which has a Poisson arrival process, independent and identically distributed (IID) service times with a general distribution, s servers, r extra waiting spaces and IID customer abandonment times with a general distribution. Empirical studies of call centers indicate that the service-time and abandon-time distributions often are not nearly exponential, so that it is important to go beyond the Markovian M/M/s/r+M special case, but the general service-time and abandon-time distributions make the realistic model very difficult to analyze directly. The proposed algorithm is based on an approximation by an appropriate Markovian M/M/s/r+M(n) queueing model, where M(n) denotes state-dependent abandonment rates. After making an additional approximation, steady-state waiting-time distributions are characterized via their Laplace transforms. Then the approximate distributions are computed by numerically inverting the transforms. Simulation experiments show that the approximation is quite accurate. The overall algorithm can be applied to determine desired staffing levels, e.g., the minimum number of servers needed to guarantee that, first, the abandonment rate is below any specified target value and, second, that the conditional probability that an arriving customer will be served within a specified deadline, given that the customer eventually will be served, is at least a specified target value. <s> BIB001 </s> Sharing delay information in service systems: a literature survey <s> Accuracy and performance impact <s> In this paper, we consider two basic multi-class call center models, with and without reneging. Customer classes have different priorities. 
The content of different types of calls is assumed to be similar allowing their service times to be identical. We study the problem of announcing delays to customers upon their arrival. For the simplest model without reneging, we give a method to estimate virtual delays that is used within the announcement step. For the second model, we first build the call center model incorporating reneging. The model takes into account the change in customer behavior that may occur when delay information is communicated to them. In particular, it is assumed that customer reneging is replaced by balking that depends on the state of the system in this case. We develop a method based on Markov chains in order to estimate virtual delays of new arrivals for this model. Finally, some practical issues concerning delay announcement are discussed. <s> BIB002 </s> Sharing delay information in service systems: a literature survey <s> Accuracy and performance impact <s> In this paper, we analyze a call center with impatient customers. We study how informing customers about their anticipated delays affects performance. Customers react by balking upon hearing the delay announcement and may subsequently renege, particularly if the realized waiting time exceeds the delay that has originally been announced to them. The balking and reneging from such a system are a function of the delay announcement. Modeling the call center as an M/M/s + M queue with endogenized customer reactions to announcements, we analytically characterize performance measures for this model. The analysis allows us to explore the role announcing different percentiles of the waiting time distribution, i.e., announcement coverage, plays on subsequent performance in terms of balking and reneging. Through a numerical study, we explore when informing customers about delays is beneficial and what the optimal coverage should be in these announcements. 
We show how managers of a call center with delay announcements can control the trade-off between balking and reneging through their choice of announcements to be made. <s> BIB003 </s> Sharing delay information in service systems: a literature survey <s> Accuracy and performance impact <s> Credible queueing models of human services acknowledge human characteristics. A prevalent one is the ability of humans to abandon their wait, for example while waiting to be answered by a telephone agent, waiting for a physician's checkup at an emergency department, or waiting for the completion of an internet transaction. Abandonments can be very costly, to either the service provider (a forgone profit) or the customer (deteriorating health after leaving without being seen by a doctor), and often to both. Practically, models that ignore abandonment can lead to either over- or under-staffing; and in well-balanced systems (e.g., well-managed telephone call centers), the "fittest (needy) who survive" and reach service are rewarded with surprisingly short delays. Theoretically, the phenomenon of abandonment is interesting and challenging, in the context of Queueing Theory and Science as well as beyond (e.g., Psychology). Last, but not least, queueing models with abandonment are more robust and numerically stable, when compared against their abandonment-ignorant analogues. For our relatively narrow purpose here, abandonment of customers, while queueing for service, is the operational manifestation of customer patience, perhaps impatience, or (im)patience for short. This (im)patience is the focus of the present paper. It is characterized via the distribution of the time that a customer is willing to wait, and its dynamics are characterized by the hazard-rate of that distribution. 
We start with a framework for comprehending impatience, distinguishing the times that a customer expects to wait, is required to wait (offered wait), is willing to wait (patience time), actually waits and felt waiting. We describe statistical methods that are used to infer the (im)patience time and offered wait distributions. Then some useful queueing models, as well as their asymptotic approximations, are discussed. In the main part of the paper, we discuss several "data-based pictures" of impatience. Each "picture" is associated with an important phenomenon. Some theoretical and practical problems that arise from these phenomena, and existing models and methodologies that address these problems, are outlined. The problems discussed cover statistical estimation of impatience, behavior of overloaded systems, dependence between patience and service time, and validation of queueing models. We also illustrate how impatience changes across customers (e.g., VIP vs. regular customers), during waiting (e.g., in response to announcements) and through phases of service (e.g., after experiencing the answering machine over the phone). Our empirical analysis draws data from repositories at the Technion SEELab, and it utilizes SEEStat--its online Exploratory Data Analysis environment. SEEStat and most of our data are internet-accessible, which enables reproducibility of our research. <s> BIB004 </s> Sharing delay information in service systems: a literature survey <s> Accuracy and performance impact <s> Motivated by the recent interest in making delay announcements in large service systems, such as call centers, we investigate the accuracy of announcing the waiting time of the last customer to enter service (LES). In practice, customers typically respond to delay announcements by either balking or by becoming more or less impatient, and their response alters system performance. 
We study the accuracy of the LES announcement in single-class, multiserver Markovian queueing models with announcement-dependent customer behavior. We show that, interestingly, even in this stylized setting, the LES announcement may not always be accurate. This motivates the need to study its accuracy carefully and to determine conditions under which it is accurate. Since the direct analysis of the system with customer response is prohibitively difficult, we focus on many-server, heavy-traffic analysis instead. We consider the quality-and-efficiency-driven and efficiency-driven many-server, heavy-traffic regimes and prove, under b... <s> BIB005
|
One fundamental idea is that the announcements help by deterring the most impatient customers from waiting, i.e., by converting late abandonment into early balking. Indeed, since those customers would have abandoned anyway, inciting them to abandon immediately upon arrival, that is, to balk, should help in reducing congestion in the system while not affecting throughput. Replacing exponential reneging with balking. We begin with the case where customers who receive delay information consider it to be truthful, know their personal preferences, and are able to decide, upon arrival, whether they would be willing to wait at all. In this case, all reneging is replaced by balking because of the announcement. Whitt adopts this view, and compares two single-class M/M/s/r queueing systems (where r denotes the maximum queue length allowed) with reneging and balking. In particular, Model 1 assumes that customers balk with a given probability and otherwise join the system and may renege after some time. Model 2 assumes that customers are given system-state information upon arrival, for example, the queue length. In Model 2, all reneging is replaced with balking at arrival. Because of the dynamics of customer response, and conditional on the system state seen upon arrival, a customer does not take other customers' actions into account when making her own decision to join or balk. By analyzing general birth-and-death processes, with announcement-dependent rates, Whitt shows that the number of customers in Model 1 is larger than in Model 2 in the likelihood-ratio stochastic ordering sense. In other words, the announcements lead to an improvement in performance by replacing reneging after some delay with balking upon arrival. Jouini et al. BIB002 extend Whitt by considering a system with two customer classes and a non-preemptive priority service discipline.
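The flavor of this comparison can be illustrated with a small numerical sketch. This is not Whitt's exact construction, and all parameters are hypothetical: Model 1 admits every arrival and lets waiting customers renege at rate theta, while Model 2 converts reneging into state-dependent balking (a customer joins only if her exponential patience exceeds her expected wait) and has no reneging afterwards.

```python
import math

def stationary_dist(birth, death, n_max):
    """Stationary distribution of a birth-and-death chain on {0,...,n_max}
    via the standard product-form detailed-balance recursion."""
    w = [1.0]
    for n in range(1, n_max + 1):
        w.append(w[-1] * birth(n - 1) / death(n))
    total = sum(w)
    return [x / total for x in w]

# Hypothetical parameters: s servers, arrival rate lam, service rate mu,
# patience rate theta, and a truncation cap for the state space.
lam, s, mu, theta, cap = 8.0, 5, 1.0, 0.5, 60

# Model 1: everyone joins; each waiting customer reneges at rate theta.
pi1 = stationary_dist(
    birth=lambda n: lam,
    death=lambda n: min(n, s) * mu + max(n - s, 0) * theta,
    n_max=cap)

# Model 2: reneging is converted into state-dependent balking -- an
# arrival seeing n in system joins only if her exp(theta) patience
# exceeds the expected wait (n - s + 1)/(s*mu); no reneging afterwards.
pi2 = stationary_dist(
    birth=lambda n: lam * math.exp(-theta * max(n - s + 1, 0) / (s * mu)),
    death=lambda n: min(n, s) * mu,
    n_max=cap)

mean1 = sum(n * p for n, p in enumerate(pi1))
mean2 = sum(n * p for n, p in enumerate(pi2))
print(f"E[N] with reneging (Model 1): {mean1:.3f}")
print(f"E[N] with balking  (Model 2): {mean2:.3f}")
```

The same product-form recursion underlies the announcement-dependent birth-and-death analyses discussed throughout this section.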
In a model where customers replace subsequent reneging with balking upon arrival, as in Whitt , the authors derive balking probabilities and moment expressions for the virtual waiting times of the high and low priority customers. In practice, delay announcements do not convert reneging entirely into balking BIB004 . Indeed, it seems more common that the most impatient customers balk in response to the information, while more patient customers elect to stay but update their patience levels, depending on the announcement. If the announcement is long, then there will be more balking, and less subsequent reneging, and vice versa. Thus, there is a trade-off between reneging and balking, based on the announcement. This is one of the main ideas in Jouini et al. BIB003 . The authors consider a delay announcement which is equal to a fixed percentile of the waiting time, conditional on the queue length seen upon arrival, and study how varying that percentile, or coverage β, impacts performance in the system. Jouini et al. BIB003 consider three models: Model 0 assumes that the delay information is exact, and arriving customers respond to that information by balking upon arrival if their patience falls below that threshold; there is no subsequent reneging in the system. This model is in the same spirit as Whitt , and it is argued that this is indeed a reasonable model when customers fully trust the information that they are given. Model 1 assumes no delay announcement, and that a higher proportion of customers balks upon arrival because of the lack of information; customers may later renege if their patience expires before reaching service. Model 2 introduces the idea of a coverage-based announcement, where the firm announces a given percentile of the waiting-time distribution.
In this model, customers update their patience based on the announcements that they receive: The updated patience rate, γ, is equal to a combination of their individual patience before the announcement, and the delay information received (later approximated by an exponential distribution for the analysis). Under an exponential assumption on the announcement-dependent abandonment, the authors rely on the analysis of birth-and-death models to analyze the performance impact of the announcements. For consistency, the announcement given must coincide with the fractile of the stationary delay distribution. Thus, an equilibrium analysis is needed, and the announcement-dependent abandonment rate is derived based on a fixed-point algorithm. This algorithm reveals the dependence of γ on β. Thus, varying β leads to different performance in the system. The authors find that, all else held constant, an announcement with more coverage leads to higher balking in lieu of late abandonment from the queue. However, through investigating the value of the "optimal" coverage (minimizing the balking probability, subject to a constraint on the reneging probability), an important insight is reached: More coverage, which is equivalent to more precise delay information, at the expense of a larger announcement, is not always better for the service provider. Indeed, this would depend on a host of factors, including the way in which customers react to the specific announcements that they receive. Non-exponential but smooth abandonment. Armony et al. go beyond the exponential assumption on the announcement-dependent abandonment distribution. Direct analysis is hard, and the authors rely on two approximation methods to study the resulting equilibrium in the system: (i) a deterministic fluid model and (ii) an iterative numerical algorithm, based on Whitt BIB001 , where general abandonment is approximated by Markovian abandonment with state-dependent rates.
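Both equilibrium computations just described share the same fixed-point structure: guess an announcement, let customers' patience respond to it, recompute the resulting delay, and repeat until the announcement is consistent with the delay it induces. The sketch below illustrates this in an M/M/s+M (Erlang-A) setting, using the mean wait as a simple stand-in for the announced fractile; the response function gamma_of is a hypothetical choice, not the one calibrated in the papers above.

```python
def erlang_a_mean_wait(lam, mu, s, gamma, cap=200):
    """Mean queueing delay in M/M/s+M via the birth-and-death stationary
    distribution and Little's law applied to the queue."""
    w = [1.0]
    for n in range(1, cap + 1):
        w.append(w[-1] * lam / (min(n, s) * mu + max(n - s, 0) * gamma))
    total = sum(w)
    pi = [x / total for x in w]
    mean_q = sum(max(n - s, 0) * p for n, p in enumerate(pi))
    return mean_q / lam  # E[W_q] averaged over all arrivals

# Hypothetical response: a longer announced delay w makes customers
# more impatient (their abandonment rate gamma increases with w).
def gamma_of(w, gamma0=0.5, a=1.0):
    return gamma0 + a * w

lam, mu, s = 12.0, 1.0, 10   # moderately overloaded system
w = 0.0
for _ in range(100):         # fixed-point iteration on the announcement
    w_new = erlang_a_mean_wait(lam, mu, s, gamma_of(w))
    if abs(w_new - w) < 1e-10:
        break
    w = w_new
print(f"Equilibrium announced delay: {w:.4f}")
print(f"Equilibrium patience rate:   {gamma_of(w):.4f}")
```

At the fixed point, the announced delay coincides with the delay produced by the announcement-dependent abandonment rate, which is the consistency requirement described above.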
The authors focus on the performance impact of making the LES delay announcement. By analyzing the fluid model, they derive conditions on customer response to guarantee the existence and uniqueness of that equilibrium. In the fluid model, LES coincides at equilibrium with a fixed delay announcement (FD), equal to the average equilibrium delay. This motivates the authors to also consider an FD announcement, and they use simulations to study the equilibrium behavior with both LES and FD announcements in the M/GI/n + GI model. They validate both approximation methods, and illustrate that the LES announcement is usually more effective, leading to smaller variance. Using the framework in Armony et al. , one can quantify the value of communicating delay information, for example, by comparing the equilibrium performance under announcements with performance in a system without announcements. This performance impact depends on the assumptions made on the way customers respond to the announcements. Armony et al. do not discuss the accuracy of the individual announcements, which involves quantifying the stochastic fluctuations around equilibrium. This is done, in a similar setting, in Ibrahim et al. BIB005 . The authors demonstrate that the LES announcement, with customer response to the announcements, is asymptotically accurate in both the quality-and-efficiency-driven and efficiency-driven regimes. A main technical issue in the analysis is demonstrating that the stochastic fluctuations around the equilibrium in the system (when it exists and is unique) would not drive the system out of that equilibrium, thus guaranteeing accuracy.
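A stylized way to see why LES and FD coincide in the fluid limit is through a throughput-balance equation for an overloaded fluid model: with announcement w and exponential announcement-dependent patience of rate γ(w), a fraction exp(-γ(w)·w) of arriving fluid survives the wait, and equilibrium requires λ·exp(-γ(w)·w) = sμ. The parameters and the response function below are hypothetical, chosen only so that a root exists.

```python
import math

# Throughput balance in an overloaded fluid model:
#   lam * exp(-gamma(w) * w) = s * mu  at the equilibrium delay w.
lam, s, mu = 12.0, 10, 1.0
gamma = lambda w: 0.5 + 0.5 * w      # hypothetical announcement response

g = lambda w: lam * math.exp(-gamma(w) * w) - s * mu

# g(0) = lam - s*mu > 0 and g is strictly decreasing in w >= 0,
# so bisection finds the unique equilibrium delay.
lo, hi = 0.0, 10.0
for _ in range(200):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
w_eq = (lo + hi) / 2
print(f"Fluid equilibrium delay (LES = FD): {w_eq:.4f}")
```

At this w_eq, announcing the last delay (LES) and announcing the fixed value w_eq (FD) induce the same fluid behavior, which is the coincidence exploited in the fluid analysis.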
|
Sharing delay information in service systems: a literature survey <s> Abandonment "jumps": <s> Data has revealed a noticeable impact of delay-time-related information on phone-customers; for example and somewhat surprisingly, delay announcements can abruptly increase the likelihood to abandon (hang up). Our starting point is that the latter phenomena can be used to support the control of queue lengths and delays. We do so by timing the announcements appropriately and determining the staffing levels accordingly. To this end, we model a service system as an overloaded GI/M/s + GI queue, in which we seek to minimize the number of servers, s, subject to quality-of-service constraints (e.g., fraction abandoning), while accounting for the instantaneous (hence discontinuous) impact of an announcement on the distribution (hazard rate) of customer patience. For tractability, our analysis is asymptotic as s increases indefinitely, and it is naturally efficiency-driven (namely the servers are highly busy, and hence essentially all customers are delayed in queue prior to service). This requires one to go beyon... <s> BIB001
|
Going beyond the fluid model. Armony et al. illustrate that the fluid model may not be accurate when the abandonment response to the announcements is not smooth, for example, when there is an announcement-dependent "jump" in abandonment, which is consistent with empirical evidence. Analyzing the system with such jumps necessitates going beyond the fluid approximation, i.e., a more refined approximation is needed. Such an approximation is presented in Huang et al. BIB001 . Because the announcements play a role in altering customer abandonment, it is conceivable that jointly optimizing the control of announcements along with the staffing level would lead to staffing levels that are different from those in the absence of announcements. Huang et al. BIB001 are the first to show this by considering an overloaded GI/M/s + GI queue where they jointly optimize the staffing level and the timing of the announcements, subject to quality-of-service constraints. The announcement-dependent hazard rate of the abandonment distribution is assumed to be discontinuous. In particular, they consider two types of delay announcements, corresponding to two types of responses. The first is similar to Armony et al. , where customers who hear an announcement upon arrival have a changed abandonment response at the point of the announcement, as well as balking upon arrival in response to the announcement. The second type of announcement is made during the waiting time, leading to an abrupt increase in the likelihood of abandonment at the announcement epoch. The objective of the paper is to quantify the impact of the non-smooth change in abandonment on system performance and operational decisions. To do so, the authors introduce an approximation based on scaling the patience-time distribution.
They substantiate the accuracy of their refined approximation, demonstrate that there is an O(√λ) reduction in the staffing level due to the announcements, and show that the optimal timing of the announcement coincides with the fluid offered waiting time.
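The order of this staffing reduction is easy to illustrate with square-root-safety staffing, s ≈ λ/μ + β√(λ/μ): if announcements allow a smaller safety coefficient, the savings grow like √λ. The coefficients β and δ below are hypothetical, not values derived in Huang et al. BIB001.

```python
import math

mu, beta, delta = 1.0, 1.0, 0.5   # hypothetical QoS coefficients

def staffing(lam, qos):
    """Square-root-safety staffing: offered load plus qos * sqrt(load)."""
    load = lam / mu
    return load + qos * math.sqrt(load)

# Doubling sqrt(lam) (i.e., quadrupling lam) doubles the staffing saved.
for lam in (100.0, 400.0, 1600.0):
    gap = staffing(lam, beta) - staffing(lam, beta - delta)
    print(f"lam = {lam:6.0f}: staffing saved by announcements ~ {gap:.1f}")
```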
|
Sharing delay information in service systems: a literature survey <s> Future research directions <s> We consider a memoryless queue in which the reward of service completion for an individual reduces to zero after some time. Customers, while comparing expected holding costs and the rewards have to decide if to join the system at all and if they do when to renege. We show that a unique Nash equilibrium exists in which each of the customers joins with some probability and reneges as soon as the reward is zero. <s> BIB001 </s> Sharing delay information in service systems: a literature survey <s> Future research directions <s> We propose a model for abandonments from a queue, due to excessive wait, assuming that waiting customers act rationally but without being able to observe the queue length. Customers are allowed to be heterogeneous in their preferences and consequent behavior. Our goal is to characterize customers' patience via more basic primitives, specifically waiting costs and service benefits: these two are optimally balanced by waiting customers, based on their individual cost parameters and anticipated waiting time. The waiting time distribution and patience profile then emerge as an equilibrium point of the system. The problem formulation is motivated by teleservices, prevalently telephone- and Internet-based. In such services, customers and servers are remote and queues are typically associated with the servers, hence queues are invisible to waiting customers. Our base model is the M/M/m queue, where it is shown that a unique equilibrium exists, in which rational abandonments can occur only upon arrival (zero or infinite patience for each customer). As such a behavior fails to capture the essence of abandonments, the base model is modified to account for unusual congestion or failure conditions. This indeed facilitates abandonments in finite time, leading to a nontrivial, customer dependent patience profile. 
Our analysis shows, quite surprisingly, that the equilibrium is unique in this case as well, and amenable to explicit calculation. <s> BIB002 </s> Sharing delay information in service systems: a literature survey <s> Future research directions <s> We consider a memoryless first-come first-served queue in which customers' waiting costs are increasing and convex with time. Hence, customers may opt to renege if service has not commenced after waiting for some time. We assume a homogeneous population of customers and we look for their symmetric Nash equilibrium reneging strategy. Besides the model parameters, customers are aware only, if they are in service or not, and they recall for how long they are have been waiting. They are informed of nothing else. We show that under some assumptions on customers' utility function, Nash equilibrium prescribes reneging after random times. We give a closed form expression for the resulting distribution. In particular, its support is an interval (in which it has a density) and it has at most two atoms (at the edges of the interval). Moreover, this equilibrium is unique. Finally, we indicate a case in which Nash equilibrium prescribes a deterministic reneging time. <s> BIB003 </s> Sharing delay information in service systems: a literature survey <s> Future research directions <s> Information about delays can enhance service quality in many industries. Delay information can take many forms, with different degrees of precision. Different levels of information have different effects on customers and therefore on the overall system. To explore these effects, we consider a queue with balking under three levels of delay information: no information, partial information (the system occupancy), and full information (the exact waiting time). We assume Poisson arrivals, independent exponential service times, and a single server. 
Customers decide whether to stay or balk based on their expected waiting costs, conditional on the information provided. We show how to compute the key performance measures in the three systems, obtaining closed-form solutions for special cases. We then compare the three systems. We identify some important cases where more accurate delay information improves performance. In other cases, however, information can actually hurt the provider or the customers. <s> BIB004 </s> Sharing delay information in service systems: a literature survey <s> Future research directions <s> A service encounter is an experience that extends over time. Therefore, its effective management must include the control of the timing of the delivery of each of the service's elements and the enhancement of the customer's experience between and during the delivery of the various elements. This paper provides a conceptual framework that links the duration of a service encounter to behaviors that have been shown to affect profitability. Analysis of the framework reveals a wide gap between the behavioral assumptions typically made in operations research (OR) and operations management (OM) models and the state of the art in the marketing and psychology literature. The central motivations behind this paper are (1) to help the OR and OM community bridge this gap by bringing to its attention recent findings from the behavioral literature that have implications for the design of queueing systems for service firms and (2) to identify opportunities for further research. <s> BIB005 </s> Sharing delay information in service systems: a literature survey <s> Future research directions <s> We model the decision-making process of callers in call centers as an optimal stopping problem. After each waiting period, a caller decides whether to abandon a call or continue to wait. The utility of a caller is modeled as a function of her waiting cost and reward for service.
We use a random-coefficients model to capture the heterogeneity of the callers and estimate the cost and reward parameters of the callers using the data from individual calls made to an Israeli call center. We also conduct a series of counterfactual analyses that explore the effects of changes in service discipline on resulting waiting times and abandonment rates. Our analysis reveals that modeling endogenous caller behavior can be important when major changes such as a change in service discipline are implemented and that using a model with an exogenously specified abandonment distribution may be misleading. This paper was accepted by Assaf Zeevi, stochastic models and simulation. <s> BIB006 </s> Sharing delay information in service systems: a literature survey <s> Future research directions <s> We explore whether customers are loss averse in time and how the amount of delay information available may impact such reference-dependent behavior by conducting a field experiment at a call center. Our results show that customers exhibit loss aversion regardless of the availability or accuracy of the delay information. While delay announcements may not alter the fact that customers are loss averse, they do seem to impact the reference points customers use when the announcements are accurate. However, when those announcements are not accurate, customers may completely disregard them. <s> BIB007 </s> Sharing delay information in service systems: a literature survey <s> Future research directions <s> In this paper, we explore the impact of delay announcements using an empirical approach by analyzing the data from a medium-sized call center. We first explore the question of whether delay announcements impact customers’ behavior using a nonparametric approach. The answer to this question appears to be ambiguous. We thus turn to investigate the fundamental mechanism by which delay announcements impact customer behavior, by constructing a dynamic structural model.
In contrast to the implicit assumption made in the literature that announcements do not directly impact customers’ waiting costs, our key insights show that delay announcements not only impact customers’ beliefs about the system but also directly impact customers’ waiting costs. In particular, customers’ per-unit waiting cost decreases with the offered waiting times associated with the announcements. The results of our counterfactual analysis show that it may not be necessary to provide announcements with very fine granularity. This paper was accepted by Yossi Aviv, operations management. <s> BIB008 </s> Sharing delay information in service systems: a literature survey <s> Future research directions <s> Unoccupied waiting feels longer than it actually is. Service providers recognize this psychological effect and commonly offer entertainment options in waiting areas. To alleviate the cost of offering these entertainment options, many service providers choose to cooperate in this investment while competing against each other on other service dimensions, a practice known as “co-opetition.” In this paper, we develop a parsimonious model of co-opetition in service industries with entertainment options. By comparing the case of co-opetition with two benchmarks (monopoly, and duopoly competition), we demonstrate that a co-opeting service provider can sometimes achieve a profit higher than that in the monopoly setting, especially when the capacity is costly, entertainment options are inexpensive, or customers are highly sensitive to waiting. Our numerical study suggests that on average, the profit under co-opetition can be 7.65% higher than that under monopoly, with a maximum of 77.40%. Such benefits, however, are not guaranteed. We show that as much as co-opetition facilitates cost sharing, it also intensifies price competition.
In designing the cost-allocation scheme, the pursuit of fairness may backfire and lead to even lower profits than those under duopoly competition. We further show that as the intensity of price competition increases, contrary to what one would expect, both service providers choose to charge higher service fees, albeit while providing higher entertainment levels. <s> BIB009
|
In this section, we identify some "macro-level" themes that we believe would be interesting to investigate in future research. Bridging the psychological and the operational. As mentioned in Bitran et al. BIB005 , there is a general need to narrow the gap between mathematical models of customer response in service systems, and the complex reality of human behavior and psychology. The body of literature devoted to analyzing customer response to the announcements generally assumes that changes in customer behavior arise from individual customers maximizing their expected utilities from service and waiting. Some papers have challenged whether such an approach is always appropriate. For example, Guo and Zipkin BIB004 indicate that relying on utility-based approaches may lead to counter-intuitive results, such as customers preferring more congested states. In the same spirit, Allon et al. indicate that customers may not be expected-utility maximizers and may, for example, prefer accuracy over no accuracy, or information over no information. To wit, extant experimental work from psychology and marketing studies offers important insight on how customers perceive and react to both having to wait for service, and to receiving delay announcements while waiting ( § 1.1). There remains ample opportunity to design more sophisticated models which incorporate such psychological features, to test the validity of those models with data, and to study implications on decision-making in the system. One recent work in that vein is Yu et al. BIB007 , which tests the loss aversion of customers (to waiting) by conducting a field experiment in the context of a call center. Webb et al. study implications on operational decision-making with similar behavioral features. Another relevant work, though not specifically related to delay announcements, is Yuan et al. BIB009 . 
In this paper, service providers share a common entertainment option, which alleviates the cost of waiting on their customers, but compete on other service dimensions. This duality between cooperation and competition is termed co-opetition. The authors demonstrate that a service provider's profit can increase when engaging in co-opetition. In other words, Yuan et al. BIB009 quantify how a psychological dimension, i.e., making the customer waiting experience more pleasant, can indeed influence traditional operational measures, such as the firm's profit. Further studies, in the same spirit, are interesting venues for future research. Alternative designs for delay announcements. In the literature on delay announcements, it is commonly assumed that a single delay announcement is given to customers, that the manager decides on whether or not to provide the announcement, and that the delay information is given immediately upon arrival. Recent work has begun challenging those assumptions , primarily on issues concerning the timing of the announcement, and how the announcements are actually diffused to the customer population, for example, whether this is done by the manager or by the customers themselves. Recent technological advances have made it possible for firms to obtain a wealth of data about individual customers, and to track the evolution of the service experience in real time. This opens up an opportunity for a better segmentation of (heterogeneous) customers, for example, via targeted delay announcements, and a study of the implications of such segmentation on performance in the system. Optimizing the granularity of the information shared with such heterogeneous customers, potentially sequentially during their stay, is an interesting topic for future research. Indeed, experimental evidence suggests that people value a sense of progress during their waiting times, which can be made possible through the announcements; for example, see Munichor and Rafaeli . 
Moreover, by targeting customers with (multiple) different announcements, the firm can incite different abandonment behaviors. While models for rational customer abandonment have been advanced in some papers ( BIB001 BIB003 BIB002 , etc.) and have been substantiated empirically in others BIB006 BIB008 , systems with endogenous abandonment, which is dependent on delay announcements, remain understudied in general. In a context with announcement-dependent abandonment, jointly optimizing the provision of announcements and the scheduling of those impatient customers (for example, in the spirit of ) would be possible. Further exploration of, for example, the design of a system with multiple announcements, the study of the dynamic impact of such sequential announcements on customer behavior, and the analysis of corresponding implications on the operational management of the system, and on various related objectives, remain interesting venues for future research. Toward building a service science. In this survey paper, we reviewed papers taking different approaches to the effective management of delay announcements in service systems. The overall objective of that rich body of work is to build a service science. With that goal in mind, it is important to systematically study different queueing models with various complexities, and to paint a complete picture of the impact and accuracy of delay announcements. Therefore, it is important to mention that several model extensions remain under-explored, despite their prevalence in practice. 
Here is a non-exhaustive list of such extensions: non-stationary and non-Poisson arrival models; alternative service disciplines (beyond FCFS); multiple classes with heterogeneous service rates and/or heterogeneous abandonment rates; queueing networks; queues where capacity is uncertain, for example, due to the self-scheduling behavior of agents; and queues where different levels of information are missing, for example, on service, arrival, and abandonment rates, and on customer types and classification.
|