Clock Synchronization in Wireless Sensor Networks: An Overview <s> Clock Offset Estimation via Particle Filtering <s> Brief review of linear algebra and linear systems brief review of probability theory brief review of statistics some basic concepts in estimation linear estimation in static systems linear dynamic systems with random inputs state estimation in discrete-time linear dynamic systems estimation for Kinematic models computational aspects of estimation extensions of discrete-time estimation continuous-time linear state estimation state estimation for nonlinear dynamic systems adaptive estimation and manoeuvering targets problem solutions. <s> BIB001 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Clock Offset Estimation via Particle Filtering <s> This paper presents a joint channel coefficient and time-delay tracking technique for code-division multiple-access (CDMA) systems. Due to the highly nonlinear nature of time delay estimation, an iterative nonlinear filtering algorithm, called the "unscented filter" (UF), is employed. The UF can provide a better alternative to nonlinear filtering than the conventional extended Kalman filter (EKF) since it avoids errors associated with linearization. The Cramer-Rao lower bound is derived for the estimator, and computer simulations show that it provides a more viable means for tracking time-varying amplitudes and delays in CDMA communication systems than estimators based on the EKF. <s> BIB002 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Clock Offset Estimation via Particle Filtering <s> A package for fragile articles is provided which includes a container having a bottom surface delimited by upright walls, and a plurality of article-loaded trays arranged in a compact, stacked, superposed relation and disposed within the container. Each tray has a plurality of article accommodating pockets separated by struts. The struts of a tray are in vertical alignment with the pockets of the trays disposed immediately above and below. The peripheries of the stacked trays are adapted to be vertically supported by the adjacent walls of the container thereby preventing sagging of certain of the peripheral pockets during storage or transporting of the loaded container. <s> BIB003 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Clock Offset Estimation via Particle Filtering <s> Wireless ad-hoc sensor networks have emerged as an interesting and important research area in the last few years. The applications envisioned for such networks require collaborative execution of a distributed task amongst a large set of sensor nodes. This is realized by exchanging messages that are time-stamped using the local clocks on the nodes. Therefore, time synchronization becomes an indispensable piece of infrastructure in such systems. For years, protocols such as NTP have kept the clocks of networked systems in perfect synchrony. However, this new class of networks has a large density of nodes and very limited energy resource at every node; this leads to scalability requirements while limiting the resources that can be used to achieve them. A new approach to time synchronization is needed for sensor networks.In this paper, we present Timing-sync Protocol for Sensor Networks (TPSN) that aims at providing network-wide time synchronization in a sensor network. The algorithm works in two steps. 
In the first step, a hierarchical structure is established in the network and then a pairwise synchronization is performed along the edges of this structure to establish a global timescale throughout the network. Eventually all nodes in the network synchronize their clocks to a reference node. We implement our algorithm on Berkeley motes and show that it can synchronize a pair of neighboring motes to an average accuracy of less than 20 μs. We argue that TPSN roughly gives a 2x better performance as compared to Reference Broadcast Synchronization (RBS) and verify this by implementing RBS on motes. We also show the performance of TPSN over small multihop networks of motes and use simulations to verify its accuracy over large-scale networks. We show that the synchronization accuracy does not degrade significantly with the increase in number of nodes being deployed, making TPSN completely scalable. <s> BIB004
In Figure 5, the k-th uplink and downlink delay observations corresponding to the k-th timing message exchange are assumed to be corrupted by random delays, which may follow any of several distributions such as Gaussian, exponential, Gamma, or Weibull. Given the observation samples, and since the clock offset value is constant, the clock offset is assumed to obey a Gauss-Markov dynamic state-space model (23) BIB002 . Under the Bayesian framework, an emergent technique for obtaining the posterior probability density function (PDF) is known as particle filtering (PF). PF is based on Monte Carlo simulations with sequential importance sampling (SIS). These methods allow for a complete representation of the posterior distribution of the states using sequential importance sampling and resampling for the various probability densities. Since the true posterior PDF embodies all the available statistical information about the estimates, PF is optimal in the sense that all the available information has been used. The posterior density of interest is the filtering density $p(x_k \mid y_{1:k})$. By computing the filtering density recursively, we do not need to keep track of the complete history of the states; from a storage point of view, the filtering density is therefore more parsimonious than the full posterior density function. If we know the filtering density, we can easily derive various estimates of the system's states, including means, modes, medians, and confidence intervals. We show below how the filtering density may be approximated using sequential importance sampling techniques.

Figure 7. The recursive computation of the filtering density.

The filtering density is estimated recursively in two stages: prediction and update (correction), as illustrated in Figure 7. In the prediction step, the filtering density is propagated into the future via the transition density as follows:

$$p(x_k \mid y_{1:k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid y_{1:k-1})\, dx_{k-1}.$$

The transition density $p(x_k \mid x_{k-1})$ is defined in terms of the probabilistic model governing the states' evolution (23) and the process noise statistics. The update stage involves the application of Bayes' rule when new data is observed BIB003 :

$$p(x_k \mid y_{1:k}) \propto p(y_k \mid x_k)\, p(x_k \mid y_{1:k-1}).$$

Because these densities cannot, in general, be computed in closed form, importance sampling draws samples from a distribution $q(x_{0:k} \mid y_{1:k})$, called a proposal distribution, from which we can easily sample. The selection of the proposal function is one of the most critical design issues in importance sampling algorithms. The closer the proposal is to the true posterior, the better the performance of the particle filter. It is often convenient to choose the proposal distribution to be the prior:

$$q(x_k \mid x_{0:k-1}, y_{1:k}) = p(x_k \mid x_{k-1}). \quad (29)$$

Again, we choose the stochastic model given by (23) as our model for the proposal distribution. Although it does not incorporate the most recent observations, this is the most common choice of proposal distribution since it is intuitive and can be implemented easily. It has the effect of simplifying the weight recursion (28) to:

$$w_k^{(i)} \propto w_{k-1}^{(i)}\, p(y_k \mid x_k^{(i)}). \quad (30)$$

A common problem with the SIS particle filter is the degeneracy phenomenon: after a few iterations, all but one particle have negligible weights. It has been shown BIB001 that the variance of the importance weights can only increase over time, and thus the degeneracy phenomenon cannot be avoided. A large number of samples are effectively removed from the sample set because their importance weights become numerically insignificant. To mitigate this degeneracy, a resampling stage may be used to eliminate samples with low importance weights and multiply samples with high importance weights.
A common heuristic used to decide when to resample is to first calculate the effective sample size $N_{\text{eff}}$, defined as:

$$N_{\text{eff}} = \frac{1}{\sum_i \left(w_k^{(i)}\right)^2},$$

and to resample whenever $N_{\text{eff}}$ falls below a chosen threshold. We have so far explained how to compute the importance weights sequentially and how to improve the sample set by resampling. The essential structure of the PF for clock offset estimation using the proposal function (29) can now be presented in terms of the following pseudo-code.

Step 1) Prediction: predict each particle $x_k^{(i)}$ via the state model (23).
Step 2) Measurement update: evaluate the weights according to the likelihood function as in (30) and normalize them.
Step 3) Resampling: when $N_{\text{eff}}$ falls below the threshold, draw a new sample set, where the probability of selecting sample $i$ is its normalized weight.
Step 4) Output: compute the weighted estimate of the clock offset.
Step 5) Continue: set $k \leftarrow k+1$ and iterate from Step 1.

Finally, we introduce the PF with Bootstrap Sampling (BS), an approach that integrates the PF with BS for estimating the clock offset. The basic idea is quite straightforward. In order to provide a large amount of observation data, we generate sampled observation data from the original observation data set by using the BS procedure, and then estimate the clock offset based on the PF. The important thing to check is how close the PDF of the sampled data is to the true PDF. When few observations are available, performance is limited by the finite number of observation samples, and the purpose of BS is to overcome this limitation in the presence of a reduced number of observations. BS generates additional data samples from the original data set by drawing at random with replacement, and each of the bootstrap samples is treated as new data. Based on BS, we thus enlarge the observation data set. Given a large number of new observation data, we can then approximate the clock offset by using the PF. The following pseudo-code describes the procedure for estimating the clock offset via the nonparametric bootstrap sampling method.
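To make the above recipe concrete, the following minimal NumPy sketch implements SIS with the prior proposal (29), the weight update (30), and $N_{\text{eff}}$-triggered resampling for a constant clock offset observed through random delays. It is a sketch of the general technique, not the authors' exact implementation: Gaussian delays are assumed for simplicity, and all numerical values (true offset, noise levels, particle count) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a constant clock offset observed through noisy
# delay measurements (Gaussian random delays assumed for simplicity).
true_offset = 2.5          # unknown clock offset to be estimated (s)
sigma_v = 0.01             # process-noise std of the Gauss-Markov state model
sigma_w = 0.5              # std of the random link delays (likelihood noise)
n_particles = 1000
n_steps = 50

observations = true_offset + sigma_w * rng.standard_normal(n_steps)

# Initialization: draw particles from a diffuse prior, uniform weights.
particles = rng.uniform(0.0, 5.0, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

def effective_sample_size(w):
    """N_eff = 1 / sum_i w_i^2 (for normalized weights)."""
    return 1.0 / np.sum(w ** 2)

for y in observations:
    # Step 1) Prediction: propagate particles through the state model,
    # which doubles as the (prior) proposal distribution (29).
    particles = particles + sigma_v * rng.standard_normal(n_particles)

    # Step 2) Measurement update: weight by the Gaussian likelihood, cf. (30).
    weights *= np.exp(-0.5 * ((y - particles) / sigma_w) ** 2)
    weights /= np.sum(weights)

    # Step 3) Resampling: combat degeneracy when N_eff drops too low.
    if effective_sample_size(weights) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

# Step 4) Output: posterior-mean estimate of the clock offset.
print("estimated offset:", np.sum(weights * particles))
```

Replacing `observations` with bootstrap resamples of the measured delays (drawn at random with replacement) before running the loop gives a rough stand-in for the PF-with-BS variant described above.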
Clock Synchronization in Wireless Sensor Networks: An Overview <s> Delay Measurement Time Synchronization for Wireless Sensor Networks (DMTS) [42] <s> Amino acid fermentation is conducted by fermenting bacterial cells in a culture medium in a fermentor and separating fermentation solution withdrawn from the fermentor into a solution containing said bacterial cells and a solution not containing bacterial cells by a cell separator. The solution containing said bacterial cells being circulated from said cell separator to said fermenter by circulating means to perform amino acid fermentation continuously, and bubbles being removed from said fermentation solution by a bubble separator before said fermentation solution is fed to said circulating means and said cell separator. <s> BIB001 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Delay Measurement Time Synchronization for Wireless Sensor Networks (DMTS) [42] <s> Abstract Recent advances in micro-electromechanical (MEMS) technology have led to the development of small, low-cost, and low-power sensors. Wireless sensor networks (WSNs) are large-scale networks of such sensors, dedicated to observing and monitoring various aspects of the physical world. In such networks, data from each sensor is agglomerated using data fusion to form a single meaningful result, which makes time synchronization between sensors highly desirable. This paper surveys and evaluates existing clock synchronization protocols based on a palette of factors like precision, accuracy, cost, and complexity. The design considerations presented here can help developers either in choosing an existing synchronization protocol or in defining a new protocol that is best suited to the specific needs of a sensor-network application. Finally, the survey provides a valuable framework by which designers can compare new and existing synchronization protocols. <s> BIB002
DMTS relies on a master-slave, sender-receiver synchronization and clock-correction approach. This protocol was developed out of the need for a time synchronization method that avoids round-trip time estimation. DMTS synchronizes the sender and multiple receivers at the same time and requires fewer message transfers than RBS. One of the characteristics of sensor networks is their self-organization and dynamic behavior. The self-organization feature implies that the network topology may change from time to time. DMTS therefore focuses on scalability and flexibility, meaning that it should be either adaptive or insensitive to changes in network topology. In this protocol, a leader is chosen as time master and broadcasts its time. All receivers measure the time delay and set their time to the received master time plus the measured transfer delay. As a result, all sensors receiving the time synchronization message can be synchronized with the leader. The time synchronization precision is bounded mainly by how accurately the delays along the path are measured. The receiver measures the path delay as $t_e + (t_2 - t_1)$, where $t_e$ is the estimated time to transmit the preamble and start symbols, and $t_1$ and $t_2$ are receiver timestamps. Since a radio device has a fixed transmit rate (for instance, Mica radios transmit preamble and start symbols at a rate of 20 kbps), $t_e$ is a fixed delay and can be expressed as $t_e = n\tau$, where $n$ stands for the number of bits to transmit and $\tau$ denotes the time to transmit one bit over the radio. In the DMTS method, a time synchronization leader sends a time synchronization message with its timestamp $t$, which is added after the MAC delay, once a clear channel is detected. The receiver calculates the path delay and adjusts its local clock to $t_r$:

$$t_r = t + t_e + (t_2 - t_1).$$

The receiver is then synchronized with the leader. The lower bound on DMTS precision is the radio device synchronization precision, and the upper bound is the accuracy of the local clock. Since DMTS needs only a single time-signal transfer to synchronize all nodes within a single hop, it is energy efficient. It is also lightweight because no complex operations are involved. Multi-hop synchronization is also possible. If a node knows that it has children nodes, it broadcasts a time signal after it adjusts its own time; the node can then synchronize its children by using single-hop time communication with a known leader. To handle the situation in which network nodes have no knowledge of their children, the concept of a time-source level is used to identify the network distance of a node from the master, which is selected by means of a leader selection algorithm. A time master assumes time-source level 0, and a node synchronized with a level-$n$ node assumes time-source level $n+1$. The root node broadcasts its time periodically, and the synchronized nodes do the same. On receiving a time signal, a node checks the time-source level: if the signal comes from a source of lower level than its own, it accepts the time; otherwise, it discards the signal. In this way, DMTS guarantees that the master time will be propagated to all network nodes with a number of broadcasts equal to the number of nodes. In addition, the algorithm guarantees the shortest path to the time master, or the least number of hops, because a node always selects the node nearest to the time leader as its parent.
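The receiver-side arithmetic of DMTS is simple enough to capture in a few lines. The sketch below is a hypothetical illustration of the adjustment $t_r = t + t_e + (t_2 - t_1)$; the 20 kbps bit rate matches the Mica example above, while the preamble length and the timestamps are made-up values.

```python
# Minimal sketch of the receiver-side DMTS clock adjustment.
# All concrete numbers are illustrative assumptions, not values from [42].

BIT_TIME = 1.0 / 20_000          # Mica radios: 20 kbps -> 50 us per bit (tau)
PREAMBLE_START_BITS = 40         # assumed length of preamble + start symbols

def dmts_adjust(master_time, t1, t2, n_bits=PREAMBLE_START_BITS):
    """Return the receiver clock value t_r = t + t_e + (t2 - t1).

    master_time : timestamp t carried in the leader's message
    t1          : receiver timestamp taken when reception starts
    t2          : receiver timestamp taken just before adjusting the clock
    """
    t_e = n_bits * BIT_TIME      # fixed transmit delay t_e = n * tau
    return master_time + t_e + (t2 - t1)

# Example: leader stamps t = 10 s; receiver-side processing took 300 us.
print(dmts_adjust(10.0, t1=0.004000, t2=0.004300))
```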
DMTS exhibits the following advantages BIB002 BIB001 : a user application interface is provided to monitor the wireless sensor network at run time, the computational complexity is low, and the energy efficiency is quite high. On the other hand, the disadvantages of the DMTS protocol are as follows BIB002 BIB001 : DMTS can be applied only to low-resolution, low-frequency external clocks, and synchronization precision is traded off for the sake of low computational complexity and energy efficiency.
Clock Synchronization in Wireless Sensor Networks: An Overview <s> Flooding Time Synchronization Protocol (FTSP) [46] <s> Recent advances in miniaturization and low-cost, low-power design have led to active research in large-scale networks of small, wireless, low-power sensors and actuators. Time synchronization is critical in sensor networks for diverse purposes including sensor data fusion, coordinated actuation, and power-efficient duty cycling. Though the clock accuracy and precision requirements are often stricter than in traditional distributed systems, strict energy constraints limit the resources available to meet these goals.We present Reference-Broadcast Synchronization, a scheme in which nodes send reference beacons to their neighbors using physical-layer broadcasts. A reference broadcast does not contain an explicit timestamp; instead, receivers use its arrival time as a point of reference for comparing their clocks. In this paper, we use measurements from two wireless implementations to show that removing the sender's nondeterminism from the critical path in this way produces high-precision clock agreement (1.85± 1.28μsec, using off-the-shelf 802.11 wireless Ethernet), while using minimal energy. We also describe a novel algorithm that uses this same broadcast property to federate clocks across broadcast domains with a slow decay in precision (3.68± 2.57μsec after 4 hops). RBS can be used without external references, forming a precise relative timescale, or can maintain microsecond-level synchronization to an external timescale such as UTC. We show a significant improvement over the Network Time Protocol (NTP) under similar conditions. <s> BIB001 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Flooding Time Synchronization Protocol (FTSP) [46] <s> Wireless ad-hoc sensor networks have emerged as an interesting and important research area in the last few years. The applications envisioned for such networks require collaborative execution of a distributed task amongst a large set of sensor nodes. This is realized by exchanging messages that are time-stamped using the local clocks on the nodes. Therefore, time synchronization becomes an indispensable piece of infrastructure in such systems. For years, protocols such as NTP have kept the clocks of networked systems in perfect synchrony. However, this new class of networks has a large density of nodes and very limited energy resource at every node; this leads to scalability requirements while limiting the resources that can be used to achieve them. A new approach to time synchronization is needed for sensor networks.In this paper, we present Timing-sync Protocol for Sensor Networks (TPSN) that aims at providing network-wide time synchronization in a sensor network. The algorithm works in two steps. In the first step, a hierarchical structure is established in the network and then a pairwise synchronization is performed along the edges of this structure to establish a global timescale throughout the network. Eventually all nodes in the network synchronize their clocks to a reference node. We implement our algorithm on Berkeley motes and show that it can synchronize a pair of neighboring motes to an average accuracy of less than 20 μs. We argue that TPSN roughly gives a 2x better performance as compared to Reference Broadcast Synchronization (RBS) and verify this by implementing RBS on motes.
We also show the performance of TPSN over small multihop networks of motes and use simulations to verify its accuracy over large-scale networks. We show that the synchronization accuracy does not degrade significantly with the increase in number of nodes being deployed, making TPSN completely scalable. <s> BIB002 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Flooding Time Synchronization Protocol (FTSP) [46] <s> Wireless sensor network applications, similarly to other distributed systems, often require a scalable time synchronization service enabling data consistency and coordination. This paper describes the Flooding Time Synchronization Protocol (FTSP), especially tailored for applications requiring stringent precision on resource limited wireless platforms. The proposed time synchronization protocol uses low communication bandwidth and it is robust against node and link failures. The FTSP achieves its robustness by utilizing periodic flooding of synchronization messages, and implicit dynamic topology update. The unique high precision performance is reached by utilizing MAC-layer time-stamping and comprehensive error compensation including clock skew estimation. The sources of delays and uncertainties in message transmission are analyzed in detail and techniques are presented to mitigate their effects. The FTSP was implemented on the Berkeley Mica2 platform and evaluated in a 60-node, multi-hop setup. The average per-hop synchronization error was in the one microsecond range, which is markedly better than that of the existing RBS and TPSN algorithms. <s> BIB003
The aim of FTSP is to achieve network-wide synchronization of the local clocks of the participating nodes by using multi-hop synchronization. It is assumed that each node has a local clock exhibiting the typical timing errors of crystals and can communicate over an unreliable but error-corrected wireless channel with its neighbor nodes. FTSP synchronizes the time of a sender to possibly multiple receivers by making use of a single radio message time-stamped at both the sender and the receiver sides. MAC-layer time-stamping can eliminate many of the errors, as shown in TPSN BIB002 . However, accurate clock synchronization at discrete points in time is only a partial solution; compensation for the clock drift of the nodes is necessary to obtain high precision between synchronization points and to keep the communication overhead low. Linear regression is used in this protocol to compensate for clock drift, as already suggested in RBS BIB001 . As mentioned above, FTSP provides multi-hop synchronization. The root of the network, a single dynamically elected node, keeps the global time, and all other nodes synchronize their clocks to that of the root. The nodes form an ad-hoc structure to transfer the global time from the root to all other nodes, as opposed to the fixed spanning-tree-based approach proposed in BIB002 . This saves the initial phase of establishing the tree, and it is more robust against node and link failures and changes in network topology BIB003 .
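Since FTSP-style drift compensation is ordinary least-squares regression of global time on local time, it can be sketched directly. The stand-alone example below uses synthetic timestamps; the 40 ppm skew, 5 s offset, and noise level are assumptions for illustration, not values from the FTSP paper.

```python
import numpy as np

# Sketch of linear-regression drift compensation: fit global time as a
# linear function of local time from a window of (local, global) reference
# points collected from time-stamped synchronization messages.

local = np.array([0.0, 10.0, 20.0, 30.0, 40.0])           # local timestamps (s)
global_ = local * (1 + 40e-6) + 5.0                        # 40 ppm skew, 5 s offset
global_ += np.random.default_rng(1).normal(0, 1e-5, 5)     # time-stamping noise

# Least-squares fit: global ~ skew * local + offset.
skew, offset = np.polyfit(local, global_, 1)

def to_global(t_local):
    """Convert a local clock reading to the estimated global time."""
    return skew * t_local + offset

print(f"estimated skew: {(skew - 1) * 1e6:.1f} ppm, offset: {offset:.6f} s")
print("global time at local t = 25 s:", to_global(25.0))
```

Between synchronization points, a node simply evaluates `to_global` on its local clock, which is what keeps the precision high without extra messages.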
Clock Synchronization in Wireless Sensor Networks: An Overview <s> Probabilistic Clock Synchronization [44] <s> Presents and analyzes a new probabilistic clock synchronization algorithm that can guarantee a much smaller bound on the clock skew than most existing algorithms. The algorithm is probabilistic in the sense that the bound on the clock skew that it guarantees has a probability of invalidity associated with it. However, the probability of invalidity may be made extremely small by transmitting a sufficient number of synchronization messages. It is shown that an upper bound on the probability of invalidity decreases exponentially with the number of synchronization messages transmitted. A closed-form expression that relates the probability of invalidity to the clock skew and the number of synchronization messages is also derived. > <s> BIB001 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Probabilistic Clock Synchronization [44] <s> Recent advances in miniaturization and low-cost, low-power design have led to active research in large-scale networks of small, wireless, low-power sensors and actuators. Time synchronization is critical in sensor networks for diverse purposes including sensor data fusion, coordinated actuation, and power-efficient duty cycling. Though the clock accuracy and precision requirements are often stricter than in traditional distributed systems, strict energy constraints limit the resources available to meet these goals.We present Reference-Broadcast Synchronization, a scheme in which nodes send reference beacons to their neighbors using physical-layer broadcasts. A reference broadcast does not contain an explicit timestamp; instead, receivers use its arrival time as a point of reference for comparing their clocks. In this paper, we use measurements from two wireless implementations to show that removing the sender's nondeterminism from the critical path in this way produces high-precision clock agreement (1.85± 1.28μsec, using off-the-shelf 802.11 wireless Ethernet), while using minimal energy. We also describe a novel algorithm that uses this same broadcast property to federate clocks across broadcast domains with a slow decay in precision (3.68± 2.57μsec after 4 hops). RBS can be used without external references, forming a precise relative timescale, or can maintain microsecond-level synchronization to an external timescale such as UTC. We show a significant improvement over the Network Time Protocol (NTP) under similar conditions. <s> BIB002 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Probabilistic Clock Synchronization [44] <s> We study the performance of a class of time-offset estimation algorithms for synchronization of master-slave nodes based on asynchronous transfer of timing cells when GPS is not used. We implement a synchronization control mechanism based on cell acknowledgment time-out (TO) with wait or no wait options. We analyze the mechanism reliability and performance parameters over symmetric links using an exponential cell delay variation model. We show that the maximum-likelihood offset estimator does not exist for the exponential likelihood function. We analytically provide RMS error result comparisons for five ad hoc offset estimation algorithms: the median round delay, the minimum round delay, the minimum link delay (MnLD), the median phase, and the average phase. 
We show that the MnLD algorithm achieves the best accuracy over symmetric links without having to impose a strict TO control, which substantially speeds up the algorithm. We also discuss an open-loop estimation updating mechanism based on standard clock models. <s> BIB003 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Probabilistic Clock Synchronization [44] <s> Abstract Recent advances in micro-electromechanical (MEMS) technology have led to the development of small, low-cost, and low-power sensors. Wireless sensor networks (WSNs) are large-scale networks of such sensors, dedicated to observing and monitoring various aspects of the physical world. In such networks, data from each sensor is agglomerated using data fusion to form a single meaningful result, which makes time synchronization between sensors highly desirable. This paper surveys and evaluates existing clock synchronization protocols based on a palette of factors like precision, accuracy, cost, and complexity. The design considerations presented here can help developers either in choosing an existing synchronization protocol or in defining a new protocol that is best suited to the specific needs of a sensor-network application. Finally, the survey provides a valuable framework by which designers can compare new and existing synchronization protocols. <s> BIB004
This protocol is an extension of the deterministic RBS protocol that provides probabilistic clock synchronization. Arvind BIB001 defined a probabilistic clock synchronization protocol for wired networks, whereas most synchronization protocols are based exclusively on deterministic algorithms. Deterministic methods have the advantage that they usually guarantee an upper bound on the clock offset estimation error. However, when system resources are severely constrained, such a guarantee on synchronization accuracy may require a large number of messages to be exchanged during synchronization. In these cases, probabilistic algorithms can provide reasonable synchronization precision with lower computational and network overhead than deterministic protocols. Elson et al. BIB002 found the distribution of the synchronization error among a set of receivers: multiple messages are sent from the sender to the receivers, and the differences in the actual reception times at the receivers are plotted. As each of these pulses is independently distributed, the difference in reception times follows a Gaussian distribution with zero mean. Given a Gaussian probability distribution for the synchronization error, it is possible to relate a given maximum synchronization error to the probability of actually synchronizing with an error less than that maximum. If $e_{\max}$ is the maximum error allowed between two synchronizing nodes, then the probability of synchronizing with an error $|e| \le e_{\max}$ is given by:

$$P(|e| \le e_{\max}) = \int_{-e_{\max}}^{e_{\max}} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-x^2/2\sigma^2}\, dx. \quad (34)$$

Therefore, as the limit $e_{\max}$ increases, the probability of failure $1 - P(|e| \le e_{\max})$ decreases exponentially. Based on equation (34), PalChaudhuri et al. derived expressions converting the maximum clock synchronization error (the service specification) into the number of messages and the synchronization overhead (the actual protocol parameters). Since averaging $n$ independent synchronization messages reduces the standard deviation of the error estimate to $\sigma/\sqrt{n}$, the probability of the achieved error being less than the maximum specified error is given by:

$$P(|\bar{e}| \le e_{\max}) = \int_{-e_{\max}}^{e_{\max}} \frac{\sqrt{n}}{\sigma\sqrt{2\pi}}\, e^{-n x^2/2\sigma^2}\, dx, \quad (35)$$

where $n$ stands for the minimum number of synchronization messages needed to guarantee the specified error and $\sigma$ denotes the standard deviation of the distribution. The relationship between the synchronization period and the maximum specified clock skew is also described: given a maximum value for the clock skew, a time period is derived within which resynchronization must be done. The quantities involved are $\tau_{\max}$, the maximum allowable synchronization period at any point in time; $T_{sync}$, the time period between synchronization points for the Always On model (the period of validity for the Sensor Initiated model); $\rho$, the maximum drift of the clock rate; and $\Delta_{\max}$, the maximum delay (after the synchronization procedure was started) for the time values of one receiver to reach another receiver. This algorithm can be extended to provide a probabilistic clock synchronization service between receivers that are multiple hops away from a sender. This extension is in contrast to the multi-hop extension used in RBS BIB002 , which assumes that all sensor nodes are within a single hop of at least one sender; moreover, the RBS algorithm requires the existence of a node that is within the broadcast region of both senders. The present algorithm makes no such assumptions: sensor nodes may be multiple hops away from a sender and still be synchronized with all other nodes within their transmission range.
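The Gaussian error model behind (34) and (35) translates directly into code. The sketch below evaluates the synchronization confidence for a given error budget and brute-force searches for the minimum number of messages meeting a target probability; the error standard deviation and budget are illustrative assumptions, not values from [44].

```python
from math import erf, sqrt

# With zero-mean Gaussian synchronization error of std sigma, averaging n
# independent messages shrinks the std to sigma/sqrt(n), so
# P(|error| <= e_max) = erf(e_max * sqrt(n) / (sigma * sqrt(2))).

def p_within(e_max, sigma, n=1):
    """Probability that the (averaged) sync error stays within e_max."""
    return erf(e_max * sqrt(n) / (sigma * sqrt(2.0)))

def min_messages(e_max, sigma, target_prob):
    """Smallest n such that P(|error| <= e_max) >= target_prob."""
    n = 1
    while p_within(e_max, sigma, n) < target_prob:
        n += 1
    return n

sigma, e_max = 11.1e-6, 10e-6      # assumed: 11.1 us error std, 10 us budget
print(p_within(e_max, sigma))               # single-message confidence
print(min_messages(e_max, sigma, 0.99))     # messages needed for 99%
```

This captures the tradeoff the protocol exposes: tightening either the error budget or the target probability raises the required message count, and hence the synchronization overhead.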
The advantages of a probabilistic clock synchronization service in sensor networks are as follows BIB004 : a probabilistic guarantee reduces both the number of messages exchanged among nodes and the computational load on each node; there is a controllable tradeoff between synchronization accuracy and resource cost; and the protocol supports multi-hop networks spanning several domains. However, this method also has disadvantages BIB004 : for safety-critical applications (for example, nuclear plant monitoring), a probabilistic guarantee on accuracy may not be appropriate, and the protocol is sensitive to message losses yet makes no provision for them.
Clock Synchronization in Wireless Sensor Networks: An Overview <s> Time Diffusion Synchronization Protocol [43] <s> In the near future, small intelligent devices will be deployed in homes, plantations, oceans, rivers, streets, and highways to monitor the environment. These devices require time synchronization, so voice and video data from different sensor nodes can be fused and displayed in a meaningful way at the sink. Instead of time synchronization between just the sender and receiver or within a local group of sensor nodes, some applications require the sensor nodes to maintain a similar time within a certain tolerance throughout the lifetime of the network. The Time-Diffusion Synchronization Protocol (TDP) is proposed as a network-wide time synchronization protocol. It allows the sensor network to reach an equilibrium time and maintains a small time deviation tolerance from the equilibrium time. In addition, it is analytically shown that the TDP enables time in the network to converge. Also, simulations are performed to validate the effectiveness of TDP in synchronizing the time throughout the network and balancing the energy consumed by the sensor nodes. <s> BIB001 </s> Clock Synchronization in Wireless Sensor Networks: An Overview <s> Time Diffusion Synchronization Protocol [43] <s> Abstract Recent advances in micro-electromechanical (MEMS) technology have led to the development of small, low-cost, and low-power sensors. Wireless sensor networks (WSNs) are large-scale networks of such sensors, dedicated to observing and monitoring various aspects of the physical world. In such networks, data from each sensor is agglomerated using data fusion to form a single meaningful result, which makes time synchronization between sensors highly desirable. This paper surveys and evaluates existing clock synchronization protocols based on a palette of factors like precision, accuracy, cost, and complexity. The design considerations presented here can help developers either in choosing an existing synchronization protocol or in defining a new protocol that is best suited to the specific needs of a sensor-network application. Finally, the survey provides a valuable framework by which designers can compare new and existing synchronization protocols. <s> BIB002
TDP is a network-wide time synchronization protocol proposed by Su et al. BIB001 . Specifically, this protocol enables all the sensors in the network to maintain a local time that is within a small bounded deviation from the network-wide equilibrium time. The TDP architecture comprises several algorithms and procedures, which are used to autonomously synchronize the nodes, remove the false tickers (clocks deviating from those of their neighbors), and balance the load required for time synchronization among the sensor nodes. In the beginning, the sensor nodes may receive an Initialize pulse from the sink, either through direct broadcast or multi-hop flooding. They then decide for themselves whether to become master nodes through the election/re-election of master/diffused leader node procedure (ERP), which is composed of the false ticker isolation algorithm (FIA) and the load distribution algorithm (LDA). At the end of the ERP procedure, the elected master nodes start the peer evaluation procedure (PEP) while the other nodes remain idle. PEP prevents false tickers from becoming master nodes or diffused leader nodes. After PEP, the elected master nodes start the time diffusion procedure (TP), through which they periodically diffuse timing information messages for a fixed duration. Each neighbor node receiving these timing information messages decides for itself whether to become a diffused leader node, again using the ERP procedure. Moreover, all neighbor nodes adjust their local clocks using the time adjustment algorithm (TAA) and the clock discipline algorithm (CDA) after a fixed waiting period. The elected diffused leader nodes diffuse the timing information messages to the neighboring nodes located within their broadcast range. This diffusion procedure allows all nodes to be autonomously synchronized; a toy illustration of the diffusion idea is sketched below. Additionally, the master nodes are periodically re-elected using the ERP procedure. The advantages of TDP are as follows BIB002 : the protocol is tolerant to message losses; a network-wide equilibrium time is achieved across all nodes, and all nodes are involved in the synchronization process; the diffusion does not rely on static level-by-level transmissions and thus exhibits flexibility and fault tolerance; and the protocol is geared towards mobility. On the other hand, the disadvantages are as follows: the convergence time tends to be high when no precise external time servers are used, and clocks may run backward, which can happen whenever a clock value is suddenly adjusted to a lower value.
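As rough intuition for how diffusion drives the network toward an equilibrium time, the toy simulation below performs repeated neighbor averaging on a ring topology. It is only an analogy for the diffusion step, not an implementation of TDP's actual ERP/PEP/TAA/CDA procedures, and the topology and all constants are made-up values.

```python
import numpy as np

# Toy diffusion: each node repeatedly nudges its clock toward the average
# of its neighbors' clocks, so all clocks converge to a common equilibrium.

n = 30
rng = np.random.default_rng(3)
clocks = rng.uniform(0.0, 1e-3, n)         # initial clock offsets (s)

A = np.zeros((n, n))                        # ring topology -> connected graph
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

for _ in range(2000):                       # diffusion iterations
    neighbor_avg = (A @ clocks) / A.sum(axis=1)
    clocks += 0.5 * (neighbor_avg - clocks) # move halfway toward neighbors

print(f"spread of clock values after diffusion: {np.ptp(clocks):.2e} s")
```

The spread shrinks steadily toward zero, mirroring TDP's bounded deviation from the equilibrium time; a sparser or larger network converges more slowly, which matches the observation above that convergence time can be high.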
Low-Rank Matrix Completion: A Contemporary Survey <s> I. INTRODUCTION <s> Multidimensional scaling can be considered as involving three basic steps. In the first step, a scale of comparative distances between all pairs of stimuli is obtained. This scale is analogous to the scale of stimuli obtained in the traditional paired comparisons methods. In this scale, however, instead of locating each stimulus-object on a given continuum, the distances between each pair of stimuli are located on a distance continuum. As in paired comparisons, the procedures for obtaining a scale of comparative distances leave the true zero point undetermined. Hence, a comparative distance is not a distance in the usual sense of the term, but is a distance minus an unknown constant. The second step involves estimating this unknown constant. When the unknown constant is obtained, the comparative distances can be converted into absolute distances. In the third step, the dimensionality of the psychological space necessary to account for these absolute distances is determined, and the projections of stimuli on axes of this space are obtained. A set of analytical procedures was developed for each of the three steps given above, including a least-squares solution for obtaining comparative distances by the complete method of triads, two practical methods for estimating the additive constant, and an extension of Young and Householder's Euclidean model to include procedures for obtaining the projections of stimuli on axes from fallible absolute distances. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> I. INTRODUCTION <s> Recent advances in radio and embedded systems have enabled the proliferation of wireless sensor networks. Wireless sensor networks are tremendously being used in different environments to perform various monitoring tasks such as search, rescue, disaster relief, target tracking and a number of tasks in smart environments. In many such tasks, node localization is inherently one of the system parameters. Node localization is required to report the origin of events, assist group querying of sensors, routing and to answer questions on the network coverage. So, one of the fundamental challenges in wireless sensor network is node localization. This paper reviews different approaches of node localization discovery in wireless sensor networks. The overview of the schemes proposed by different scholars for the improvement of localization in wireless sensor networks is also presented. Future research directions and challenges for improving node localization in wireless sensor networks are also discussed. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> I. INTRODUCTION <s> Our deteriorating civil infrastructure faces the critical challenge of long-term structural health monitoring for damage detection and localization. In contrast to existing research that often separates the designs of wireless sensor networks and structural engineering algorithms, this paper proposes a cyber-physical co-design approach to structural health monitoring based on wireless sensor networks. Our approach closely integrates (1) flexibility-based damage localization methods that allow a tradeoff between the number of sensors and the resolution of damage localization, and (2) an energy-efficient, multi-level computing architecture specifically designed to leverage the multi-resolution feature of the flexibility-based approach. The proposed approach has been implemented on the Intel Imote2 platform. 
Experiments on a physical beam and simulations of a truss structure demonstrate the system's efficacy in damage localization and energy efficiency. <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> I. INTRODUCTION <s> This paper develops a novel framework for phase retrieval, a problem which arises in X-ray crystallography, diffraction imaging, astronomical imaging, and many other applications. Our approach, called PhaseLift, combines multiple structured illuminations together with ideas from convex programming to recover the phase from intensity measurements, typically from the modulus of the diffracted wave. We demonstrate empirically that a complex-valued object can be recovered from the knowledge of the magnitude of just a few diffracted patterns by solving a simple convex optimization problem inspired by the recent literature on matrix completion. More importantly, we also demonstrate that our noise-aware algorithms are stable in the sense that the reconstruction degrades gracefully as the signal-to-noise ratio decreases. Finally, we introduce some theory showing that one can design very simple structured illumination patterns such that three diffracted figures uniquely determine the phase of the object we wish to... <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> I. INTRODUCTION <s> We discuss the analysis and design of an Environmental Monitoring Application.The application is reliable and maintenance-free, runs in multihop wireless network.We analyze the different alternatives and tradeoffs, using open source software.The application is validated in long-term outdoor deployments with good results.Related work does not analyze the software design with open source. We discuss the entire process for the analysis and design of an Environmental Monitoring Application for Wireless Sensor Networks, using existing open source components to create the application. We provide a thorough study of the different alternatives, from the selection of the embedded operating system to the different algorithms and strategies. The application has been designed to gather temperature and relative humidity data following the rules of quality assurance for environmental measurements, suitable for use in both research and industry. The main features of the application are: (a) runs in a multihop low-cost network based on IEEE 802.15.4, (b) improved network reliability and lifetimes, (c) easy management and maintenance-free, (d) ported to different platforms and (e) allows different configurations and network topologies. The application has been tested and validated in several long-term outdoor deployments with very good results and the conclusions are aligned with the experimental evidence. <s> BIB005 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> I. INTRODUCTION <s> Location awareness, providing ability to identify the location of sensor, machine, vehicle, and wearable device, is a rapidly growing trend of hyper-connected society and one of key ingredients for internet of things (IoT). In order to make a proper reaction to the collected information from devices, location information of things should be available at the data center. One challenge for the massive IoT networks is to identify the location map of whole sensor nodes from partially observed distance information. This is especially important for massive sensor networks, relay-based and hierarchical networks, and vehicular to everything (V2X) networks. 
The primary goal of this paper is to propose an algorithm to reconstruct the Euclidean distance matrix (and eventually the location map) from partially observed distance information. By casting the low-rank matrix completion problem into the unconstrained minimization problem in Riemannian manifold in which a notion of differentiability can be defined, we are able to solve the low-rank matrix completion problem efficiently using a modified conjugate gradient algorithm. From the analysis and numerical experiments, we show that the proposed method, termed localization in Riemannian manifold using conjugate gradient (LRM-CG), is effective in recovering the Euclidean distance matrix for both noiseless and noisy environments. <s> BIB006 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> I. INTRODUCTION <s> Location awareness, providing the ability to identify the location of sensor, machine, vehicle, and wearable device, is a rapidly growing trend of hyper-connected society and one of the key ingredients for the Internet of Things (IoT) era. In order to make a proper reaction to the collected information from things , location information of things should be available at the data center. One challenge for the IoT networks is to identify the location map of whole nodes from partially observed distance information. The aim of this paper is to present an algorithm to recover the Euclidean distance matrix (and eventually the location map) from partially observed distance information. By casting the low-rank matrix completion problem into the unconstrained minimization problem in a Riemannian manifold in which a notion of differentiability can be defined, we solve the low-rank matrix completion problem using a modified conjugate gradient algorithm. From the convergence analysis, we show that localization in Riemannian manifold using conjugate gradient (LRM-CG) converges linearly to the original Euclidean distance matrix under the extended Wolfe’s conditions. From the numerical experiments, we demonstrate that the proposed method, called LRM-CG, is effective in recovering the Euclidean distance matrix. <s> BIB007
In the era of big data, the low-rank matrix has become a useful and popular tool to express two-dimensional information. One well-known example is the rating matrix in recommendation systems, which represents users' tastes on products [1] . Since users expressing similar ratings on multiple products tend to have similar interest in a new product, columns associated with users sharing the same interest are highly likely to be the same, resulting in the low-rank structure of the rating matrix (see Fig. 1 ). Another example is the Euclidean distance matrix formed by the pairwise distances of a large number of sensor nodes. Since the rank of a Euclidean distance matrix in the k-dimensional Euclidean space is at most k + 2 (if k = 2, then the rank is at most 4), this matrix can be readily modeled as a low-rank matrix BIB002 - BIB007 . The key property of a low-rank matrix is that its essential information, expressed in terms of degrees of freedom, is much smaller than the total number of entries. Therefore, even though the number of observed entries is small, we still have a good chance to recover the whole matrix. There are a variety of scenarios where the number of observed entries of a matrix is tiny. In recommendation systems, for example, users are encouraged to submit feedback in the form of a rating number, e.g., 1 to 5 for a purchased product. However, users often do not want to leave feedback, and thus the rating matrix has many missing entries. Also, in the internet of things (IoT) network, sensor nodes have a limited radio communication range or may suffer power outages, so that only a small portion of the entries of the Euclidean distance matrix is available. When there is no restriction on the rank of a matrix, the problem of recovering the unknown entries of a matrix from partially observed entries is ill-posed, because any value can be assigned to an unknown entry, which in turn means that there is an infinite number of matrices agreeing with the observed entries. As a simple example, consider the following 2 × 2 matrix with one unknown entry marked ?:

$$M = \begin{bmatrix} 1 & 2 \\ 5 & ? \end{bmatrix}.$$

If M is full rank, i.e., the rank of M is two, then any value except 10 can be assigned to ?. Whereas, if M is a low-rank matrix (the rank is one in this trivial example), the two columns differ only by a constant factor, and hence the unknown element ? can easily be determined from the linear relationship between the two columns (? = 10); a numerical version of this example is sketched after the application list below. This example is obviously simple, but the fundamental principle used to recover a large-dimensional matrix is not much different, and the low-rank constraint plays a pivotal role in recovering the unknown entries of the matrix. Before we proceed, we discuss a few notable applications where the underlying matrix is modeled as a low-rank matrix.

1) Recommendation system: In 2006, the online DVD rental company Netflix announced a contest to improve the quality of the company's movie recommendation system. The company released a training set of half a million customers. The training set contains ratings on more than ten thousand movies, each movie being rated on a scale from 1 to 5 [1] . The training data can be represented in a large-dimensional matrix in which each column represents the ratings of a customer for the movies. The primary goal of the recommendation system is to estimate the users' interests on products using the sparsely sampled rating matrix BIB002 .
Often, users sharing the same interests in key factors such as the type, the price, and the appearance of a product tend to provide the same ratings on the movies. The ratings of those users may form a low-rank column space, resulting in the low-rank model of the rating matrix (see Fig. 1 ).

FIGURE 2. Localization via LRMC BIB007 . The Euclidean distance matrix can be recovered with 92% of distance errors below 0.5 m using 30% of the observed distances.

2) Phase retrieval: The problem of recovering a signal, not necessarily sparse, from magnitude observations is referred to as phase retrieval. Phase retrieval is an important problem in X-ray crystallography and quantum mechanics, since only the magnitude of the Fourier transform is measured in these applications BIB004 . Suppose the unknown time-domain signal $\mathbf{m} = [m_0 \cdots m_{n-1}]^T$ is acquired in the form of the measured magnitude of its Fourier transform:

$$z_\omega = \sum_{t=0}^{n-1} m_t e^{-j2\pi\omega t/n}, \quad \omega \in \Omega,$$

where $\Omega$ is the set of sampled frequencies. Further, let $\mathbf{M} = \mathbf{m}\mathbf{m}^H$, where $\mathbf{m}^H$ is the conjugate transpose of $\mathbf{m}$. Then the quadratic magnitude $|z_\omega|^2$ can be rewritten as a linear measurement of $\mathbf{M}$:

$$|z_\omega|^2 = \langle \mathbf{M}, \mathbf{F}_\omega \rangle,$$

where $\mathbf{F}_\omega = \mathbf{f}_\omega \mathbf{f}_\omega^H$ is the rank-1 matrix of the waveform $\mathbf{f}_\omega$. In essence, the phase retrieval problem can be converted into the problem of reconstructing the rank-1 matrix $\mathbf{M}$ in the positive semi-definite (PSD) cone BIB006 BIB004 :

$$\min_{\mathbf{X}} \ \mathrm{rank}(\mathbf{X}) \quad \text{subject to} \quad \langle \mathbf{X}, \mathbf{F}_\omega \rangle = |z_\omega|^2, \ \omega \in \Omega, \quad \mathbf{X} \succeq 0.$$

3) Localization in IoT networks: In recent years, the internet of things (IoT) has received much attention for its plethora of applications, such as healthcare, automatic metering, environmental monitoring (temperature, pressure, moisture), and surveillance BIB002 , BIB005 , BIB003 . Since actions in IoT networks, such as fire alarms, command broadcasting, or emergency requests, are initiated primarily by the data center, the data center should know the location information of all devices in the network. Also, in wireless energy harvesting systems, accurate location information is crucial to improving the efficiency of wireless power transfer. In this scheme, called network localization (a.k.a. cooperative localization), each sensor node measures the distance information of adjacent nodes and then sends it to the data center. The data center then constructs a map of the sensor nodes using the collected distance information BIB001 . For various reasons, such as the power outage of a sensor node or the limitation of the radio communication range (see Fig. 2 ), only a small number of distances is available at the data center. Also, in vehicular networks, it is not easy to measure the distances to all adjacent vehicles when a vehicle is located in a dead zone. An example of an observed Euclidean distance matrix is

$$\mathbf{M}_o = \begin{bmatrix} 0 & d_{12}^2 & ? & d_{14}^2 \\ d_{21}^2 & 0 & d_{23}^2 & ? \\ ? & d_{32}^2 & 0 & d_{34}^2 \\ d_{41}^2 & ? & d_{43}^2 & 0 \end{bmatrix},$$

where $d_{ij}$ is the pairwise distance between sensor nodes $i$ and $j$ and ? denotes an unobserved entry. Since the rank of the Euclidean distance matrix $\mathbf{M}$ is at most k + 2 in the k-dimensional Euclidean space (k = 2 or k = 3) BIB006 , BIB007 , the problem of reconstructing $\mathbf{M}$ can be well modeled as the LRMC problem.

4) Image compression and restoration: When there is dirt or a scribble in a two-dimensional image (see Fig. 3 ), one simple solution is to replace the contaminated pixels with an interpolated version of the adjacent pixels. A better way is to exploit the intrinsic dominance of a few singular values in an image; in fact, one can readily approximate an image by a low-rank matrix without perceptible loss of quality.
By using the clean (uncontaminated) pixels as observed entries, the original image can be recovered via low-rank matrix completion.
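The 2 × 2 example and the Euclidean-distance-matrix rank property above can both be checked numerically. The NumPy sketch below completes the rank-1 toy matrix and verifies that a squared-distance matrix of random planar points has rank k + 2 = 4; the point coordinates and sizes are arbitrary choices for illustration.

```python
import numpy as np

# Rank-1 completion of M = [[1, 2], [5, ?]]: the rank-1 constraint forces
# the second column to be a scalar multiple of the first, so ? = 5 * 2 = 10.
M = np.array([[1.0, 2.0],
              [5.0, np.nan]])
ratio = M[0, 1] / M[0, 0]             # constant relating the two columns
M[1, 1] = M[1, 0] * ratio
print(M)                              # [[ 1.  2.] [ 5. 10.]]
print(np.linalg.matrix_rank(M))       # 1

# Sanity check of the EDM rank property: for points in k dimensions, the
# matrix of squared pairwise distances has rank at most k + 2.
rng = np.random.default_rng(0)
pts = rng.standard_normal((20, 2))                        # 20 nodes, k = 2
D = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, -1)  # squared distances
print(np.linalg.matrix_rank(D))                           # 4 = k + 2
```

The second check is exactly the structure LRMC algorithms exploit in the localization application: a 20 × 20 matrix with 400 entries carries only rank-4 worth of information, so a fraction of the entries suffices to determine the rest.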
Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> Knowledge of accurate and timely channel state information (CSI) at the transmitter is becoming increasingly important in wireless communication systems. While it is often assumed that the receiver (whether base station or mobile) needs to know the channel for accurate power control, scheduling, and data demodulation, it is now known that the transmitter (especially the base station) can also benefit greatly from this information. For example, recent results in multiantenna multiuser systems show that large throughput gains are possible when the base station uses multiple antennas and a known channel to transmit distinct messages simultaneously and selectively to many single-antenna users. In time-division duplex systems, where the base station and mobiles share the same frequency band for transmission, the base station can exploit reciprocity to obtain the forward channel from pilots received over the reverse channel. Frequency-division duplex systems are more difficult because the base station transmits and receives on different frequencies and therefore cannot use the received pilot to infer anything about the multiantenna transmit channel. Nevertheless, we show that the time occupied in frequency-duplex CSI transfer is generally less than one might expect and falls as the number of antennas increases. Thus, although the total amount of channel information increases with the number of antennas at the base station, the burden of learning this information at the base station paradoxically decreases. Thus, the advantages of having more antennas at the base station extend from having network gains to learning the channel information. We quantify our gains using linear analog modulation which avoids digitizing and coding the CSI and therefore can convey information very rapidly and can be readily analyzed. The old paradigm that it is not worth the effort to learn channel information at the transmitter should be revisited since the effort decreases and the gain increases with the number of antennas. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? ::: ::: We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys $$m\ge C\,n^{1.2}r\log n$$ for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information. 
<s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible; but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. ::: This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank r exactly by any method whatsoever (the information theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, that with minimum nuclear norm. As an example, we show that on the order of nr log(n) samples are needed to recover a random n x n matrix of rank r by any method, and to be sure, nuclear norm minimization succeeds as soon as the number of entries is of the form nr polylog(n). <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> A new algorithm, termed subspace evolution and transfer (SET), is proposed for solving the consistent matrix completion problem. In this setting, one is given a subset of the entries of a low-rank matrix, and asked to find one low-rank matrix consistent with the given observations. We show that this problem can be solved by searching for a column space that matches the observations. The corresponding algorithm consists of two parts — subspace evolution and subspace transfer. In the evolution part, we use a line search procedure to refine the column space. However, line search is not guaranteed to converge, as there may exist barriers along the search path that prevent the algorithm from reaching a global optimum. To address this problem, in the transfer part, we design mechanisms to detect barriers and transfer the estimated column space from one side of the barrier to the another. The SET algorithm exhibits excellent empirical performance for very low-rank matrices. <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> Motivated by the problem of learning a linear regression model whose parameter is a large fixed-rank non-symmetric matrix, we consider the optimization of a smooth cost function defined on the set of fixed-rank matrices. We adopt the geometric framework of optimization on Riemannian quotient manifolds. We study the underlying geometries of several well-known fixed-rank matrix factorizations and then exploit the Riemannian quotient geometry of the search space in the design of a class of gradient descent and trust-region algorithms. The proposed algorithms generalize our previous results on fixed-rank symmetric positive semidefinite matrices, apply to a broad range of applications, scale to high-dimensional problems and confer a geometric basis to recent contributions on the learning of fixed-rank non-symmetric matrices. 
We make connections with existing algorithms in the context of low-rank matrix completion and discuss relative usefulness of the proposed framework. Numerical experiments suggest that the proposed algorithms compete with the state-of-the-art and that manifold optimization offers an effective and versatile framework for the design of machine learning algorithms that learn a fixed-rank matrix. <s> BIB005 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> Presents a list of articles published by the IEEE Signal Processing Society (SPS) that ranked among the top 100 most downloaded IEEE Xplore articles. <s> BIB006 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices. <s> BIB007 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> This paper develops a novel framework for phase retrieval, a problem which arises in X-ray crystallography, diffraction imaging, astronomical imaging, and many other applications. Our approach, called PhaseLift, combines multiple structured illuminations together with ideas from convex programming to recover the phase from intensity measurements, typically from the modulus of the diffracted wave. We demonstrate empirically that a complex-valued object can be recovered from the knowledge of the magnitude of just a few diffracted patterns by solving a simple convex optimization problem inspired by the recent literature on matrix completion. More importantly, we also demonstrate that our noise-aware algorithms are stable in the sense that the reconstruction degrades gracefully as the signal-to-noise ratio decreases. Finally, we introduce some theory showing that one can design very simple structured illumination patterns such that three diffracted figures uniquely determine the phase of the object we wish to... <s> BIB008 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated in practice. 
In this paper, we propose to achieve a better approximation to the rank of matrix by truncated nuclear norm, which is given by the nuclear norm subtracted by the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the Truncated Nuclear Norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-AGPL applies the accelerated proximal gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results of the proposed algorithms in comparison to the state-of-the-art matrix completion algorithms on both synthetic and real visual datasets. <s> BIB009 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> Multiple-input multiple-output (MIMO) systems with a large number of base station antennas, often called massive MIMO, have received much attention in academia and industry as a means to improve the spectral efficiency, energy efficiency, and processing complexity of next generation cellular systems. The mobile communication industry has initiated a feasibility study of massive MIMO systems to meet the increasing demand of future wireless systems. Field trials of the proof-of-concept systems have demonstrated the potential gain of the Full-Dimension MIMO (FD-MIMO), an official name for the MIMO enhancement in the 3rd generation partnership project (3GPP). 3GPP initiated standardization activity for the seamless integration of this technology into current 4G LTE systems. In this article, we provide an overview of FD-MIMO systems, with emphasis on the discussion and debate conducted on the standardization process of Release 13. We present key features for FD-MIMO systems, a summary of the major issues for the standardization and practical system design, and performance evaluations for typical FD-MIMO scenarios. <s> BIB010 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> Channel state information at the transmitter (CSIT) is essential for frequency-division duplexing (FDD) massive MIMO systems, but conventional solutions involve overwhelming overhead both for downlink channel training and uplink channel feedback. In this letter, we propose a joint CSIT acquisition scheme to reduce the overhead. Particularly, unlike conventional schemes where each user individually estimates its own channel and then feed it back to the base station (BS), we propose that all scheduled users directly feed back the pilot observation to the BS, and then joint CSIT recovery can be realized at the BS. We further formulate the joint CSIT recovery problem as a low-rank matrix completion problem by utilizing the low-rank property of the massive MIMO channel matrix, which is caused by the correlation among users. Finally, we propose a hybrid low-rank matrix completion algorithm based on the singular value projection to solve this problem. Simulations demonstrate that the proposed scheme can provide accurate CSIT with lower overhead than conventional schemes. 
<s> BIB011 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> This paper proposes a new framework for the design of transmit and receive beamformers for interference alignment (IA) without symbol extensions in multi-antenna cellular networks. We consider IA in a $G$ cell network with $K$ users/cell, $N$ antennas at each base station (BS) and $M$ antennas at each user. The proposed framework is developed by recasting the conditions for IA as two sets of rank constraints, one on the rank of interference matrices, and the other on the transmit beamformers in the uplink. The interference matrix consists of all the interfering vectors received at a BS from the out-of-cell users in the uplink. Using these conditions and the crucial observation that the rank of interference matrices under alignment can be determined beforehand, this paper develops two sets of algorithms for IA. The first part of this paper develops rank minimization algorithms for IA by iteratively minimizing a weighted matrix norm of the interference matrix. Different choices of matrix norms lead to reweighted nuclear norm minimization (RNNM) or reweighted Frobenius norm minimization (RFNM) algorithms with significantly different per-iteration complexities. Alternately, the second part of this paper devises an alternating minimization (AM) algorithm where the rank-deficient interference matrices are expressed as a product of two lower-dimensional matrices that are then alternately optimized. Simulation results indicate that RNNM, which has a per-iteration complexity of a semidefinite program, is effective in designing aligned beamformers for proper-feasible systems with or without redundant antennas, while RFNM and AM, which have a per-iteration complexity of a quadratic program, are better suited for systems with redundant antennas. <s> BIB012 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> An F-RAN is presented in this article as a promising paradigm for the fifth generation wireless communication system to provide high spectral and energy efficiency. The core idea is to take full advantage of local radio signal processing, cooperative radio resource management, and distributed storing capabilities in edge devices, which can decrease the heavy burden on fronthaul and avoid large-scale radio signal processing in the centralized baseband unit pool. This article comprehensively presents the system architecture and key techniques of F-RANs. In particular, key techniques and their corresponding solutions, including transmission mode selection and interference suppression, are discussed. Open issues in terms of edge caching, software-defined networking, and network function virtualization are also identified. <s> BIB013 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> In this paper, we present a flexible low-rank matrix completion (LRMC) approach for topological interference management (TIM) in the partially connected $K$ -user interference channel. No channel state information (CSI) is required at the transmitters except the network topology information. The previous attempt on the TIM problem is mainly based on its equivalence to the index coding problem, but so far only a few index coding problems have been solved. 
In contrast, in this paper, we present an algorithmic approach to investigate the achievable degrees-of-freedom (DoFs) by recasting the TIM problem as an LRMC problem. Unfortunately, the resulting LRMC problem is known to be NP-hard, and the main contribution of this paper is to propose a Riemannian pursuit (RP) framework to detect the rank of the matrix to be recovered by iteratively increasing the rank. This algorithm solves a sequence of fixed-rank matrix completion problems. To address the convergence issues in the existing fixed-rank optimization methods, the quotient manifold geometry of the search space of fixed-rank matrices is exploited via Riemannian optimization. By further exploiting the structure of the low-rank matrix varieties, i.e., the closure of the set of fixed-rank matrices, we develop an efficient rank increasing strategy to find good initial points in the procedure of rank pursuit. Simulation results demonstrate that the proposed RP algorithm achieves a faster convergence rate and higher achievable DoFs for the TIM problem compared with the state-of-the-art methods. <s> BIB014 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> The upcoming big data era is likely to demand tremendous computation and storage resources for communications. By pushing computation and storage to network edges, fog radio access networks (Fog-RAN) can effectively increase network throughput and reduce transmission latency. Furthermore, we can exploit the benefits of cache enabled architecture in Fog-RAN to deliver contents with low latency. Radio access units (RAUs) need content delivery from fog servers through wireline links whereas multiple mobile devices acquire contents from RAUs wirelessly. This work proposes a unified low-rank matrix completion (LRMC) approach to solving the content delivery problem in both wireline and wireless parts of Fog-RAN. To attain a low caching latency, we present a high precision approach with Riemannian trust-region method to solve the challenging LRMC problem by exploiting the quotient manifold geometry of fixed-rank matrices. Numerical results show that the new approach has a faster convergence rate, is able to achieve optimal results, and outperforms other state-of-art algorithms. <s> BIB015 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> Low-rank matrices play a fundamental role in modeling and computational methods for signal processing and machine learning. In many applications where low-rank matrices arise, these matrices cannot be fully sampled or directly observed, and one encounters the problem of recovering the matrix given only incomplete and indirect observations. This paper provides an overview of modern techniques for exploiting low-rank structure to perform matrix recovery in these settings, providing a survey of recent advances in this rapidly-developing field. Specific attention is paid to the algorithms most commonly used in practice, the existing theoretical guarantees for these algorithms, and representative practical applications of these techniques. 
<s> BIB016 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 5) Massive multiple-input multiple-output (MIMO): <s> We consider the problem of channel estimation for millimeter wave (mmWave) systems, where, to minimize the hardware complexity and power consumption, an analog transmit beamforming and receive combining structure with only one radio frequency (RF) chain at the base station (BS) and mobile station (MS) is employed. Most existing works for mmWave channel estimation exploit sparse scattering characteristics of the channel. In addition to sparsity, mmWave channels may exhibit angular spreads over the angle of arrival (AoA), angle of departure (AoD), and elevation domains. In this paper, we show that angular spreads give rise to a useful low-rank structure that, along with the sparsity, can be simultaneously utilized to reduce the sample complexity, i.e. the number of samples needed to successfully recover the mmWave channel. Specifically, to effectively leverage the joint sparse and low-rank structure, we develop a two-stage compressed sensing method for mmWave channel estimation, where the sparse and low-rank properties are respectively utilized in two consecutive stages, namely, a matrix completion stage and a sparse recovery stage. Our theoretical analysis reveals that the proposed two-stage scheme can achieve a lower sample complexity than a direct compressed sensing method that exploits only the sparse structure of the mmWave channel. Simulation results are provided to corroborate our theoretical results and to show the superiority of the proposed two-stage method. <s> BIB017
|
By exploiting hundreds of antennas at the base station (BS), massive MIMO can offer a large gain in capacity. In order to maximize the performance gain of massive MIMO systems, the channel state information at the transmitter (CSIT) is required BIB010 . One way to acquire the CSIT is to let each user directly feed back its own pilot observation to the BS for the joint CSIT estimation of all users BIB011 . In this setup, the MIMO channel matrix H can be reconstructed in two steps: 1) estimating the matrix Y of pilot observations using the least squares (LS) or linear minimum mean square error (LMMSE) estimation and 2) reconstructing H from the model Y = HΦ, where each column of the pilot matrix Φ is the pilot signal from one antenna at the BS BIB001 , BIB006 . Since the number of resolvable paths P is limited in most cases, one can readily assume that rank(H) ≤ P BIB011 . In massive MIMO systems, P is often much smaller than the dimension of H due to the limited number of clusters around the BS. Thus, H can be recovered at the BS by solving the rank minimization problem subject to the linear constraint Y = HΦ BIB006 . Beyond CSIT acquisition, there is a wide variety of applications of LRMC in wireless communication, such as millimeter wave (mmWave) channel estimation BIB007 , BIB017 , topological interference management (TIM) BIB014 - BIB012 , and mobile edge caching in fog radio access networks (Fog-RAN) BIB013 , BIB015 . The paradigm of LRMC has received much attention ever since the works of Fazel , Candes and Recht BIB002 , and Candes and Tao BIB003 . Over the years, there have been many works on this topic BIB008 , BIB005 , BIB004 , BIB009 , but it might not be easy to grasp the essentials of LRMC from these studies. One reason is that many of these works are highly theoretical, relying on random matrix theory, graph theory, manifold analysis, and convex optimization. Another reason is that most of these works each propose a new LRMC technique, so it is difficult to extract the general idea and big picture of LRMC from them. The primary goal of this paper is to provide a contemporary survey on LRMC, a paradigm to recover the unknown entries of a low-rank matrix from partial observations. To give researchers and practitioners a better view, insight, and understanding of the potentials and limitations of LRMC, we present the early scattered results in a structured and accessible way. Firstly, we classify the state-of-the-art LRMC techniques into two main categories and then explain each category in detail. Secondly, we present issues to be considered when using LRMC techniques. Specifically, we discuss the intrinsic properties required for low-rank matrix recovery and explain how to exploit a special structure, such as the positive semidefinite-based structure, Euclidean distance-based structure, and graph structure, in LRMC design. Thirdly, we compare the recovery performance and the computational complexity of LRMC techniques via numerical simulations. We conclude the paper by commenting on the choice of LRMC techniques and providing future research directions. Recently, there have been a few overview papers on LRMC. An overview of LRMC algorithms and their performance guarantees can be found in BIB016 . A survey with an emphasis on first-order LRMC techniques together with their computational efficiency is presented in . Our work is clearly distinct from these previous studies in several aspects.
Firstly, we categorize the state-of-the-art LRMC techniques into two classes and then explain the details of each class, which helps researchers easily identify the technique suited to a given problem setup. Secondly, we provide a comprehensive survey of LRMC techniques and present extensive simulation results on the recovery quality and the running time complexity, from which one can easily see the pros and cons of each LRMC technique and gain a better insight into the choice of LRMC algorithms. Finally, we discuss how to exploit a special structure of a low-rank matrix in the LRMC algorithm design. In particular, we introduce the CNN-based LRMC algorithm that exploits the graph structure of a low-rank matrix. We briefly summarize the notation used in this paper.
• For a vector a ∈ R^n, diag(a) ∈ R^{n×n} is the diagonal matrix formed by a.
• For a matrix A ∈ R^{n_1×n_2}, a_i ∈ R^{n_1} is the i-th column of A.
• rank(A) is the rank of A.
• ⟨A, B⟩ = tr(A^T B) and A ⊙ B are the inner product and the Hadamard product (or element-wise multiplication) of two matrices A and B, respectively, where tr(·) denotes the trace operator.
• ‖A‖, ‖A‖_*, and ‖A‖_F stand for the spectral norm (i.e., the largest singular value), the nuclear norm (i.e., the sum of singular values), and the Frobenius norm of A, respectively.
• 0_{n_1×n_2} and 1_{n_1×n_2} are the (n_1 × n_2)-dimensional matrices whose entries are all zero and all one, respectively.
• If A is a square matrix (i.e., n_1 = n_2 = n), diag(A) ∈ R^n is the vector formed by the diagonal entries of A.
• vec(X) is the vectorization of X.
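To make the notation concrete, the following small numpy snippet (purely illustrative; the matrices A and B and all variable names are our own examples, not part of the survey) computes each quantity listed above:

import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])

s = np.linalg.svd(A, compute_uv=False)      # singular values of A
spectral_norm = s[0]                        # ||A||  : largest singular value
nuclear_norm = s.sum()                      # ||A||_*: sum of singular values
frobenius_norm = np.linalg.norm(A, 'fro')   # ||A||_F
inner_product = np.trace(A.T @ B)           # <A, B> = tr(A^T B)
hadamard_product = A * B                    # A ⊙ B : element-wise multiplication
vec_A = A.flatten(order='F')                # vec(A): stack the columns of A
diag_matrix = np.diag([1., 2., 3.])         # diag(a): diagonal matrix formed by a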
|
Low-Rank Matrix Completion: A Contemporary Survey <s> 1) NUCLEAR NORM MINIMIZATION (NNM) <s> A primal-dual infeasible-interior-point path-following algorithm is proposed for solving semidefinite programming (SDP) problems. If the problem has a solution, then the algorithm is globally convergent. If the starting point is feasible or close to being feasible, the algorithm finds an optimal solution in at most $O(\sqrt{n}L)$ iterations, where n is the size of the problem and L is the logarithm of the ratio of the initial error and the tolerance. If the starting point is large enough, then the algorithm terminates in at most O(nL) steps either by finding a solution or by determining that the primal-dual problem has no solution of norm less than a given number. Moreover, we propose a sufficient condition for the superlinear convergence of the algorithm. In addition, we give two special cases of SDP for which the algorithm is quadratically convergent. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) NUCLEAR NORM MINIMIZATION (NNM) <s> An available pressurized air source, such as an inflated tire, connectable by appropriate conduit means, with flow control and pressure regulation provisions, through an air transmitter or face mask, to the breathing passages of a passenger in a submerged land vehicle to either provide emergency breathing air for the passenger, or to fill an inflatable and portable air pack which the passenger may leave the vehicle with, or both. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) NUCLEAR NORM MINIMIZATION (NNM) <s> The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semi-definite programming, and nonconvex and nonlinear problems, have reached varying levels of maturity. We review some of the key developments in the area, including comments on both the complexity theory and practical algorithms for linear programming, semi-definite programming, monotone linear complementarity, and convex programming over sets that can be characterized by self-concordant barrier functions. <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) NUCLEAR NORM MINIMIZATION (NNM) <s> Convex optimization problems arise frequently in many different fields. A comprehensive introduction to the subject, this book shows in detail how such problems can be solved numerically with great efficiency. The focus is on recognizing convex optimization problems and then finding the most appropriate technique for solving them. The text contains many worked examples and homework exercises and will appeal to students, researchers and practitioners in fields such as engineering, computer science, mathematics, statistics, finance, and economics. <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) NUCLEAR NORM MINIMIZATION (NNM) <s> The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard. 
In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. <s> BIB005 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) NUCLEAR NORM MINIMIZATION (NNM) <s> We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? ::: ::: We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys $$m\ge C\,n^{1.2}r\log n$$ for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information. <s> BIB006 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) NUCLEAR NORM MINIMIZATION (NNM) <s> The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices. <s> BIB007 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) NUCLEAR NORM MINIMIZATION (NNM) <s> Location awareness, providing the ability to identify the location of sensor, machine, vehicle, and wearable device, is a rapidly growing trend of hyper-connected society and one of the key ingredients for the Internet of Things (IoT) era. In order to make a proper reaction to the collected information from things , location information of things should be available at the data center. One challenge for the IoT networks is to identify the location map of whole nodes from partially observed distance information. 
The aim of this paper is to present an algorithm to recover the Euclidean distance matrix (and eventually the location map) from partially observed distance information. By casting the low-rank matrix completion problem into the unconstrained minimization problem in a Riemannian manifold in which a notion of differentiability can be defined, we solve the low-rank matrix completion problem using a modified conjugate gradient algorithm. From the convergence analysis, we show that localization in Riemannian manifold using conjugate gradient (LRM-CG) converges linearly to the original Euclidean distance matrix under the extended Wolfe’s conditions. From the numerical experiments, we demonstrate that the proposed method, called LRM-CG, is effective in recovering the Euclidean distance matrix. <s> BIB008
|
Since the rank minimization problem (10) is NP-hard , it is computationally intractable when the dimension of a matrix is large. One common trick to avoid this computational issue is to replace the non-convex objective function with its convex surrogate, that is, to convert the combinatorial search problem into a convex optimization problem. There are two clear advantages in solving the convex optimization problem: 1) a local optimum solution is globally optimal and 2) there are many efficient polynomial-time convex optimization solvers (e.g., the interior-point method BIB004 and semidefinite programming (SDP) solvers). In the LRMC problem, the nuclear norm ‖X‖_*, the sum of the singular values of X, has been widely used as a convex surrogate of rank(X) BIB006 :

min_X ‖X‖_*  subject to  P_Ω(X) = P_Ω(M).  (11)

Indeed, it has been shown that the nuclear norm is the convex envelope (the ''best'' convex approximation) of the rank function on the set {X ∈ R^{n_1×n_2} : ‖X‖ ≤ 1} BIB008 . Note that the relaxation from the rank function to the nuclear norm is conceptually analogous to the relaxation from the ℓ_0-norm to the ℓ_1-norm in compressed sensing (CS). It has been shown that if the observed entries of a rank-r matrix M ∈ R^{n×n} are suitably random and the number of observed entries satisfies

|Ω| ≥ C µ_0 n^{1.2} r log n,  (12)

where µ_0 is the largest coherence of M (see the definition in Subsection III-A.2) and C is a positive numerical constant, then M is the unique solution of the NNM problem (11) with overwhelming probability (see Appendix B). It is worth mentioning that the NNM problem in (11) can also be recast as the semidefinite program (SDP) (see )

min_{X, W_1, W_2} (1/2)(tr(W_1) + tr(W_2))
subject to  [[W_1, X], [X^T, W_2]] ⪰ 0,  ⟨A_k, X⟩ = b_k,  k = 1, . . . , |Ω|,  (13)

where {A_k}_{k=1}^{|Ω|} is the sequence of linear sampling matrices and {b_k}_{k=1}^{|Ω|} are the observed entries. The problem (13) can be solved by off-the-shelf SDP solvers such as SDPT3 and SeDuMi BIB002 using interior-point methods - BIB001 . It has been shown that the computational complexity of SDP techniques is O(n^3) where n = max(n_1, n_2) . Also, it has been shown that under suitable conditions, the output M̂ of the SDP satisfies ‖M̂ − M‖_F ≤ ε in at most O(n^ω log(1/ε)) iterations, where ω is a positive constant BIB003 . Alternatively, one can reconstruct M by solving the equivalent nonconvex quadratic optimization form of the NNM problem BIB005 . Note that this approach has a computational benefit since the number of primal variables of NNM is reduced from n_1 n_2 to r(n_1 + n_2) (r ≤ min(n_1, n_2)). Interested readers may refer to BIB005 for more details.
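To illustrate how the NNM problem (11) can be prototyped, the sketch below uses the cvxpy package and its nuclear norm atom normNuc (the use of cvxpy and its default conic solver is our assumption; the works above rely on dedicated SDP solvers such as SDPT3 and SeDuMi) to recover a synthetic rank-2 matrix from about half of its entries:

import numpy as np
import cvxpy as cp

np.random.seed(1)
n, r = 30, 2
M = np.random.randn(n, r) @ np.random.randn(r, n)   # rank-2 ground truth
mask = np.random.rand(n, n) < 0.5                   # observed index set Omega
rows, cols = np.where(mask)

X = cp.Variable((n, n))
# min ||X||_*  subject to  P_Omega(X) = P_Omega(M)
problem = cp.Problem(cp.Minimize(cp.normNuc(X)),
                     [X[rows, cols] == M[rows, cols]])
problem.solve()

print(np.linalg.norm(X.value - M, 'fro') / np.linalg.norm(M, 'fro'))

When the number of observed entries is sufficient in the sense of (12), the printed relative error should be close to zero.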
|
Low-Rank Matrix Completion: A Contemporary Survey <s> 2) SINGULAR VALUE THRESHOLDING (SVT) <s> Convex optimization problems arise frequently in many different fields. A comprehensive introduction to the subject, this book shows in detail how such problems can be solved numerically with great efficiency. The focus is on recognizing convex optimization problems and then finding the most appropriate technique for solving them. The text contains many worked examples and homework exercises and will appeal to students, researchers and practitioners in fields such as engineering, computer science, mathematics, statistics, finance, and economics. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) SINGULAR VALUE THRESHOLDING (SVT) <s> This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative and produces a sequence of matrices (X^k, Y^k) and at each step, mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates X^k is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. We provide numerical examples in which 1,000 by 1,000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with linearized Bregman iterations for l1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) SINGULAR VALUE THRESHOLDING (SVT) <s> The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices. 
<s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) SINGULAR VALUE THRESHOLDING (SVT) <s> This paper develops a novel framework for phase retrieval, a problem which arises in X-ray crystallography, diffraction imaging, astronomical imaging, and many other applications. Our approach, called PhaseLift, combines multiple structured illuminations together with ideas from convex programming to recover the phase from intensity measurements, typically from the modulus of the diffracted wave. We demonstrate empirically that a complex-valued object can be recovered from the knowledge of the magnitude of just a few diffracted patterns by solving a simple convex optimization problem inspired by the recent literature on matrix completion. More importantly, we also demonstrate that our noise-aware algorithms are stable in the sense that the reconstruction degrades gracefully as the signal-to-noise ratio decreases. Finally, we introduce some theory showing that one can design very simple structured illumination patterns such that three diffracted figures uniquely determine the phase of the object we wish to... <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) SINGULAR VALUE THRESHOLDING (SVT) <s> We consider the problem of channel estimation for millimeter wave (mmWave) systems, where, to minimize the hardware complexity and power consumption, an analog transmit beamforming and receive combining structure with only one radio frequency (RF) chain at the base station (BS) and mobile station (MS) is employed. Most existing works for mmWave channel estimation exploit sparse scattering characteristics of the channel. In addition to sparsity, mmWave channels may exhibit angular spreads over the angle of arrival (AoA), angle of departure (AoD), and elevation domains. In this paper, we show that angular spreads give rise to a useful low-rank structure that, along with the sparsity, can be simultaneously utilized to reduce the sample complexity, i.e. the number of samples needed to successfully recover the mmWave channel. Specifically, to effectively leverage the joint sparse and low-rank structure, we develop a two-stage compressed sensing method for mmWave channel estimation, where the sparse and low-rank properties are respectively utilized in two consecutive stages, namely, a matrix completion stage and a sparse recovery stage. Our theoretical analysis reveals that the proposed two-stage scheme can achieve a lower sample complexity than a direct compressed sensing method that exploits only the sparse structure of the mmWave channel. Simulation results are provided to corroborate our theoretical results and to show the superiority of the proposed two-stage method. <s> BIB005
|
While the solution of the NNM problem in (11) can be obtained by solving the SDP in (13), this procedure is computationally burdensome when the size of the matrix is large. In an effort to mitigate the computational burden, the singular value thresholding (SVT) algorithm has been proposed BIB002 . The key idea of this approach is to put a regularization term into the objective function of the NNM problem:

min_X τ‖X‖_* + (1/2)‖X‖_F²  subject to  P_Ω(X) = P_Ω(M),  (14)

where τ is the regularization parameter. In [33, Theorem 3.1], it has been shown that the solution to the problem (14) converges to the solution of the NNM problem as τ → ∞ BIB004 . Let L(X, Y) be the Lagrangian function associated with (14), i.e.,

L(X, Y) = τ‖X‖_* + (1/2)‖X‖_F² + ⟨Y, P_Ω(M) − P_Ω(X)⟩,  (15)

where Y is the dual variable. Let X̂ and Ŷ be the primal and dual optimal solutions. Then, by the strong duality BIB001 , we have

L(X̂, Y) ≤ L(X̂, Ŷ) ≤ L(X, Ŷ)  for all X and Y.  (16)

The SVT algorithm finds X̂ and Ŷ in an iterative fashion. Specifically, starting with Y^0 = 0_{n_1×n_2}, SVT updates X^k and Y^k as

X^k = argmin_X L(X, Y^{k−1}),  (17a)
Y^k = Y^{k−1} + δ_k P_Ω(M − X^k),  (17b)

where {δ_k}_{k≥1} is a sequence of positive step sizes. Note that X^k can be expressed as

X^k = argmin_X τ‖X‖_* + (1/2)‖X‖_F² − ⟨Y^{k−1}, P_Ω(X)⟩
    (a)= argmin_X τ‖X‖_* + (1/2)‖X‖_F² − ⟨P_Ω(Y^{k−1}), X⟩
    (b)= argmin_X τ‖X‖_* + (1/2)‖X − Y^{k−1}‖_F²,  (18)

where (a) is because ⟨P_Ω(A), B⟩ = ⟨A, P_Ω(B)⟩ and (b) is because Y^{k−1} vanishes outside of Ω (i.e., P_Ω(Y^{k−1}) = Y^{k−1}) by (17b). Due to the inclusion of the nuclear norm, finding the solution X^k of (18) seems difficult. However, thanks to an intriguing result of Cai et al., we can easily obtain the solution.
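Theorem 1 below states this result: the minimizer of (18) is obtained by soft-thresholding the singular values of Y^{k−1}. As a preview, a minimal numpy sketch of this shrinkage operation (the helper name svd_shrink is ours) is:

import numpy as np

def svd_shrink(Z, tau):
    # D_tau(Z): soft-threshold the singular values of Z by tau
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt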
|
Low-Rank Matrix Completion: A Contemporary Survey <s> Theorem 1 ([33, Theorem 2.1]): Let Z be a matrix whose singular value decomposition (SVD) is <s> This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative and produces a sequence of matrices (X^k, Y^k) and at each step, mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates X^k is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. We provide numerical examples in which 1,000 by 1,000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with linearized Bregman iterations for l1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> Theorem 1 ([33, Theorem 2.1]): Let Z be a matrix whose singular value decomposition (SVD) is <s> Minimizing the rank of a matrix subject to affine constraints is a fundamental problem with many important applications in machine learning and statistics. In this paper we propose a simple and fast algorithm SVP (Singular Value Projection) for rank minimization under affine constraints (ARMP) and show that SVP recovers the minimum rank solution for affine constraints that satisfy a restricted isometry property (RIP). Our method guarantees geometric convergence rate even in the presence of noise and requires strictly weaker assumptions on the RIP constants than the existing methods. We also introduce a Newton-step for our SVP framework to speed-up the convergence with substantial empirical gains. Next, we address a practically important application of ARMP - the problem of low-rank matrix completion, for which the defining affine constraints do not directly obey RIP, hence the guarantees of SVP do not hold. However, we provide partial progress towards a proof of exact recovery for our algorithm by showing a more restricted isometry property and observe empirically that our algorithm recovers low-rank incoherent matrices from an almost optimal number of uniformly sampled entries. 
We also demonstrate empirically that our algorithms outperform existing methods, such as those of [5, 18, 14], for ARMP and the matrix completion problem by an order of magnitude and are also more robust to noise and sampling schemes. In particular, results show that our SVP-Newton method is significantly robust to noise and performs impressively on a more realistic power-law sampling scheme for the matrix completion problem. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> Theorem 1 ([33, Theorem 2.1]): Let Z be a matrix whose singular value decomposition (SVD) is <s> Matrices of low rank can be uniquely determined from fewer linear measurements, or entries, than the total number of entries in the matrix. Moreover, there is a growing literature of computationally efficient algorithms which can recover a low rank matrix from such limited information; this process is typically referred to as matrix completion. We introduce a particularly simple yet highly efficient alternating projection algorithm which uses an adaptive stepsize calculated to be exact for a restricted subspace. This method is proven to have near-optimal order recovery guarantees from dense measurement masks and is observed to have average case performance superior in some respects to other matrix completion algorithms for both dense measurement masks and entry measurements. In particular, this proposed algorithm is able to recover matrices from extremely close to the minimum number of measurements necessary. <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> Theorem 1 ([33, Theorem 2.1]): Let Z be a matrix whose singular value decomposition (SVD) is <s> We discuss the analysis and design of an Environmental Monitoring Application.The application is reliable and maintenance-free, runs in multihop wireless network.We analyze the different alternatives and tradeoffs, using open source software.The application is validated in long-term outdoor deployments with good results.Related work does not analyze the software design with open source. We discuss the entire process for the analysis and design of an Environmental Monitoring Application for Wireless Sensor Networks, using existing open source components to create the application. We provide a thorough study of the different alternatives, from the selection of the embedded operating system to the different algorithms and strategies. The application has been designed to gather temperature and relative humidity data following the rules of quality assurance for environmental measurements, suitable for use in both research and industry. The main features of the application are: (a) runs in a multihop low-cost network based on IEEE 802.15.4, (b) improved network reliability and lifetimes, (c) easy management and maintenance-free, (d) ported to different platforms and (e) allows different configurations and network topologies. The application has been tested and validated in several long-term outdoor deployments with very good results and the conclusions are aligned with the experimental evidence. <s> BIB004
|
Z = U diag({σ_i}_{1≤i≤r}) V^T, where r = rank(Z). Then, for each τ ≥ 0,

argmin_X τ‖X‖_* + (1/2)‖X − Z‖_F² = D_τ(Z),  (19)

where D_τ is the singular value thresholding operator defined as

D_τ(Z) = U diag({(σ_i − τ)_+}_{1≤i≤r}) V^T,  (20)

with (t)_+ = max(t, 0). To conclude, the update equations for X^k and Y^k are given by

X^k = D_τ(Y^{k−1}),  (21a)
Y^k = Y^{k−1} + δ_k P_Ω(M − X^k).  (21b)

One can notice from (21a) and (21b) that the SVT algorithm is computationally efficient since we only need a truncated SVD and elementary matrix operations in each iteration. Indeed, let r_k be the number of singular values of Y^{k−1} greater than the threshold τ, and suppose that {r_k} converges to the rank of the original matrix, i.e., lim_{k→∞} r_k = r. Then the computational complexity of SVT is O(r n_1 n_2). Note also that the number of iterations needed to achieve an ε-approximation BIB004 is O(1/√ε) BIB001 . In Table 1, we summarize the SVT algorithm. For the details of the stopping criterion of SVT, see BIB001 , Section 5. Over the years, various SVT-based techniques have been proposed BIB003 , BIB002 . In , an iterative matrix completion algorithm using an SVT-based operator called the proximal operator has been proposed. Similar algorithms inspired by the iterative hard thresholding (IHT) algorithm in CS have also been proposed BIB003 , BIB002 .
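Putting (21a) and (21b) together, a minimal numpy sketch of the SVT iteration is given below; the function name, the stopping rule, and the heuristic parameter choices in the usage comment (in the spirit of the suggestions in BIB001) are our assumptions for illustration:

import numpy as np

def svt_complete(M_obs, mask, tau, delta, iters=500, tol=1e-4):
    # M_obs: float matrix with observed entries (zeros elsewhere)
    # mask : boolean indicator matrix of the observed set Omega
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # (21a): X^k = D_tau(Y^{k-1})
        R = mask * (M_obs - X)                           # P_Omega(M - X^k)
        if np.linalg.norm(R) <= tol * np.linalg.norm(M_obs):
            break
        Y = Y + delta * R                                # (21b): dual variable update
    return X

# Usage sketch with heuristic parameters tau = 5*sqrt(n1*n2) and delta = 1.2/p:
# X_hat = svt_complete(mask * M, mask, tau=5 * np.sqrt(M.size), delta=1.2 / mask.mean())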
|
Low-Rank Matrix Completion: A Contemporary Survey <s> 3) ITERATIVELY REWEIGHTED LEAST SQUARES (IRLS) MINIMIZATION <s> We present and analyze an efficient implementation of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements. The algorithm is designed for the simultaneous promotion of both a minimal nuclear norm and an approximatively low-rank solution. Under the assumption that the linear measurements fulfill a suitable generalization of the Null Space Property known in the context of compressed sensing, the algorithm is guaranteed to recover iteratively any matrix with an error of the order of the best k-rank approximation. In certain relevant cases, for instance for the matrix completion problem, our version of this algorithm can take advantage of the Woodbury matrix identity, which allows to expedite the solution of the least squares problems required at each iteration. We present numerical experiments that confirm the robustness of the algorithm for the solution of matrix completion problems, and demonstrate its competitiveness with respect to other techniques proposed recently in the literature. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 3) ITERATIVELY REWEIGHTED LEAST SQUARES (IRLS) MINIMIZATION <s> We discuss the analysis and design of an Environmental Monitoring Application.The application is reliable and maintenance-free, runs in multihop wireless network.We analyze the different alternatives and tradeoffs, using open source software.The application is validated in long-term outdoor deployments with good results.Related work does not analyze the software design with open source. We discuss the entire process for the analysis and design of an Environmental Monitoring Application for Wireless Sensor Networks, using existing open source components to create the application. We provide a thorough study of the different alternatives, from the selection of the embedded operating system to the different algorithms and strategies. The application has been designed to gather temperature and relative humidity data following the rules of quality assurance for environmental measurements, suitable for use in both research and industry. The main features of the application are: (a) runs in a multihop low-cost network based on IEEE 802.15.4, (b) improved network reliability and lifetimes, (c) easy management and maintenance-free, (d) ported to different platforms and (e) allows different configurations and network topologies. The application has been tested and validated in several long-term outdoor deployments with very good results and the conclusions are aligned with the experimental evidence. <s> BIB002
|
Yet another simple and computationally efficient way to solve the NNM problem is the IRLS minimization technique BIB001 . In essence, the NNM problem can be recast as the weighted least squares problem

min_X ‖W^{1/2} X‖_F²  subject to  P_Ω(X) = P_Ω(M),  (22)

where W = (XX^T)^{−1/2}. It can be shown that (22) is equivalent to the NNM problem (11) since we have BIB001

‖X‖_* = tr((XX^T)^{1/2}) = tr(W XX^T) = ‖W^{1/2} X‖_F².  (23)

The key idea of the IRLS technique is to find X and W in an iterative fashion. The update expressions are

X^k = argmin_{P_Ω(X)=P_Ω(M)} ‖(W^{k−1})^{1/2} X‖_F²,  (24a)
W^k = (X^k (X^k)^T)^{−1/2}.  (24b)

Note that the weighted least squares subproblem (24a) can be easily solved by updating the columns of X^k one by one BIB001 . In order to compute W^k in (24b), we need a matrix inversion. To avoid ill-behavior (i.e., some of the singular values of X^k approaching zero), an approach using a perturbation of the singular values has been proposed BIB001 , BIB002 . (By ε-approximation, we mean ‖M̂ − M*‖_F ≤ ε, where M̂ is the reconstructed matrix and M* is the optimal solution of SVT.) Similar to SVT, the computational complexity per iteration of the IRLS-based technique is O(r n_1 n_2). Also, IRLS requires O(log(1/ε)) iterations to achieve an ε-approximate solution. We summarize the IRLS minimization technique in Table 2.
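A minimal numpy sketch of the IRLS iteration (24a)-(24b) is given below. The weighted least squares subproblem (24a) is solved column by column with the observed entries held fixed; the perturbation parameter eps and its decay schedule are our assumptions, added in the spirit of the singular value perturbation approach mentioned above:

import numpy as np

def irls_complete(M_obs, mask, iters=100, eps=1e-2):
    # M_obs: float matrix with observed entries (zeros elsewhere)
    # mask : boolean indicator matrix of the observed set Omega
    n1, n2 = M_obs.shape
    X = M_obs.copy()
    for _ in range(iters):
        # (24b) with perturbation: W = (X X^T + eps*I)^(-1/2) via eigendecomposition
        lam, Q = np.linalg.eigh(X @ X.T + eps * np.eye(n1))
        W = Q @ np.diag(lam ** -0.5) @ Q.T
        # (24a): for each column x, minimize x^T W x over the unobserved entries
        for j in range(n2):
            o = np.where(mask[:, j])[0]      # observed rows of column j
            f = np.where(~mask[:, j])[0]     # free (unobserved) rows of column j
            if f.size > 0:
                X[f, j] = -np.linalg.solve(W[np.ix_(f, f)],
                                           W[np.ix_(f, o)] @ M_obs[o, j])
        eps = max(0.9 * eps, 1e-9)           # gradually shrink the perturbation
    return X

Setting the gradient of x^T W x to zero with the observed part x_o fixed gives x_f = −W_ff^{−1} W_fo x_o, which is exactly the per-column solve above.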
|
Low-Rank Matrix Completion: A Contemporary Survey <s> B. LRMC ALGORITHMS USING RANK INFORMATION <s> An available pressurized air source, such as an inflated tire, connectable by appropriate conduit means, with flow control and pressure regulation provisions, through an air transmitter or face mask, to the breathing passages of a passenger in a submerged land vehicle to either provide emergency breathing air for the passenger, or to fill an inflatable and portable air pack which the passenger may leave the vehicle with, or both. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> B. LRMC ALGORITHMS USING RANK INFORMATION <s> Algorithms to construct/recover low-rank matrices satisfying a set of linear equality constraints have important applications in many signal processing contexts. Recently, theoretical guarantees for minimum-rank matrix recovery have been proven for nuclear norm minimization (NNM), which can be solved using standard convex optimization approaches. While nuclear norm minimization is effective, it can be computationally demanding. In this work, we explore the use of the PowerFactorization (PF) algorithm as a tool for rank-constrained matrix recovery. Empirical results indicate that incremented-rank PF is significantly more successful than NNM at recovering low-rank matrices, in addition to being faster. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> B. LRMC ALGORITHMS USING RANK INFORMATION <s> We address the inverse problem that arises in compressed sensing of a low-rank matrix. Our approach is to pose the inverse problem as an approximation problem with a specified target rank of the solution. A simple search over the target rank then provides the minimum rank solution satisfying a prescribed data approximation bound. We propose an atomic decomposition that provides an analogy between parsimonious representations of a sparse vector and a low-rank matrix. Efficient greedy algorithms to solve the inverse problem for the vector case are extended to the matrix case through this atomic decomposition. In particular, we propose an efficient and guaranteed algorithm named ADMiRA that extends CoSaMP, its analogue for the vector case. The performance guarantee is given in terms of the rank-restricted isometry property and bounds both the number of iterations and the error in the approximate solution for the general case where the solution is approximately low-rank and the measurements are noisy. With a sparse measurement operator such as the one arising in the matrix completion problem, the computation in ADMiRA is linear in the number of measurements. The numerical experiments for the matrix completion problem show that, although the measurement operator in this case does not satisfy the rank-restricted isometry property, ADMiRA is a competitive algorithm for matrix completion. <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> B. LRMC ALGORITHMS USING RANK INFORMATION <s> This paper describes gradient methods based on a scaled metric on the Grassmann manifold for low-rank matrix completion. The proposed methods significantly improve canonical gradient methods, especially on ill-conditioned matrices, while maintaining established global convegence and exact recovery guarantees. A connection between a form of subspace iteration for matrix completion and the scaled gradient descent procedure is also established. 
The proposed conjugate gradient method based on the scaled gradient outperforms several existing algorithms for matrix completion and is competitive with recently proposed methods. <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> B. LRMC ALGORITHMS USING RANK INFORMATION <s> Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated in practice. In this paper, we propose to achieve a better approximation to the rank of matrix by truncated nuclear norm, which is given by the nuclear norm subtracted by the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the Truncated Nuclear Norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-AGPL applies the accelerated proximal gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results of the proposed algorithms in comparison to the state-of-the-art matrix completion algorithms on both synthetic and real visual datasets. <s> BIB005 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> B. LRMC ALGORITHMS USING RANK INFORMATION <s> The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a new algorithm for matrix completion that minimizes the least-square distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical nonlinear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the necessary objects from differential geometry necessary to perform optimization over this low-rank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as retraction and how vector transport lets us obtain the conjugate search directions. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach... <s> BIB006 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> B. LRMC ALGORITHMS USING RANK INFORMATION <s> In this paper we develop a new framework that captures the common landscape underlying the common non-convex low-rank matrix problems including matrix sensing, matrix completion and robust PCA. In particular, we show for all above problems (including asymmetric cases): 1) all local minima are also globally optimal; 2) no high-order saddle points exists. 
These results explain why simple algorithms such as stochastic gradient descent have global converge, and efficiently optimize these non-convex objective functions in practice. Our framework connects and simplifies the existing analyses on optimization landscapes for matrix sensing and symmetric matrix completion. The framework naturally leads to new results for asymmetric matrix completion and robust PCA. <s> BIB007
|
In many applications such as localization in IoT networks, recommendation systems, and image restoration, we encounter the situation where the rank of a desired matrix is known in advance. As mentioned, the rank of a Euclidean distance matrix in a localization problem is at most k + 2 (k is the dimension of the Euclidean space). In this situation, the LRMC problem can be formulated as the Frobenius norm minimization (FNM) problem

min_X ‖P_Ω(X) − P_Ω(M)‖_F²  subject to  rank(X) ≤ r.  (25)

Since the rank constraint is an inequality, an approach using approximate rank information (e.g., an upper bound on the rank) has been proposed BIB003 . The FNM problem has two main advantages: 1) the problem is well-posed in the noisy scenario and 2) the cost function is differentiable so that various gradient-based optimization techniques (e.g., gradient descent, conjugate gradient, Newton methods, and manifold optimization) can be used to solve the problem. Over the years, various techniques to solve the FNM problem in (25) have been proposed BIB003 - BIB004 , BIB005 . The performance guarantee of the FNM-based techniques has also been provided - . It has been shown that under suitable conditions on the sampling ratio p = |Ω|/(n_1 n_2) and the largest coherence µ_0 of M (see the definition in Subsection III-A.2), the gradient-based algorithms globally converge to M with high probability BIB007 . Well-known FNM-based LRMC techniques include greedy techniques BIB003 , alternating projection techniques BIB002 , and optimization over Riemannian manifolds BIB006 . In this subsection, we explain these techniques in detail.
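Among the simplest FNM-style baselines is alternating minimization over the factorization X = U V^T with U ∈ R^{n_1×r} and V ∈ R^{n_2×r}, which enforces rank(X) ≤ r by construction. The sketch below is a generic alternating least squares implementation (the factorization, random initialization, and the small ridge term lam are our implementation assumptions, not details of a specific algorithm surveyed here):

import numpy as np

def als_complete(M_obs, mask, r, iters=100, lam=1e-6):
    # M_obs: float matrix with observed entries (zeros elsewhere)
    # mask : boolean indicator matrix of the observed set Omega; r: target rank
    n1, n2 = M_obs.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((n1, r))
    V = rng.standard_normal((n2, r))
    for _ in range(iters):
        for i in range(n1):                  # fix V, solve for the i-th row of U
            o = np.where(mask[i, :])[0]
            if o.size > 0:
                A = V[o].T @ V[o] + lam * np.eye(r)
                U[i] = np.linalg.solve(A, V[o].T @ M_obs[i, o])
        for j in range(n2):                  # fix U, solve for the j-th row of V
            o = np.where(mask[:, j])[0]
            if o.size > 0:
                A = U[o].T @ U[o] + lam * np.eye(r)
                V[j] = np.linalg.solve(A, U[o].T @ M_obs[o, j])
    return U @ V.T

Each inner update exactly minimizes the (slightly ridge-regularized) FNM cost in (25) with the other factor fixed, so the objective is monotonically non-increasing over the iterations.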
|
Low-Rank Matrix Completion: A Contemporary Survey <s> 1) GREEDY TECHNIQUES <s> This work concerns primal--dual interior-point methods for semidefinite programming (SDP) that use a search direction originally proposed by Helmberg et al. [SIAM J. Optim., 6 (1996), pp. 342--361] and Kojima, Shindoh, and Hara [SIAM J. Optim., 7 (1997), pp. 86--125.] and recently rediscovered by Monteiro [SIAM J. Optim., 7 (1997), pp. 663--678] in a more explicit form. In analyzing these methods, a number of basic equalities and inequalities were developed in [Kojima, Shindoh, and Hara] and also in [Monteiro] through different means and in different forms. ::: In this paper, we give a concise derivation of the key equalities and inequalities for complexity analysis along the exact line used in linear programming (LP), producing basic relationships that have compact forms almost identical to their counterparts in LP. We also introduce a new formulation of the central path and variable-metric measures of centrality. These results provide convenient tools for deriving polynomiality results for primal--dual algorithms extended from LP to SDP using the aforementioned and related search directions. We present examples of such extensions, including the long-step infeasible-interior-point algorithm of Zhang [SIAM J. Optim., 4 (1994), pp. 208--227]. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) GREEDY TECHNIQUES <s> This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) GREEDY TECHNIQUES <s> Abstract Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix–vector multiplies with the sampling matrix. For compressible signals, the running time is just O ( N log 2 N ) , where N is the length of the signal. <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) GREEDY TECHNIQUES <s> We address the inverse problem that arises in compressed sensing of a low-rank matrix. Our approach is to pose the inverse problem as an approximation problem with a specified target rank of the solution. A simple search over the target rank then provides the minimum rank solution satisfying a prescribed data approximation bound. We propose an atomic decomposition that provides an analogy between parsimonious representations of a sparse vector and a low-rank matrix. Efficient greedy algorithms to solve the inverse problem for the vector case are extended to the matrix case through this atomic decomposition. 
In particular, we propose an efficient and guaranteed algorithm named ADMiRA that extends CoSaMP, its analogue for the vector case. The performance guarantee is given in terms of the rank-restricted isometry property and bounds both the number of iterations and the error in the approximate solution for the general case where the solution is approximately low-rank and the measurements are noisy. With a sparse measurement operator such as the one arising in the matrix completion problem, the computation in ADMiRA is linear in the number of measurements. The numerical experiments for the matrix completion problem show that, although the measurement operator in this case does not satisfy the rank-restricted isometry property, ADMiRA is a competitive algorithm for matrix completion. <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) GREEDY TECHNIQUES <s> Our deteriorating civil infrastructure faces the critical challenge of long-term structural health monitoring for damage detection and localization. In contrast to existing research that often separates the designs of wireless sensor networks and structural engineering algorithms, this paper proposes a cyber-physical co-design approach to structural health monitoring based on wireless sensor networks. Our approach closely integrates (1) flexibility-based damage localization methods that allow a tradeoff between the number of sensors and the resolution of damage localization, and (2) an energy-efficient, multi-level computing architecture specifically designed to leverage the multi-resolution feature of the flexibility-based approach. The proposed approach has been implemented on the Intel Imote2 platform. Experiments on a physical beam and simulations of a truss structure demonstrate the system's efficacy in damage localization and energy efficiency. <s> BIB005 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) GREEDY TECHNIQUES <s> As a greedy algorithm to recover sparse signals from compressed measurements, orthogonal matching pursuit (OMP) algorithm has received much attention in recent years. In this paper, we introduce an extension of the OMP for pursuing efficiency in reconstructing sparse signals. Our approach, henceforth referred to as generalized OMP (gOMP), is literally a generalization of the OMP in the sense that multiple N indices are identified per iteration. Owing to the selection of multiple “correct” indices, the gOMP algorithm is finished with much smaller number of iterations when compared to the OMP. We show that the gOMP can perfectly reconstruct any K-sparse signals (K >; 1), provided that the sensing matrix satisfies the RIP with δNK <; [(√N)/(√K+3√N)]. We also demonstrate by empirical simulations that the gOMP has excellent recovery performance comparable to l1-minimization technique with fast processing speed and competitive computational complexity. <s> BIB006 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) GREEDY TECHNIQUES <s> Low rank matrix completion has been applied successfully in a wide range of machine learning applications, such as collaborative filtering, image inpainting and Microarray data imputation. However, many existing algorithms are not scalable to large-scale problems, as they involve computing singular value decomposition. In this paper, we present an efficient and scalable algorithm for matrix completion. The key idea is to extend the well-known orthogonal matching pursuit from the vector case to the matrix case. 
In each iteration, we pursue a rank-one matrix basis generated by the top singular vector pair of the current approximation residual and update the weights for all rank-one matrices obtained up to the current iteration. We further propose a novel weight updating rule to reduce the time and storage complexity, making the proposed algorithm scalable to large matrices. We establish the linear convergence of the proposed algorithm. The fast convergence is achieved due to the proposed construction of matrix bases and the estimation of the weights. We empirically evaluate the proposed algorithm on many real-world large-scale datasets. Results show that our algorithm is much more efficient than state-of-the-art matrix completion algorithms while achieving similar or better prediction performance. <s> BIB007
|
In recent years, greedy algorithms have been popularly used for LRMC due to their computational simplicity. In a nutshell, they solve the LRMC problem by making a heuristic decision at each iteration with the hope of finding the right solution in the end. Let r be the rank of the desired low-rank matrix M ∈ R^{n×n} and M = UΣV^T be the singular value decomposition of M, where U, V ∈ R^{n×r}. Then M can be expressed as a linear combination of r rank-one matrices:

M = Σ_{i=1}^{r} σ_i u_i v_i^T,

where σ_i is the i-th largest singular value of M and u_i, v_i are the corresponding columns of U and V. The main task of greedy techniques is to investigate the atom set

A_M = {u_i v_i^T : i = 1, ..., r}

of M. Once the atom set A_M is found, the singular values σ_i(M) = σ_i can be computed easily by solving the following problem:

(σ_1, ..., σ_r) = arg min_{(α_1,...,α_r)} ||P_Ω(M) − P_Ω(Σ_{i=1}^{r} α_i u_i v_i^T)||_F.    (27)

One popular greedy technique is atomic decomposition for minimum rank approximation (ADMiRA) BIB004 , which can be viewed as an extension of the compressive sampling matching pursuit (CoSaMP) algorithm in CS BIB003 - BIB006 . ADMiRA employs a strategy of adding as well as pruning to identify the atom set A_M. In the addition stage, ADMiRA identifies the 2r rank-one matrices that best represent the residual and adds them to the pre-chosen atom set. Specifically, if X_{i−1} is the output matrix generated in the (i − 1)-th iteration and A_{i−1} is its atom set, then ADMiRA computes the residual R_i = P_Ω(M) − P_Ω(X_{i−1}) and adds the 2r leading principal components of R_i to A_{i−1}. In other words, the enlarged atom set Ψ_i is given by

Ψ_i = A_{i−1} ∪ {u_{R_i,j} v_{R_i,j}^T : 1 ≤ j ≤ 2r},

where u_{R_i,j} and v_{R_i,j} are the j-th principal left and right singular vectors of R_i, respectively. Note that Ψ_i contains at most 3r elements. In the pruning stage, ADMiRA refines Ψ_i into a set of r atoms. To be specific, if X_i is the best rank-3r approximation of M over the atoms in Ψ_i, i.e.,

X_i = arg min_{X : A_X ⊆ Ψ_i} ||P_Ω(M) − P_Ω(X)||_F,    (29)

then the refined atom set A_i is expressed as

A_i = {u_{X_i,j} v_{X_i,j}^T : 1 ≤ j ≤ r},

where u_{X_i,j} and v_{X_i,j} are the j-th principal left and right singular vectors of X_i, respectively. (Note that the solution to (29) can be computed in a similar way as in (27).) The computational complexity of ADMiRA is mainly due to two operations: the least squares operation in (29) and the SVD-based operation to find the leading atoms of the required matrices (e.g., R_i and X_i). First, the least squares problem (29) involves the pseudo-inverse of an |Ω| × O(r) matrix. Second, the computational cost of performing a truncated SVD of O(r) atoms is O(rn_1 n_2). Since |Ω| < n_1 n_2, the computational complexity of ADMiRA per iteration is O(rn_1 n_2). Also, the number of iterations of ADMiRA to achieve an ε-approximation is O(log(1/ε)) BIB004 . In Table 3 , we summarize the ADMiRA algorithm. Yet another well-known greedy method is the rank-one matrix pursuit algorithm BIB007 , an extension of the orthogonal matching pursuit algorithm in CS BIB002 . In this approach, instead of choosing multiple atoms at a time, a single atom corresponding to the largest singular value of the residual matrix R_i is chosen.
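To illustrate the addition and pruning stages, below is a compact NumPy sketch following the structure of ADMiRA BIB004 ; the dense-SVD implementation, the final rank-r truncation in the pruning step, and all names and defaults are our simplifications for illustration, not the authors' reference implementation.

```python
import numpy as np

def top_atoms(A, k):
    """Return the k leading rank-one atoms u_j v_j^T of A, shape (k, n1, n2)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = min(k, s.size)
    return np.array([np.outer(U[:, j], Vt[j, :]) for j in range(k)])

def admira(M_obs, mask, r, n_iter=50):
    """Sketch of ADMiRA: add 2r atoms of the residual, least-squares fit, prune to r."""
    n1, n2 = M_obs.shape
    X = np.zeros((n1, n2))
    atoms = np.zeros((0, n1, n2))
    for _ in range(n_iter):
        R = mask * (M_obs - X)                                 # residual on Omega
        atoms = np.concatenate([atoms, top_atoms(R, 2 * r)])   # addition stage
        # least squares over the (at most 3r) atoms, restricted to Omega
        A = (mask * atoms).reshape(len(atoms), -1).T
        s, *_ = np.linalg.lstsq(A, (mask * M_obs).ravel(), rcond=None)
        X_tmp = np.tensordot(s, atoms, axes=1)
        # pruning stage: keep the r leading atoms and the rank-r iterate
        atoms = top_atoms(X_tmp, r)
        U, sv, Vt = np.linalg.svd(X_tmp, full_matrices=False)
        X = (U[:, :r] * sv[:r]) @ Vt[:r, :]
    return X
```

As in the discussion above, the two dominant costs per iteration are the least squares step (the pseudo-inverse of an |Ω| × O(r) matrix) and the truncated SVDs.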
|
Low-Rank Matrix Completion: A Contemporary Survey <s> 2) ALTERNATING MINIMIZATION TECHNIQUES <s> A primal-dual infeasible-interior-point path-following algorithm is proposed for solving semidefinite programming (SDP) problems. If the problem has a solution, then the algorithm is globally convergent. If the starting point is feasible or close to being feasible, the algorithm finds an optimal solution in at most $O(\sqrt{n}L)$ iterations, where n is the size of the problem and L is the logarithm of the ratio of the initial error and the tolerance. If the starting point is large enough, then the algorithm terminates in at most O(nL) steps either by finding a solution or by determining that the primal-dual problem has no solution of norm less than a given number. Moreover, we propose a sufficient condition for the superlinear convergence of the algorithm. In addition, we give two special cases of SDP for which the algorithm is quadratically convergent. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) ALTERNATING MINIMIZATION TECHNIQUES <s> This work concerns primal--dual interior-point methods for semidefinite programming (SDP) that use a search direction originally proposed by Helmberg et al. [SIAM J. Optim., 6 (1996), pp. 342--361] and Kojima, Shindoh, and Hara [SIAM J. Optim., 7 (1997), pp. 86--125.] and recently rediscovered by Monteiro [SIAM J. Optim., 7 (1997), pp. 663--678] in a more explicit form. In analyzing these methods, a number of basic equalities and inequalities were developed in [Kojima, Shindoh, and Hara] and also in [Monteiro] through different means and in different forms. ::: In this paper, we give a concise derivation of the key equalities and inequalities for complexity analysis along the exact line used in linear programming (LP), producing basic relationships that have compact forms almost identical to their counterparts in LP. We also introduce a new formulation of the central path and variable-metric measures of centrality. These results provide convenient tools for deriving polynomiality results for primal--dual algorithms extended from LP to SDP using the aforementioned and related search directions. We present examples of such extensions, including the long-step infeasible-interior-point algorithm of Zhang [SIAM J. Optim., 4 (1994), pp. 208--227]. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) ALTERNATING MINIMIZATION TECHNIQUES <s> This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative and produces a sequence of matrices (X^k, Y^k) and at each step, mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. There are two remarkable features making this attractive for low-rank matrix completion problems. 
The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates X^k is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. We provide numerical examples in which 1,000 by 1,000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with linearized Bregman iterations for l1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms. <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) ALTERNATING MINIMIZATION TECHNIQUES <s> Algorithms to construct/recover low-rank matrices satisfying a set of linear equality constraints have important applications in many signal processing contexts. Recently, theoretical guarantees for minimum-rank matrix recovery have been proven for nuclear norm minimization (NNM), which can be solved using standard convex optimization approaches. While nuclear norm minimization is effective, it can be computationally demanding. In this work, we explore the use of the PowerFactorization (PF) algorithm as a tool for rank-constrained matrix recovery. Empirical results indicate that incremented-rank PF is significantly more successful than NNM at recovering low-rank matrices, in addition to being faster. <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) ALTERNATING MINIMIZATION TECHNIQUES <s> We address the inverse problem that arises in compressed sensing of a low-rank matrix. Our approach is to pose the inverse problem as an approximation problem with a specified target rank of the solution. A simple search over the target rank then provides the minimum rank solution satisfying a prescribed data approximation bound. We propose an atomic decomposition that provides an analogy between parsimonious representations of a sparse vector and a low-rank matrix. Efficient greedy algorithms to solve the inverse problem for the vector case are extended to the matrix case through this atomic decomposition. In particular, we propose an efficient and guaranteed algorithm named ADMiRA that extends CoSaMP, its analogue for the vector case. The performance guarantee is given in terms of the rank-restricted isometry property and bounds both the number of iterations and the error in the approximate solution for the general case where the solution is approximately low-rank and the measurements are noisy. With a sparse measurement operator such as the one arising in the matrix completion problem, the computation in ADMiRA is linear in the number of measurements. The numerical experiments for the matrix completion problem show that, although the measurement operator in this case does not satisfy the rank-restricted isometry property, ADMiRA is a competitive algorithm for matrix completion. <s> BIB005 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) ALTERNATING MINIMIZATION TECHNIQUES <s> This book describes and analyzes all available alternating projection methods for solving the general problem of finding a point in the intersection of several given sets belonging to a Hilbert space. 
For each method the authors describe and analyze convergence, speed of convergence, acceleration techniques, stopping criteria, and applications. Different types of algorithms and applications are studied for subspaces, linear varieties, and general convex sets. The authors also unify these algorithms into a common theoretical framework. Alternating Projection Methods is a comprehensive and accessible source of information, providing readers with the theoretical and practical aspects of the most relevant alternating projection methods. It features several acceleration techniques for every method it presents and analyzes, including schemes that cannot be found in other books. It also provides full descriptions of several important mathematical problems and specific applications for which the alternating projection methods represent an efficient option. Examples and problems that illustrate this material are also included. Audience: This book can be used as a textbook for advanced undergraduate or first-year graduate students. Because it is comprehensive, it can also be used as a tutorial or a reference by mathematicians and nonmathematicians from many fields of application who need to solve alternating projection problems in their work. Contents: Preface; Chapter 1: Introduction; Chapter 2: Overview on Spaces; Chapter 3: The MAP on Subspaces; Chapter 4: Row-Action Methods; Chapter 5: Projecting on Convex Sets; Chapter 6: Applications of MAP for Matrix Problems; Bibliography; Author Index; Subject Index. <s> BIB006 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) ALTERNATING MINIMIZATION TECHNIQUES <s> Matrix completion involves recovering a matrix from a subset of its entries by utilizing interdependency between the entries, typically through low rank structure. Despite matrix completion requiring the global solution of a non-convex objective, there are many computationally efficient algorithms which are effective for a broad class of matrices. In this paper, we introduce an alternating steepest descent algorithm (ASD) and a scaled variant, ScaledASD, for the fixed-rank matrix completion problem. Empirical evaluation of ASD and ScaledASD on both image inpainting and random problems show they are competitive with other state-of-the-art matrix completion algorithms in terms of recoverable rank and overall computational time. In particular, their low per iteration computational complexity makes ASD and ScaledASD efficient for large size problems, especially when computing the solutions to moderate accuracy such as in the presence of model misfit, noise, and/or as an initialization strategy for higher order methods. A preliminary convergence analysis is also presented. <s> BIB007
|
Many LRMC algorithms BIB003 , BIB005 require the computation of a (partial) SVD to obtain the singular values and vectors, whose cost is O(rn²). As an effort to further reduce this computational burden, alternating minimization techniques have been proposed BIB004 - . The basic premise behind this approach is that a low-rank matrix M ∈ R^{n_1×n_2} of rank r can be factorized into tall and fat matrices, i.e., M = XY where X ∈ R^{n_1×r} and Y ∈ R^{r×n_2} (r ≪ n_1, n_2). The key idea of this approach is to find X and Y minimizing the residual (the difference between the original matrix and its estimate) on the sampling space. In other words, X and Y are recovered by solving

min_{X,Y} ||P_Ω(M) − P_Ω(XY)||_F².    (31)

Power factorization, a simple alternating minimization algorithm, finds the solution to (31) by updating X and Y alternately as BIB004

X_{i+1} = arg min_X ||P_Ω(M) − P_Ω(X Y_i)||_F²,
Y_{i+1} = arg min_Y ||P_Ω(M) − P_Ω(X_{i+1} Y)||_F².

Alternating steepest descent (ASD) is another alternating method to find the solution BIB007 . The key idea of ASD is to update X and Y by applying the steepest gradient descent method to the objective function f(X, Y) = (1/2)||P_Ω(M) − P_Ω(XY)||_F² of (31). Specifically, ASD first computes the gradient of f(X, Y) with respect to X and then updates X along the steepest descent direction:

X_{i+1} = X_i − t_{x_i} ∇f_{Y_i}(X_i),

where the gradient descent direction ∇f_{Y_i}(X_i) and the stepsize t_{x_i} are given by

∇f_{Y_i}(X_i) = −(P_Ω(M) − P_Ω(X_i Y_i)) Y_i^T,
t_{x_i} = ||∇f_{Y_i}(X_i)||_F² / ||P_Ω(∇f_{Y_i}(X_i) Y_i)||_F².

After updating X, ASD updates Y in a similar way:

Y_{i+1} = Y_i − t_{y_i} ∇f_{X_{i+1}}(Y_i),

where

∇f_{X_{i+1}}(Y_i) = −X_{i+1}^T (P_Ω(M) − P_Ω(X_{i+1} Y_i)),
t_{y_i} = ||∇f_{X_{i+1}}(Y_i)||_F² / ||P_Ω(X_{i+1} ∇f_{X_{i+1}}(Y_i))||_F².

The low-rank matrix fitting (LMaFit) algorithm finds the solution in a different way by solving

arg min_{X,Y,Z} ||XY − Z||_F²  subject to  P_Ω(Z) = P_Ω(M).

With arbitrary inputs X_0 ∈ R^{n_1×r} and Y_0 ∈ R^{r×n_2} and Z_0 = P_Ω(M), the variables X, Y, and Z are updated in the i-th iteration as

X_{i+1} = Z_i Y_i^†,
Y_{i+1} = X_{i+1}^† Z_i,
Z_{i+1} = X_{i+1} Y_{i+1} + P_Ω(M − X_{i+1} Y_{i+1}),

where X^† is the Moore-Penrose pseudoinverse of the matrix X. The running time of the alternating minimization algorithms is very short due to the following reasons: 1) the SVD computation is unnecessary and 2) the matrices to be inverted are smaller than those in the greedy algorithms. While the inversion of a huge matrix (of size |Ω| × O(r)) is required in greedy algorithms (see (29)), alternating minimization only requires the pseudo-inversion of X and Y (of size n_1 × r and r × n_2, respectively). Indeed, the computational complexity of this approach is O(r|Ω| + r²n_1 + r²n_2), which is much smaller than that of SVT and ADMiRA when r ≪ min(n_1, n_2). Also, the number of iterations of ASD and LMaFit to achieve an ε-approximation is O(log(1/ε)) BIB007 , . It has been shown that alternating minimization techniques are simple to implement and require only a small amount of memory BIB006 . A major drawback of these approaches is that they might converge to a local optimum.
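Below is a minimal NumPy sketch of the ASD updates above (the LMaFit iteration can be implemented analogously using np.linalg.pinv); the random initialization, iteration count, and the small constant guarding against division by zero are our illustrative choices.

```python
import numpy as np

def asd(M_obs, mask, r, n_iter=500, seed=0):
    """Alternating steepest descent for min_{X,Y} ||P_Omega(M) - P_Omega(XY)||_F^2."""
    rng = np.random.default_rng(seed)
    n1, n2 = M_obs.shape
    X = rng.standard_normal((n1, r))
    Y = rng.standard_normal((r, n2))
    for _ in range(n_iter):
        R = mask * (M_obs - X @ Y)       # residual on Omega
        Gx = -R @ Y.T                    # gradient of f with respect to X
        tx = np.sum(Gx**2) / (np.sum((mask * (Gx @ Y))**2) + 1e-12)
        X = X - tx * Gx                  # exact line search along -Gx
        R = mask * (M_obs - X @ Y)
        Gy = -X.T @ R                    # gradient of f with respect to Y
        ty = np.sum(Gy**2) / (np.sum((mask * (X @ Gy))**2) + 1e-12)
        Y = Y - ty * Gy
    return X @ Y
```

Each iteration involves only a few masked matrix products with factors of size n_1 × r and r × n_2, which is the source of the O(r|Ω| + r²n_1 + r²n_2) per-iteration complexity mentioned above.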
|
Low-Rank Matrix Completion: A Contemporary Survey <s> 3) OPTIMIZATION OVER SMOOTH RIEMANNIAN MANIFOLD <s> Multidimensional scaling can be considered as involving three basic steps. In the first step, a scale of comparative distances between all pairs of stimuli is obtained. This scale is analogous to the scale of stimuli obtained in the traditional paired comparisons methods. In this scale, however, instead of locating each stimulus-object on a given continuum, the distances between each pair of stimuli are located on a distance continuum. As in paired comparisons, the procedures for obtaining a scale of comparative distances leave the true zero point undetermined. Hence, a comparative distance is not a distance in the usual sense of the term, but is a distance minus an unknown constant. The second step involves estimating this unknown constant. When the unknown constant is obtained, the comparative distances can be converted into absolute distances. In the third step, the dimensionality of the psychological space necessary to account for these absolute distances is determined, and the projections of stimuli on axes of this space are obtained. A set of analytical procedures was developed for each of the three steps given above, including a least-squares solution for obtaining comparative distances by the complete method of triads, two practical methods for estimating the additive constant, and an extension of Young and Householder's Euclidean model to include procedures for obtaining the projections of stimuli on axes from fallible absolute distances. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 3) OPTIMIZATION OVER SMOOTH RIEMANNIAN MANIFOLD <s> Contents: Matrix Eigenvalue Methods.- Double Bracket Isospectral Flows.- Singular Value Decomposition.- Linear Programming.- Approximation and Control.- Balanced Matrix Factorizations.- Invariant Theory and System Balancing.- Balancing via Gradient Flows.- Sensitivity Optimization.- Linear Algebra.- Dynamical Systems.- Global Analysis. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 3) OPTIMIZATION OVER SMOOTH RIEMANNIAN MANIFOLD <s> An available pressurized air source, such as an inflated tire, connectable by appropriate conduit means, with flow control and pressure regulation provisions, through an air transmitter or face mask, to the breathing passages of a passenger in a submerged land vehicle to either provide emergency breathing air for the passenger, or to fill an inflatable and portable air pack which the passenger may leave the vehicle with, or both. <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 3) OPTIMIZATION OVER SMOOTH RIEMANNIAN MANIFOLD <s> Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. 
The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra. Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists. <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 3) OPTIMIZATION OVER SMOOTH RIEMANNIAN MANIFOLD <s> A new algorithm, termed subspace evolution and transfer (SET), is proposed for solving the consistent matrix completion problem. In this setting, one is given a subset of the entries of a low-rank matrix, and asked to find one low-rank matrix consistent with the given observations. We show that this problem can be solved by searching for a column space that matches the observations. The corresponding algorithm consists of two parts — subspace evolution and subspace transfer. In the evolution part, we use a line search procedure to refine the column space. However, line search is not guaranteed to converge, as there may exist barriers along the search path that prevent the algorithm from reaching a global optimum. To address this problem, in the transfer part, we design mechanisms to detect barriers and transfer the estimated column space from one side of the barrier to the another. The SET algorithm exhibits excellent empirical performance for very low-rank matrices. <s> BIB005 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 3) OPTIMIZATION OVER SMOOTH RIEMANNIAN MANIFOLD <s> Motivated by the problem of learning a linear regression model whose parameter is a large fixed-rank non-symmetric matrix, we consider the optimization of a smooth cost function defined on the set of fixed-rank matrices. We adopt the geometric framework of optimization on Riemannian quotient manifolds. We study the underlying geometries of several well-known fixed-rank matrix factorizations and then exploit the Riemannian quotient geometry of the search space in the design of a class of gradient descent and trust-region algorithms. The proposed algorithms generalize our previous results on fixed-rank symmetric positive semidefinite matrices, apply to a broad range of applications, scale to high-dimensional problems and confer a geometric basis to recent contributions on the learning of fixed-rank non-symmetric matrices. We make connections with existing algorithms in the context of low-rank matrix completion and discuss relative usefulness of the proposed framework. Numerical experiments suggest that the proposed algorithms compete with the state-of-the-art and that manifold optimization offers an effective and versatile framework for the design of machine learning algorithms that learn a fixed-rank matrix. <s> BIB006 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 3) OPTIMIZATION OVER SMOOTH RIEMANNIAN MANIFOLD <s> The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. 
We propose a new algorithm for matrix completion that minimizes the least-square distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical nonlinear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the necessary objects from differential geometry necessary to perform optimization over this low-rank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as retraction and how vector transport lets us obtain the conjugate search directions. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach... <s> BIB007
|
In many applications where the rank of a matrix is known a priori (i.e., rank(M) = r), one can strengthen the constraint of (25) by defining the feasible set, denoted by F, as

F = {X ∈ R^{n_1×n_2} : rank(X) = r}.

Note that F is not a vector space BIB001 and thus conventional optimization techniques cannot be used directly to solve the problem defined over F. While this is bad news, a remedy is that F is a smooth Riemannian manifold BIB006 , BIB002 . Roughly speaking, a smooth manifold is a generalization of R^{n_1×n_2} on which a notion of differentiability exists. For a more rigorous definition, see, e.g., BIB004 , . A smooth manifold equipped with an inner product, often called a Riemannian metric, forms a smooth Riemannian manifold. Since a smooth Riemannian manifold is a differentiable structure equipped with an inner product, one can use all the ingredients needed to solve an optimization problem with a quadratic cost function, such as the Riemannian gradient, Hessian matrix, exponential map, and parallel translation BIB004 . Therefore, optimization techniques in R^{n_1×n_2} (e.g., steepest descent, Newton method, conjugate gradient method) can be used to solve (25) over the smooth Riemannian manifold F. In recent years, many efforts have been made to solve matrix completion over smooth Riemannian manifolds. These works are classified by their specific choice of Riemannian manifold structure. One well-known approach is to solve (25) over the Grassmann manifold of orthogonal matrices BIB005 . In this approach, the feasible set can be expressed as F = {QR^T : Q^T Q = I, Q ∈ R^{n_1×r}, R ∈ R^{n_2×r}}, and thus solving (25) amounts to finding an n_1 × r orthonormal matrix Q solving

min_Q min_R ||P_Ω(M) − P_Ω(QR^T)||_F².    (39)

In BIB005 , an approach to solve (39) over the Grassmann manifold has been proposed. Recently, it has been shown that the original matrix can be reconstructed by unconstrained optimization over the smooth Riemannian manifold F BIB007 . Often, F is expressed using the singular value decomposition as

F = {UΣV^T : U ∈ R^{n_1×r}, V ∈ R^{n_2×r}, U^T U = V^T V = I_r, Σ = diag(σ_1, ..., σ_r), σ_1 ≥ ... ≥ σ_r > 0}.

The FNM problem (25) can then be reformulated as an unconstrained optimization over F:

min_{X ∈ F} ||P_Ω(M) − P_Ω(X)||_F².

One can easily obtain closed-form expressions of the required ingredients such as the tangent spaces, Riemannian metric, Riemannian gradient, and Hessian matrix for this unconstrained optimization BIB002 , BIB004 , . In fact, the major benefits of the Riemannian optimization-based LRMC techniques are the simplicity of implementation and the fast convergence. Similar to ASD, the computational complexity per iteration of these techniques is O(r|Ω| + r²n_1 + r²n_2), and they require O(log(1/ε)) iterations to achieve an ε-approximation solution BIB007 .
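To give a flavor of these methods, the sketch below performs a gradient step on the FNM cost followed by a retraction onto F implemented as a rank-r truncated SVD. This simplified iteration is closer to singular value projection than to the full Riemannian conjugate gradient method of BIB007 (which also projects the gradient onto the tangent space and uses conjugate directions); the step size 1/p (p being the sampling ratio) and all names are our illustrative choices.

```python
import numpy as np

def svd_retract(X, r):
    """Map X to the rank-r manifold via its best rank-r approximation."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def fixed_rank_descent(M_obs, mask, r, n_iter=300):
    """Simplified fixed-rank iteration: Euclidean gradient step + SVD retraction."""
    p = mask.mean()                          # sampling ratio |Omega|/(n1*n2)
    X = svd_retract(mask * M_obs, r)
    for _ in range(n_iter):
        G = mask * (X - M_obs)               # Euclidean gradient of the FNM cost
        X = svd_retract(X - G / p, r)        # descend, then retract back to rank r
    return X
```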
|
Low-Rank Matrix Completion: A Contemporary Survey <s> 4) TRUNCATED NNM <s> The alternating direction method is one of the attractive approaches for solving linearly constrained separate monotone variational inequalities. Experience on applications has shown that the number of iterations depends significantly on the penalty parameter for the system of linear constraint equations. While the penalty parameter is a constant in the original method, in this paper we present a modified alternating direction method that adjusts the penalty parameter per iteration based on the iterate message. Preliminary numerical tests show that the self-adaptive adjustment technique is effective in practice. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 4) TRUNCATED NNM <s> Knowledge of accurate and timely channel state information (CSI) at the transmitter is becoming increasingly important in wireless communication systems. While it is often assumed that the receiver (whether base station or mobile) needs to know the channel for accurate power control, scheduling, and data demodulation, it is now known that the transmitter (especially the base station) can also benefit greatly from this information. For example, recent results in multiantenna multiuser systems show that large throughput gains are possible when the base station uses multiple antennas and a known channel to transmit distinct messages simultaneously and selectively to many single-antenna users. In time-division duplex systems, where the base station and mobiles share the same frequency band for transmission, the base station can exploit reciprocity to obtain the forward channel from pilots received over the reverse channel. Frequency-division duplex systems are more difficult because the base station transmits and receives on different frequencies and therefore cannot use the received pilot to infer anything about the multiantenna transmit channel. Nevertheless, we show that the time occupied in frequency-duplex CSI transfer is generally less than one might expect and falls as the number of antennas increases. Thus, although the total amount of channel information increases with the number of antennas at the base station, the burden of learning this information at the base station paradoxically decreases. Thus, the advantages of having more antennas at the base station extend from having network gains to learning the channel information. We quantify our gains using linear analog modulation which avoids digitizing and coding the CSI and therefore can convey information very rapidly and can be readily analyzed. The old paradigm that it is not worth the effort to learn channel information at the transmitter should be revisited since the effort decreases and the gain increases with the number of antennas. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 4) TRUNCATED NNM <s> We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. 
In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude. <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 4) TRUNCATED NNM <s> Many problems can be characterized by the task of recovering the low-rank and sparse components of a given matrix. Recently, it was discovered that this nondeterministic polynomial-time hard (NP-hard) task can be well accomplished, both theoretically and numerically, via heuristically solving a convex relaxation problem where the widely acknowledged nuclear norm and $l_1$ norm are utilized to induce low-rank and sparsity. This paper studies the recovery task in the general settings that only a fraction of entries of the matrix can be observed and the observation is corrupted by both impulsive and Gaussian noise. We show that the resulting model falls into the applicable scope of the classical augmented Lagrangian method. Moreover, the separable structure of the new model enables us to solve the involved subproblems more efficiently by splitting the augmented Lagrangian function. Hence, some splitting numerical algorithms are developed for solving the new recovery model. Some preliminary numerical experiments verify that these augmented-Lagrangian-based splitting algorithms are easily implementable and surprisingly efficient for tackling the new recovery model. <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 4) TRUNCATED NNM <s> Many machine learning and signal processing problems can be formulated as linearly constrained convex programs, which could be efficiently solved by the alternating direction method (ADM). However, usually the subproblems in ADM are easily solvable only when the linear mappings in the constraints are identities. To address this issue, we propose a linearized ADM (LADM) method by linearizing the quadratic penalty term and adding a proximal term when solving the sub-problems. For fast convergence, we also allow the penalty to change adaptively according a novel update rule. We prove the global convergence of LADM with adaptive penalty (LADMAP). As an example, we apply LADMAP to solve low-rank representation (LRR), which is an important subspace clustering technique yet suffers from high computation cost. By combining LADMAP with a skinny SVD representation technique, we are able to reduce the complexity O(n3) of the original ADM based method to O(rn2), where r and n are the rank and size of the representation matrix, respectively, hence making LRR possible for large scale applications. Numerical experiments verify that for LRR our LADMAP based methods are much faster than state-of-the-art algorithms. <s> BIB005 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 4) TRUNCATED NNM <s> Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex relaxation. 
One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated in practice. In this paper, we propose to achieve a better approximation to the rank of matrix by truncated nuclear norm, which is given by the nuclear norm subtracted by the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the Truncated Nuclear Norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-AGPL applies the accelerated proximal gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results of the proposed algorithms in comparison to the state-of-the-art matrix completion algorithms on both synthetic and real visual datasets. <s> BIB006 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 4) TRUNCATED NNM <s> We propose a DC (Difference of two Convex functions) formulation approach for sparse optimization problems having a cardinality or rank constraint. With the largest-k norm, an exact DC representation of the cardinality constraint is provided. We then transform the cardinality-constrained problem into a penalty function form and derive exact penalty parameter values for some optimization problems, especially for quadratic minimization problems which often appear in practice. A DC Algorithm (DCA) is presented, where the dual step at each iteration can be efficiently carried out due to the accessible subgradient of the largest-k norm. Furthermore, we can solve each DCA subproblem in linear time via a soft thresholding operation if there are no additional constraints. The framework is extended to the rank-constrained problem as well as the cardinality- and the rank-minimization problems. Numerical experiments demonstrate the efficiency of the proposed DCA in comparison with existing methods which have other penalty terms. <s> BIB007
|
Truncated NNM is a variation of the NNM-based technique requiring the rank information r. (Although truncated NNM is a variant of NNM, we put it into the second category since it exploits the rank information of a low-rank matrix.) While the NNM technique takes into account all the singular values of a desired matrix, truncated NNM considers only the n − r smallest singular values BIB006 . Specifically, truncated NNM finds a solution to

min_X Σ_{i=r+1}^{n} σ_i(X)  subject to  P_Ω(X) = P_Ω(M),    (42)

where σ_i(X) is the i-th largest singular value of X. Noting that

Σ_{i=1}^{r} σ_i(X) = max_{AA^T=I, BB^T=I} tr(AXB^T)

with A ∈ R^{r×n_1} and B ∈ R^{r×n_2}, the problem (42) can be reformulated as

min_X ||X||_* − max_{AA^T=I, BB^T=I} tr(AXB^T)  subject to  P_Ω(X) = P_Ω(M).    (45)

This problem can be solved in an iterative way. Specifically, starting from X_0 = P_Ω(M), truncated NNM updates X_i by solving BIB006

min_X ||X||_* − tr(U_{i−1}^T X V_{i−1})  subject to  P_Ω(X) = P_Ω(M),    (46)

where U_{i−1}, V_{i−1} ∈ R^{n×r} are the matrices of the r leading left and right singular vectors of X_{i−1}, respectively. We note that the approach in (46) has two main advantages: 1) the rank information of the desired matrix can be incorporated and 2) various gradient-based techniques including the alternating direction method of multipliers (ADMM) BIB004 , BIB005 , ADMM with an adaptive penalty (ADMMAP) BIB001 , and the accelerated proximal gradient line search method (APGL) BIB003 can be employed. Note also that the dominant operation is the truncated SVD, whose complexity is O(rn_1 n_2); this is much smaller than that of the NNM technique (see Table 5 ) as long as r ≪ min(n_1, n_2). Similar to SVT, the iteration complexity of truncated NNM to achieve an ε-approximation is O(1/√ε) BIB006 . Alternatively, an algorithm based on the difference of two convex functions (DC) can be used to solve (45) BIB007 . In Table 4 , we summarize the truncated NNM algorithm.
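As an illustration of how (46) can be handled, the following NumPy sketch implements an ADMM iteration in the spirit of TNNR-ADMM BIB006 ; the splitting, the penalty ρ, and the loop counts are our illustrative choices rather than the reference implementation.

```python
import numpy as np

def svt(A, tau):
    """Singular value soft-thresholding, the proximal operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def tnnr_admm(M_obs, mask, r, rho=1.0, outer=10, inner=50):
    """Truncated NNM: outer loop updates (U, V), inner ADMM loop solves (46)."""
    X = mask * M_obs
    for _ in range(outer):
        U, _, Vt = np.linalg.svd(X, full_matrices=False)
        UVt = U[:, :r] @ Vt[:r, :]                 # U_{i-1} V_{i-1}^T term of (46)
        W, Lam = X.copy(), np.zeros_like(X)
        for _ in range(inner):
            X = svt(W - Lam / rho, 1.0 / rho)      # proximal step on ||X||_*
            W = X + (UVt + Lam) / rho              # ascent on the trace term
            W = np.where(mask, M_obs, W)           # re-impose observed entries
            Lam = Lam + rho * (X - W)              # dual variable update
    return X
```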
|
Low-Rank Matrix Completion: A Contemporary Survey <s> 1) SPARSITY OF OBSERVED ENTRIES <s> We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? ::: ::: We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys $$m\ge C\,n^{1.2}r\log n$$ for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) SPARSITY OF OBSERVED ENTRIES <s> This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low rank matrix. These results improve on prior work by Candes and Recht, Candes and Tao, and Keshavan, Montanari, and Oh. The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) SPARSITY OF OBSERVED ENTRIES <s> Motivated by the problem of learning a linear regression model whose parameter is a large fixed-rank non-symmetric matrix, we consider the optimization of a smooth cost function defined on the set of fixed-rank matrices. We adopt the geometric framework of optimization on Riemannian quotient manifolds. We study the underlying geometries of several well-known fixed-rank matrix factorizations and then exploit the Riemannian quotient geometry of the search space in the design of a class of gradient descent and trust-region algorithms. The proposed algorithms generalize our previous results on fixed-rank symmetric positive semidefinite matrices, apply to a broad range of applications, scale to high-dimensional problems and confer a geometric basis to recent contributions on the learning of fixed-rank non-symmetric matrices. We make connections with existing algorithms in the context of low-rank matrix completion and discuss relative usefulness of the proposed framework. Numerical experiments suggest that the proposed algorithms compete with the state-of-the-art and that manifold optimization offers an effective and versatile framework for the design of machine learning algorithms that learn a fixed-rank matrix. 
<s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) SPARSITY OF OBSERVED ENTRIES <s> Presents a list of articles published by the IEEE Signal Processing Society (SPS) that ranked among the top 100 most downloaded IEEE Xplore articles. <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) SPARSITY OF OBSERVED ENTRIES <s> The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices. <s> BIB005 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 1) SPARSITY OF OBSERVED ENTRIES <s> Channel state information at the transmitter (CSIT) is essential for frequency-division duplexing (FDD) massive MIMO systems, but conventional solutions involve overwhelming overhead both for downlink channel training and uplink channel feedback. In this letter, we propose a joint CSIT acquisition scheme to reduce the overhead. Particularly, unlike conventional schemes where each user individually estimates its own channel and then feed it back to the base station (BS), we propose that all scheduled users directly feed back the pilot observation to the BS, and then joint CSIT recovery can be realized at the BS. We further formulate the joint CSIT recovery problem as a low-rank matrix completion problem by utilizing the low-rank property of the massive MIMO channel matrix, which is caused by the correlation among users. Finally, we propose a hybrid low-rank matrix completion algorithm based on the singular value projection to solve this problem. Simulations demonstrate that the proposed scheme can provide accurate CSIT with lower overhead than conventional schemes. <s> BIB006
|
Sparsity expresses the idea that when a matrix has a low-rank property, it can be recovered using only a small number of observed entries. A natural question arising from this is: how many entries do we need to observe for the accurate recovery of the matrix? In order to answer this question, we need the notion of the degree of freedom (DOF). The DOF of a matrix is the number of freely chosen variables in the matrix. One can easily see that the DOF of the rank-one matrix in (1) is 3 since one entry can be determined after observing three. As another example, consider the following rank-one matrix whose second column is three times its first column:

M = [ 1  3  2  4
      2  6  4  8
      3  9  6 12
      4 12  8 16 ].

One can easily see that if we observe all entries of one column and one row, then the rest can be determined by a simple linear relationship between them since M is a rank-one matrix. Specifically, if we observe the first row and the first column, then the first and second columns differ by a factor of three, so as long as we know one entry in the second column, the rest can be recovered. Thus, the DOF of M is 4 + 4 − 1 = 7. The following lemma generalizes this observation.

Lemma 1: The DOF of a square n × n matrix with rank r is 2nr − r². Also, the DOF of an n_1 × n_2 matrix is (n_1 + n_2)r − r².

Proof: Since the rank of the matrix is r, we can freely choose values for all entries of the first r columns, resulting in nr degrees of freedom for these columns. Once r independent columns, say m_1, ..., m_r, are constructed, each of the remaining n − r columns is expressed as a linear combination of the first r columns (e.g., m_{r+1} = α_1 m_1 + ... + α_r m_r), so that r linear coefficients (α_1, ..., α_r) can be freely chosen in each of these columns. By adding nr and (n − r)r, we obtain the desired result. The generalization to an n_1 × n_2 matrix is straightforward.

This lemma says that if n is large and r is small enough (e.g., r = O(1)), the essential information in a matrix is just in the order of n, DOF = O(n), which is clearly much smaller than the total number of entries of the matrix. Interestingly, the DOF is the minimum number of observed entries required for the recovery of a matrix. If this condition is violated, that is, if the number of observed entries is less than the DOF (i.e., m < 2nr − r²), no algorithm whatsoever can recover the matrix. In Fig. 5 , we illustrate how to recover the matrix when the number of observed entries equals the DOF. In this figure, we assume that the blue colored entries are observed (since we observe the first r rows and columns, we have 2nr − r² observations in total). In a nutshell, the unknown entries of the matrix are found in a two-step process. First, we identify the linear relationship between the first r columns and the rest. For example, the (r + 1)-th column can be expressed as a linear combination of the first r columns. That is,

m_{r+1} = α_1 m_1 + ... + α_r m_r.    (48)

Since the first r entries of m_1, ..., m_{r+1} are observed (see Fig. 5(a) ), we have r unknowns (α_1, ..., α_r) and r equations, so we can identify the linear coefficients α_1, ..., α_r with the computational cost O(r³) of an r × r matrix inversion. Once these coefficients are identified, we can recover the unknown entries m_{r+1,r+1}, ..., m_{r+1,n} of m_{r+1} using the linear relationship in (48) (see Fig. 5(b) ). By repeating this step for the rest of the columns, we can identify all unknown entries with O(rn²) computational complexity. (For each unknown entry, we need r multiplications and r − 1 additions; since the number of unknown entries is (n − r)², this costs (2r − 1)(n − r)². Recalling that O(r³) is the cost of computing (α_1, ..., α_r) in (48), the total cost is O(r³ + (2r − 1)(n − r)²) = O(rn²).) Now, an astute reader might notice that this strategy will not work if even one entry of a column (or row) is unobserved. As illustrated in Fig.
6, if only one entry in the r-th row, say the (r, l)-th entry, is unobserved, then one cannot recover the l-th column, simply because the matrix in Fig. 6 cannot be converted to the matrix form in Fig. 5(b). It is clear from this discussion that a measurement size equal to the DOF is not enough in most cases; in fact, it is just a necessary condition for the accurate recovery of a rank-r matrix. This may seem like depressing news. However, the DOF is in any case important since it is a fundamental limit (lower bound) on the number of observed entries required to ensure the exact recovery of the matrix. Recent results show that the DOF is not much different from the number of measurements ensuring the recovery of the matrix BIB001 , BIB002 , BIB005 .
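The two-step recovery procedure described above can be written in a few lines. The sketch below assumes, as a simplification on our part, that the leading r × r block of the matrix is invertible; the role of such conditions is exactly what the coherence discussion below formalizes.

```python
import numpy as np

def complete_from_cross(M_part, r):
    """Recover a rank-r matrix observed on its first r rows and first r columns
    (the DOF-counting scenario of Fig. 5); unobserved entries of M_part are ignored."""
    n1, n2 = M_part.shape
    X = M_part.copy()
    B = M_part[:r, :r]                             # observed r x r corner block
    for l in range(r, n2):
        # coefficients expressing column l through the first r columns, cf. (48)
        alpha = np.linalg.solve(B, M_part[:r, l])  # O(r^3) per column
        X[:, l] = M_part[:, :r] @ alpha            # fill the unknown entries
    return X

# Example: a random rank-r matrix observed on its first r rows and columns.
rng = np.random.default_rng(1)
n, r = 8, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
M_part = np.zeros_like(M)
M_part[:r, :], M_part[:, :r] = M[:r, :], M[:, :r]  # 2nr - r^2 observed entries
assert np.allclose(complete_from_cross(M_part, r), M)
```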
|
Low-Rank Matrix Completion: A Contemporary Survey <s> 2) COHERENCE <s> This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low rank matrix. These results improve on prior work by Candes and Recht, Candes and Tao, and Keshavan, Montanari, and Oh. The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) COHERENCE <s> We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? ::: ::: We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys $$m\ge C\,n^{1.2}r\log n$$ for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) COHERENCE <s> A new algorithm, termed subspace evolution and transfer (SET), is proposed for solving the consistent matrix completion problem. In this setting, one is given a subset of the entries of a low-rank matrix, and asked to find one low-rank matrix consistent with the given observations. We show that this problem can be solved by searching for a column space that matches the observations. The corresponding algorithm consists of two parts — subspace evolution and subspace transfer. In the evolution part, we use a line search procedure to refine the column space. However, line search is not guaranteed to converge, as there may exist barriers along the search path that prevent the algorithm from reaching a global optimum. To address this problem, in the transfer part, we design mechanisms to detect barriers and transfer the estimated column space from one side of the barrier to the another. The SET algorithm exhibits excellent empirical performance for very low-rank matrices. <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) COHERENCE <s> This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. 
This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible; but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. ::: This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank r exactly by any method whatsoever (the information theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, that with minimum nuclear norm. As an example, we show that on the order of nr log(n) samples are needed to recover a random n x n matrix of rank r by any method, and to be sure, nuclear norm minimization succeeds as soon as the number of entries is of the form nr polylog(n). <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) COHERENCE <s> The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices. <s> BIB005 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) COHERENCE <s> The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a new algorithm for matrix completion that minimizes the least-square distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical nonlinear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the necessary objects from differential geometry necessary to perform optimization over this low-rank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as retraction and how vector transport lets us obtain the conjugate search directions. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach... <s> BIB006 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) COHERENCE <s> Channel state information at the transmitter (CSIT) is essential for frequency-division duplexing (FDD) massive MIMO systems, but conventional solutions involve overwhelming overhead both for downlink channel training and uplink channel feedback. 
In this letter, we propose a joint CSIT acquisition scheme to reduce the overhead. Particularly, unlike conventional schemes where each user individually estimates its own channel and then feed it back to the base station (BS), we propose that all scheduled users directly feed back the pilot observation to the BS, and then joint CSIT recovery can be realized at the BS. We further formulate the joint CSIT recovery problem as a low-rank matrix completion problem by utilizing the low-rank property of the massive MIMO channel matrix, which is caused by the correlation among users. Finally, we propose a hybrid low-rank matrix completion algorithm based on the singular value projection to solve this problem. Simulations demonstrate that the proposed scheme can provide accurate CSIT with lower overhead than conventional schemes. <s> BIB007
|
If nonzero elements of a matrix are concentrated in a certain region, we generally need a large number of observations to recover the matrix. On the other hand, if the matrix is spread out widely, then the matrix can be recovered with a relatively small number of entries. For example, consider the following two rank-one matrices in $\mathbb{R}^{n \times n}$:
$$\mathbf{M}_1 = \begin{bmatrix} 1 & 1 & 0 & \cdots & 0 \\ 1 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}, \qquad \mathbf{M}_2 = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1 \end{bmatrix}.$$
The matrix $\mathbf{M}_1$ has only four nonzero entries at the top-left corner. Suppose n is large, say n = 1000, and all entries but the four elements in the top-left corner are observed (more than 99.99% of the entries are known). In this case, even though the rank of the matrix is just one, there is no way to recover this matrix since the information-bearing entries are missing. This tells us that although the rank of a matrix is very small, one might not recover it if the nonzero entries of the matrix are concentrated in a certain area. In contrast to the matrix $\mathbf{M}_1$, one can accurately recover the matrix $\mathbf{M}_2$ with only $2n-1$ (= DOF) known entries. In other words, one row and one column are enough to recover $\mathbf{M}_2$. (In BIB001 , it has been shown that the required number of entries to recover the matrix using the nuclear-norm minimization is on the order of $n^{1.2}$ when the rank is $O(1)$.) One can deduce from this example that the spread of observed entries is important for the identification of unknown entries. In order to quantify this, we need to measure the concentration of a matrix. Since the matrix has a two-dimensional structure, we need to check the concentration in both the row and column directions. This can be done by checking the concentration in the left and right singular vectors. Recall that the SVD of a matrix is
$$\mathbf{M} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^T = \sum_{i=1}^{r} \sigma_i \mathbf{u}_i \mathbf{v}_i^T,$$
where $\mathbf{U} = [\mathbf{u}_1 \cdots \mathbf{u}_r]$ and $\mathbf{V} = [\mathbf{v}_1 \cdots \mathbf{v}_r]$ are the matrices constructed by the left and right singular vectors, respectively, and $\boldsymbol{\Sigma}$ is the diagonal matrix whose diagonal entries are $\sigma_i$. From this decomposition, we see that the concentration in the vertical direction (concentration in the row) is determined by $\mathbf{u}_i$ and that in the horizontal direction (concentration in the column) is determined by $\mathbf{v}_i$. For example, if one of the standard basis vectors $\mathbf{e}_i$, say $\mathbf{e}_1 = [1\ 0\ \cdots\ 0]^T$, lies in the space spanned by $\mathbf{u}_1, \cdots, \mathbf{u}_r$ while the others ($\mathbf{e}_2, \mathbf{e}_3, \cdots$) are orthogonal to this space, then it is clear that the nonzero entries of the matrix are only in the first row. In this case, clearly one cannot infer the entries of the first row from the sampling of the other rows. That is, it is not possible to recover the matrix without observing the entire first row. The coherence, a measure of concentration in a matrix, is formally defined as BIB001
$$\mu(\mathbf{U}) = \frac{n}{r} \max_{1 \le i \le n} \|\mathbf{P}_U \mathbf{e}_i\|_2^2,$$
where $\mathbf{e}_i$ is the standard basis vector and $\mathbf{P}_U$ is the projection onto the range space of $\mathbf{U}$. Since the columns of $\mathbf{U} = [\mathbf{u}_1 \cdots \mathbf{u}_r]$ are orthonormal, we have
$$\sum_{i=1}^{n} \|\mathbf{P}_U \mathbf{e}_i\|_2^2 = \sum_{i=1}^{n} \mathbf{e}_i^T \mathbf{P}_U \mathbf{e}_i = \operatorname{tr}(\mathbf{P}_U) = \sum_{j=1}^{r} \sum_{i=1}^{n} |u_{ij}|^2 = r,$$
where the first equality is due to the idempotency of $\mathbf{P}_U$ (i.e., $\mathbf{P}_U^T \mathbf{P}_U = \mathbf{P}_U$) and the last equality is because $\sum_{i=1}^{n} |u_{ij}|^2 = 1$. It follows that
$$1 \le \mu(\mathbf{U}) \le \frac{n}{r}. \tag{51}$$
Coherence is maximized when the nonzero entries of a matrix are concentrated in a row (or column). For example, consider the matrix whose nonzero entries are concentrated on the first row:
$$\mathbf{M} = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$
Then $\mathbf{U} = [1\ 0\ 0]^T$, and thus $\|\mathbf{P}_U \mathbf{e}_1\|_2^2 = 1$ and $\|\mathbf{P}_U \mathbf{e}_2\|_2^2 = \|\mathbf{P}_U \mathbf{e}_3\|_2^2 = 0$. As shown in Fig. 7(a), the standard basis vector $\mathbf{e}_1$ lies in the space spanned by $\mathbf{U}$ while the others are orthogonal to this space, so that the maximum coherence is achieved ($\max_i \|\mathbf{P}_U \mathbf{e}_i\|_2^2 = 1$ and $\mu(\mathbf{U}) = 3$). On the other hand, if the nonzero entries are spread out evenly, e.g., $\mathbf{U} = [\tfrac{1}{\sqrt{3}}\ \tfrac{1}{\sqrt{3}}\ \tfrac{1}{\sqrt{3}}]^T$, then, as illustrated in Fig. 7(b), $\|\mathbf{P}_U \mathbf{e}_i\|_2^2$ is the same for all standard basis vectors $\mathbf{e}_i$, achieving the lower bound in (51) and the minimum coherence ($\max_i \|\mathbf{P}_U \mathbf{e}_i\|_2^2 = \frac{1}{3}$ and $\mu(\mathbf{U}) = 1$). As discussed in BIB007 , the number of measurements to recover the low-rank matrix is proportional to the coherence of the matrix BIB002 , BIB004 , BIB001 .
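For illustration, the coherence $\mu(\mathbf{U})$ defined above can be computed directly from the SVD. The following minimal sketch (our own, with illustrative values) reproduces the two extreme cases just discussed.

```python
import numpy as np

# Minimal sketch (illustrative): compute the coherence mu(U) of the column
# space of a matrix from its SVD, as defined above.
def coherence(M, r):
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    U = U[:, :r]                      # left singular vectors spanning col(M)
    n = M.shape[0]
    # ||P_U e_i||_2^2 equals the squared norm of the i-th row of U (P_U = U U^T)
    leverage = np.sum(U**2, axis=1)
    return n / r * leverage.max()

M_row = np.outer([1.0, 0, 0], [1, 1, 1])       # nonzeros in one row only
M_flat = np.outer([1.0, 1, 1], [1, 1, 1]) / 3  # evenly spread rank-one matrix
print(coherence(M_row, 1))    # 3.0 = n  (maximum coherence)
print(coherence(M_flat, 1))   # 1.0      (minimum coherence)
```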
|
Low-Rank Matrix Completion: A Contemporary Survey <s> B. WORKING WITH DIFFERENT TYPES OF LOW-RANK MATRICES <s> This paper deals with the Riemannian geometry of the set of symmetric positive semidefinite matrices of fixed rank. This set is studied as an embedded submanifold of the real matrices equipped with the usual Euclidean metric. With this structure, we derive expressions of the tangent space and geodesics of the manifold, suitable for efficient numerical computations. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> B. WORKING WITH DIFFERENT TYPES OF LOW-RANK MATRICES <s> Matrix completion models are among the most common formulations of recommender systems. Recent works have showed a boost of performance of these techniques when introducing the pairwise relationships between users/items in the form of graphs, and imposing smoothness priors on these graphs. However, such techniques do not fully exploit the local stationarity structures of user/item graphs, and the number of parameters to learn is linear w.r.t. the number of users and items. We propose a novel approach to overcome these limitations by using geometric deep learning on graphs. Our matrix completion architecture combines graph convolutional neural networks and recurrent neural networks to learn meaningful statistical graph-structured patterns and the non-linear diffusion process that generates the known ratings. This neural network system requires a constant number of parameters independent of the matrix size. We apply our method on both synthetic and real datasets, showing that it outperforms state-of-the-art techniques. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> B. WORKING WITH DIFFERENT TYPES OF LOW-RANK MATRICES <s> Location awareness, providing the ability to identify the location of sensor, machine, vehicle, and wearable device, is a rapidly growing trend of hyper-connected society and one of the key ingredients for the Internet of Things (IoT) era. In order to make a proper reaction to the collected information from things , location information of things should be available at the data center. One challenge for the IoT networks is to identify the location map of whole nodes from partially observed distance information. The aim of this paper is to present an algorithm to recover the Euclidean distance matrix (and eventually the location map) from partially observed distance information. By casting the low-rank matrix completion problem into the unconstrained minimization problem in a Riemannian manifold in which a notion of differentiability can be defined, we solve the low-rank matrix completion problem using a modified conjugate gradient algorithm. From the convergence analysis, we show that localization in Riemannian manifold using conjugate gradient (LRM-CG) converges linearly to the original Euclidean distance matrix under the extended Wolfe’s conditions. From the numerical experiments, we demonstrate that the proposed method, called LRM-CG, is effective in recovering the Euclidean distance matrix. <s> BIB003
|
In many practical situations where the matrix has a special structure, we want to make the most of that structure to maximize the gains in performance and computational complexity. In this subsection, we discuss several such cases, including LRMC of the PSD matrix BIB001 , the Euclidean distance matrix BIB003 , and the recommendation matrix BIB002 , and describe how the special structure can be exploited in the algorithm design.
|
Low-Rank Matrix Completion: A Contemporary Survey <s> 2) EUCLIDEAN DISTANCE MATRIX COMPLETION <s> Contents: Matrix Eigenvalue Methods.- Double Bracket Isospectral Flows.- Singular Value Decomposition.- Linear Programming.- Approximation and Control.- Balanced Matrix Factorizations.- Invariant Theory and System Balancing.- Balancing via Gradient Flows.- Sensitivity Optimization.- Linear Algebra.- Dynamical Systems.- Global Analysis. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) EUCLIDEAN DISTANCE MATRIX COMPLETION <s> Optimization is the science of making a best choice in the face of conflicting requirements. Any convex optimization problem has geometric interpretation. If a given optimization problem can be transformed to a convex equivalent, then this interpretive benefit is acquired. That is a powerful attraction: the ability to visualize geometry of an optimization problem. Conversely, recent advances in geometry hold convex optimization within their proofs' core. This book is about convex optimization, convex geometry (with particular attention to distance geometry), geometrical problems, and problems that can be transformed into geometrical problems. Euclidean distance geometry is, fundamentally, a determination of point conformation from interpoint distance information; e.g., given only distance information, determine whether there corresponds a realizable configuration of points; a list of points in some dimension that attains the given interpoint distances. large black & white paperback <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) EUCLIDEAN DISTANCE MATRIX COMPLETION <s> Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra. Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists. <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) EUCLIDEAN DISTANCE MATRIX COMPLETION <s> This paper deals with the Riemannian geometry of the set of symmetric positive semidefinite matrices of fixed rank. This set is studied as an embedded submanifold of the real matrices equipped with the usual Euclidean metric. 
With this structure, we derive expressions of the tangent space and geodesics of the manifold, suitable for efficient numerical computations. <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) EUCLIDEAN DISTANCE MATRIX COMPLETION <s> Location awareness, providing ability to identify the location of sensor, machine, vehicle, and wearable device, is a rapidly growing trend of hyper-connected society and one of key ingredients for internet of things (IoT). In order to make a proper reaction to the collected information from devices, location information of things should be available at the data center. One challenge for the massive IoT networks is to identify the location map of whole sensor nodes from partially observed distance information. This is especially important for massive sensor networks, relay-based and hierarchical networks, and vehicular to everything (V2X) networks. The primary goal of this paper is to propose an algorithm to reconstruct the Euclidean distance matrix (and eventually the location map) from partially observed distance information. By casting the low-rank matrix completion problem into the unconstrained minimization problem in Riemannian manifold in which a notion of differentiability can be defined, we are able to solve the low-rank matrix completion problem efficiently using a modified conjugate gradient algorithm. From the analysis and numerical experiments, we show that the proposed method, termed localization in Riemannian manifold using conjugate gradient (LRM-CG), is effective in recovering the Euclidean distance matrix for both noiseless and noisy environments. <s> BIB005 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 2) EUCLIDEAN DISTANCE MATRIX COMPLETION <s> Location awareness, providing the ability to identify the location of sensor, machine, vehicle, and wearable device, is a rapidly growing trend of hyper-connected society and one of the key ingredients for the Internet of Things (IoT) era. In order to make a proper reaction to the collected information from things , location information of things should be available at the data center. One challenge for the IoT networks is to identify the location map of whole nodes from partially observed distance information. The aim of this paper is to present an algorithm to recover the Euclidean distance matrix (and eventually the location map) from partially observed distance information. By casting the low-rank matrix completion problem into the unconstrained minimization problem in a Riemannian manifold in which a notion of differentiability can be defined, we solve the low-rank matrix completion problem using a modified conjugate gradient algorithm. From the convergence analysis, we show that localization in Riemannian manifold using conjugate gradient (LRM-CG) converges linearly to the original Euclidean distance matrix under the extended Wolfe’s conditions. From the numerical experiments, we demonstrate that the proposed method, called LRM-CG, is effective in recovering the Euclidean distance matrix. <s> BIB006
|
Low-rank Euclidean distance matrix completion arises in the localization problem (e.g., sensor node localization in IoT networks). Let $\{\mathbf{z}_i\}_{i=1}^{n}$ be sensor locations in the k-dimensional Euclidean space (k = 2 or k = 3). Then, the Euclidean distance matrix $\mathbf{M} = (m_{ij}) \in \mathbb{R}^{n \times n}$ of the sensor nodes is defined as $m_{ij} = \|\mathbf{z}_i - \mathbf{z}_j\|_2^2$. It is obvious that $\mathbf{M}$ is symmetric with diagonal elements being zero (i.e., $m_{ii} = 0$). As mentioned, the rank of the Euclidean distance matrix $\mathbf{M}$ is at most k + 2 (i.e., $\operatorname{rank}(\mathbf{M}) \le k+2$). Also, one can show that a matrix $\mathbf{D} \in \mathbb{R}^{n \times n}$ is a Euclidean distance matrix if and only if $\mathbf{D} = \mathbf{D}^T$ and BIB002
$$-\Big(\mathbf{I} - \frac{1}{n}\mathbf{h}\mathbf{h}^T\Big) \mathbf{D} \Big(\mathbf{I} - \frac{1}{n}\mathbf{h}\mathbf{h}^T\Big) \succeq 0,$$
where $\mathbf{h} = [1\ 1\ \cdots\ 1]^T \in \mathbb{R}^n$. Using these, the problem to recover the Euclidean distance matrix $\mathbf{M}$ can be formulated as
$$\begin{aligned} \min_{\mathbf{D}}\ & \operatorname{rank}(\mathbf{D}) \\ \text{subject to}\ & P_\Omega(\mathbf{D}) = P_\Omega(\mathbf{M}),\quad \mathbf{D} = \mathbf{D}^T, \\ & -\Big(\mathbf{I} - \frac{1}{n}\mathbf{h}\mathbf{h}^T\Big) \mathbf{D} \Big(\mathbf{I} - \frac{1}{n}\mathbf{h}\mathbf{h}^T\Big) \succeq 0. \end{aligned} \tag{57}$$
Let $\mathbf{Y} = \mathbf{Z}\mathbf{Z}^T$ where $\mathbf{Z} = [\mathbf{z}_1 \cdots \mathbf{z}_n]^T \in \mathbb{R}^{n \times k}$ is the matrix of sensor locations. Then, one can easily check that $m_{ij} = \|\mathbf{z}_i\|_2^2 + \|\mathbf{z}_j\|_2^2 - 2\langle \mathbf{z}_i, \mathbf{z}_j \rangle = y_{ii} + y_{jj} - 2y_{ij}$. Thus, by letting $g(\mathbf{Y}) = \operatorname{diag}(\mathbf{Y})\mathbf{h}^T + \mathbf{h}\operatorname{diag}(\mathbf{Y})^T - 2\mathbf{Y}$, the problem in (57) can be equivalently formulated as
$$\min_{\mathbf{Y} \succeq 0,\ \operatorname{rank}(\mathbf{Y}) = k} \|P_\Omega(g(\mathbf{Y})) - P_\Omega(\mathbf{M})\|_F^2. \tag{59}$$
Since the feasible set associated with the problem in (59) is a smooth Riemannian manifold BIB001 , BIB004 , an extension of the Euclidean space on which a notion of differentiation exists BIB003 , various gradient-based optimization techniques such as the steepest descent, Newton, and conjugate gradient algorithms can be applied to solve (59) BIB005 , BIB006 , BIB003 .
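As a quick illustration of the identity $\mathbf{M} = g(\mathbf{Z}\mathbf{Z}^T)$ used above, the following sketch (our own; the network size and locations are arbitrary) constructs a Euclidean distance matrix from random sensor locations and checks that its rank is at most k + 2.

```python
import numpy as np

# Minimal sketch (illustrative): build a Euclidean distance matrix via
# g(Y) = diag(Y) h^T + h diag(Y)^T - 2Y with Y = Z Z^T, and verify its rank.
rng = np.random.default_rng(1)
n, k = 50, 2
Z = rng.uniform(0, 100, size=(n, k))      # sensor locations in 2-D

Y = Z @ Z.T                               # Gram matrix (PSD, rank <= k)
h = np.ones((n, 1))
d = np.diag(Y).reshape(-1, 1)
M = d @ h.T + h @ d.T - 2 * Y             # m_ij = ||z_i - z_j||_2^2

print(np.linalg.matrix_rank(M))           # at most k + 2 = 4
```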
|
Low-Rank Matrix Completion: A Contemporary Survey <s> 3) CONVOLUTIONAL NEURAL NETWORK BASED MATRIX COMPLETION <s> We propose a novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph. Our approach is based on defining scaling using the the graph analogue of the Fourier domain, namely the spectral decomposition of the discrete graph Laplacian $\L$. Given a wavelet generating kernel $g$ and a scale parameter $t$, we define the scaled wavelet operator $T_g^t = g(t\L)$. The spectral graph wavelets are then formed by localizing this operator by applying it to an indicator function. Subject to an admissibility condition on $g$, this procedure defines an invertible transform. We explore the localization properties of the wavelets in the limit of fine scales. Additionally, we present a fast Chebyshev polynomial approximation algorithm for computing the transform that avoids the need for diagonalizing $\L$. We highlight potential applications of the transform through examples of wavelets on graphs corresponding to a variety of different problem domains. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 3) CONVOLUTIONAL NEURAL NETWORK BASED MATRIX COMPLETION <s> This paper proposes AutoRec, a novel autoencoder framework for collaborative filtering (CF). Empirically, AutoRec's compact and efficiently trainable model outperforms state-of-the-art CF techniques (biased matrix factorization, RBM-CF and LLORMA) on the Movielens and Netflix datasets. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 3) CONVOLUTIONAL NEURAL NETWORK BASED MATRIX COMPLETION <s> Deep Learning's recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. ::: In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate. <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 3) CONVOLUTIONAL NEURAL NETWORK BASED MATRIX COMPLETION <s> In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. 
Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs. <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 3) CONVOLUTIONAL NEURAL NETWORK BASED MATRIX COMPLETION <s> We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin. <s> BIB005 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 3) CONVOLUTIONAL NEURAL NETWORK BASED MATRIX COMPLETION <s> Matrix completion models are among the most common formulations of recommender systems. Recent works have showed a boost of performance of these techniques when introducing the pairwise relationships between users/items in the form of graphs, and imposing smoothness priors on these graphs. However, such techniques do not fully exploit the local stationarity structures of user/item graphs, and the number of parameters to learn is linear w.r.t. the number of users and items. We propose a novel approach to overcome these limitations by using geometric deep learning on graphs. Our matrix completion architecture combines graph convolutional neural networks and recurrent neural networks to learn meaningful statistical graph-structured patterns and the non-linear diffusion process that generates the known ratings. This neural network system requires a constant number of parameters independent of the matrix size. We apply our method on both synthetic and real datasets, showing that it outperforms state-of-the-art techniques. <s> BIB006 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 3) CONVOLUTIONAL NEURAL NETWORK BASED MATRIX COMPLETION <s> Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most of deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been an increasing interest in geometric deep learning, attempting to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications from the domains of network analysis, computational social science, or computer graphics. In this paper, we propose a unified framework allowing to generalize CNN architectures to non-Euclidean domains (graphs and manifolds) and learn local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. 
We test the proposed method on standard tasks from the realms of image-, graph-and 3D shape analysis and show that it consistently outperforms previous approaches. <s> BIB007 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 3) CONVOLUTIONAL NEURAL NETWORK BASED MATRIX COMPLETION <s> We consider the problem of channel estimation for millimeter wave (mmWave) systems, where, to minimize the hardware complexity and power consumption, an analog transmit beamforming and receive combining structure with only one radio frequency (RF) chain at the base station (BS) and mobile station (MS) is employed. Most existing works for mmWave channel estimation exploit sparse scattering characteristics of the channel. In addition to sparsity, mmWave channels may exhibit angular spreads over the angle of arrival (AoA), angle of departure (AoD), and elevation domains. In this paper, we show that angular spreads give rise to a useful low-rank structure that, along with the sparsity, can be simultaneously utilized to reduce the sample complexity, i.e. the number of samples needed to successfully recover the mmWave channel. Specifically, to effectively leverage the joint sparse and low-rank structure, we develop a two-stage compressed sensing method for mmWave channel estimation, where the sparse and low-rank properties are respectively utilized in two consecutive stages, namely, a matrix completion stage and a sparse recovery stage. Our theoretical analysis reveals that the proposed two-stage scheme can achieve a lower sample complexity than a direct compressed sensing method that exploits only the sparse structure of the mmWave channel. Simulation results are provided to corroborate our theoretical results and to show the superiority of the proposed two-stage method. <s> BIB008
|
In recent years, approaches that use CNN to solve the LRMC problem have been proposed. These approaches are particularly useful when the desired low-rank matrix is expressed in a graph structure (e.g., the recommendation matrix with a user graph expressing the similarity between users' rating results) BIB002 - BIB004 . The main idea of CNN-based LRMC algorithms is to express the low-rank matrix as a graph structure and then apply CNN to the constructed graph to recover the desired matrix.
Graphical Model of a Low-Rank Matrix: Suppose $\mathbf{M} \in \mathbb{R}^{n_1 \times n_2}$ is the rating matrix in which the columns and rows are indexed by users and products, respectively. The first step of the CNN-based LRMC algorithm is to model the column and row graphs of $\mathbf{M}$ using the correlations between its columns and rows. Specifically, in the column graph $G_c$ of $\mathbf{M}$, users are represented as vertices, and two vertices i and j are connected by an undirected edge if the correlation $\rho_{ij} = \frac{|\langle \mathbf{m}_i, \mathbf{m}_j \rangle|}{\|\mathbf{m}_i\|_2 \|\mathbf{m}_j\|_2}$ between the i-th and j-th columns of $\mathbf{M}$ is larger than a pre-determined threshold $\epsilon$. Similarly, we construct the row graph $G_r$ of $\mathbf{M}$ by denoting each row (product) of $\mathbf{M}$ as a vertex and then connecting strongly correlated vertices. To express the connections, we define the adjacency matrix of each graph. The adjacency matrix $\mathbf{W}_c = (w^c_{ij}) \in \mathbb{R}^{n_2 \times n_2}$ of the column graph $G_c$ is defined as
$$w^c_{ij} = \begin{cases} 1 & \text{if the vertices (users) } i \text{ and } j \text{ are connected}, \\ 0 & \text{otherwise}. \end{cases}$$
The adjacency matrix $\mathbf{W}_r = (w^r_{ij}) \in \mathbb{R}^{n_1 \times n_1}$ of the row graph $G_r$ is defined in a similar way.
CNN-based LRMC: Let $\mathbf{U} \in \mathbb{R}^{n_1 \times r}$ and $\mathbf{V} \in \mathbb{R}^{n_2 \times r}$ be matrices such that $\mathbf{M} = \mathbf{U}\mathbf{V}^T$. The primary task of the CNN-based approach is to find functions $f_r$ and $f_c$ mapping the vertex sets of the row and column graphs $G_r$ and $G_c$ of $\mathbf{M}$ to $\mathbf{U}$ and $\mathbf{V}$, respectively. Here, each vertex of $G_r$ (respectively $G_c$) is mapped to each row of $\mathbf{U}$ (respectively $\mathbf{V}$) by $f_r$ (respectively $f_c$). Since it is difficult to express $f_r$ and $f_c$ explicitly, we can learn these nonlinear mappings using CNN-based models. In the CNN-based LRMC approach, $\mathbf{U}$ and $\mathbf{V}$ are initialized at random and updated in each iteration. Specifically, $\mathbf{U}$ and $\mathbf{V}$ are updated to minimize the following loss function BIB006 :
$$\ell(\mathbf{U}, \mathbf{V}) = \sum_{(i,j):\, w^r_{ij} = 1} \|\mathbf{u}_i - \mathbf{u}_j\|_2^2 + \sum_{(i,j):\, w^c_{ij} = 1} \|\mathbf{v}_i - \mathbf{v}_j\|_2^2 + \tau \|P_\Omega(\mathbf{U}\mathbf{V}^T) - P_\Omega(\mathbf{M})\|_F^2, \tag{61}$$
where $\tau$ is a regularization parameter. In other words, we find $\mathbf{U}$ and $\mathbf{V}$ such that the Euclidean distance between the connected vertices is minimized (see $\|\mathbf{u}_i - \mathbf{u}_j\|_2$ ($w^r_{ij} = 1$) and $\|\mathbf{v}_i - \mathbf{v}_j\|_2$ ($w^c_{ij} = 1$) in (61)). The update procedures of $\mathbf{U}$ and $\mathbf{V}$ are [67]:
1) Initialize $\mathbf{U}$ and $\mathbf{V}$ at random and assign each row of $\mathbf{U}$ and $\mathbf{V}$ to each vertex of the row graph $G_r$ and the column graph $G_c$, respectively.
2) Extract the feature matrices $\widetilde{\mathbf{U}}$ and $\widetilde{\mathbf{V}}$ by performing a graph-based convolution operation on $G_r$ and $G_c$, respectively.
3) Update $\mathbf{U}$ and $\mathbf{V}$ using the feature matrices $\widetilde{\mathbf{U}}$ and $\widetilde{\mathbf{V}}$, respectively.
4) Compute the loss function in (61) using the updated $\mathbf{U}$ and $\mathbf{V}$ and perform the backpropagation to update the filter parameters.
5) Repeat the above procedures until the value of the loss function is smaller than a pre-chosen threshold.
One important issue in the CNN-based LRMC approach is to define a graph-based convolution operation to extract the feature matrices $\widetilde{\mathbf{U}}$ and $\widetilde{\mathbf{V}}$ (see the second step). Note that the input data $G_r$ and $G_c$ do not lie on regular lattices like images, and thus the classical CNN cannot be directly applied to $G_r$ and $G_c$. One possible option is to define the convolution operation in the Fourier domain of the graph. In recent years, CNN models based on the Fourier transformation of graph-structured data have been proposed - BIB007 . In , an approach to use the eigendecomposition of the Laplacian has been proposed. To further reduce the model complexity, CNN models using polynomial filters have been proposed BIB003 - BIB005 . In essence, the Fourier transform of a graph can be computed using the (normalized) graph Laplacian. Let $\mathbf{R}_r$ be the graph Laplacian of $G_r$ (i.e., $\mathbf{R}_r = \mathbf{D}_r - \mathbf{W}_r$, where $\mathbf{D}_r$ is the diagonal degree matrix of $G_r$) BIB001 . Then, the graph Fourier transform $\mathcal{F}_r(\mathbf{u})$ of a vertex assigned with the vector $\mathbf{u}$ is defined as
$$\mathcal{F}_r(\mathbf{u}) = \mathbf{Q}_r^T \mathbf{u}, \tag{62}$$
where $\mathbf{R}_r = \mathbf{Q}_r \boldsymbol{\Lambda}_r \mathbf{Q}_r^T$ is an eigendecomposition of the graph Laplacian $\mathbf{R}_r$ BIB001 . Also, the inverse graph Fourier transform is defined as $\mathcal{F}_r^{-1}(\mathbf{u}') = \mathbf{Q}_r \mathbf{u}'$. Let $\mathbf{z}$ be the filter used in the convolution; then the output $\mathbf{u}'$ of the graph-based convolution on a vertex assigned with the vector $\mathbf{u}$ is defined as BIB001 , BIB004
$$\mathbf{u}' = \mathcal{F}_r^{-1}\big(\mathcal{F}_r(\mathbf{z}) \odot \mathcal{F}_r(\mathbf{u})\big). \tag{63}$$
From (62) and (63), the convolution output can be expressed as
$$\mathbf{u}' = \mathbf{Q}_r \mathbf{G} \mathbf{Q}_r^T \mathbf{u},$$
where $\mathbf{G} = \operatorname{diag}(\mathcal{F}_r(\mathbf{z}))$ is the matrix of filter parameters defined in the graph Fourier domain. (One can easily check that $\mathcal{F}_r^{-1}(\mathcal{F}_r(\mathbf{u})) = \mathbf{u}$ and $\mathcal{F}_r(\mathcal{F}_r^{-1}(\mathbf{u}')) = \mathbf{u}'$.) We next update $\mathbf{U}$ and $\mathbf{V}$ using the feature matrices $\widetilde{\mathbf{U}}$ and $\widetilde{\mathbf{V}}$. In BIB006 , a cascade of a multi-graph CNN followed by a long short-term memory (LSTM) recurrent neural network BIB008 has been proposed. The computational cost of this approach is $O(r|\Omega| + r^2 n_1 + r^2 n_2)$, which is much lower than that of the SVD-based LRMC techniques (i.e., $O(r n_1 n_2)$) as long as $r \ll \min(n_1, n_2)$. Finally, we compute the loss function $\ell(\mathbf{U}_i, \mathbf{V}_i)$ in (61) and then update the filter parameters using the backpropagation. Suppose $\{\mathbf{U}_i\}_i$ and $\{\mathbf{V}_i\}_i$ converge to $\mathbf{U}^{\star}$ and $\mathbf{V}^{\star}$, respectively; then the estimate of $\mathbf{M}$ obtained by the CNN-based LRMC is $\widehat{\mathbf{M}} = \mathbf{U}^{\star} \mathbf{V}^{\star T}$.
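For illustration, the graph Fourier transform and the spectral convolution in (62) and (63) can be implemented in a few lines. The following is a minimal sketch (our own, using the unnormalized Laplacian R = D - W on a random graph), not the full CNN-based LRMC pipeline.

```python
import numpy as np

# Minimal sketch (illustrative): graph Fourier transform and spectral graph
# convolution on a row graph, following the definitions (62) and (63) above.
def graph_laplacian(W):
    D = np.diag(W.sum(axis=1))
    return D - W                          # unnormalized Laplacian R = D - W

rng = np.random.default_rng(2)
n1 = 6
W_r = (rng.random((n1, n1)) > 0.6).astype(float)
W_r = np.triu(W_r, 1)
W_r = W_r + W_r.T                         # symmetric 0/1 adjacency matrix

R_r = graph_laplacian(W_r)
lam, Q = np.linalg.eigh(R_r)              # R_r = Q diag(lam) Q^T

F = lambda u: Q.T @ u                     # graph Fourier transform (62)
F_inv = lambda u: Q @ u                   # inverse transform

u = rng.standard_normal(n1)               # signal on the vertices
z = rng.standard_normal(n1)               # convolution filter
u_conv = F_inv(F(z) * F(u))               # spectral convolution (63)
print(u_conv)
print(np.allclose(F_inv(F(u)), u))        # True: the transform is invertible
```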
|
Low-Rank Matrix Completion: A Contemporary Survey <s> 4) ATOMIC NORM MINIMIZATION <s> The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), Matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). ::: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, in abstract harmonic analysis, total variation denoising, and multiscale edge denoising. ::: BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 4) ATOMIC NORM MINIMIZATION <s> In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered are those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases such as sparse vectors and low-rank matrices, as well as several others including sums of a few permutations matrices, low-rank tensors, orthogonal matrices, and atomic measures. The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. The quality of these approximations affects the number of measurements required for recovery. Thus this work extends the catalog of simple models that can be recovered from limited linear information via tractable convex programming. 
<s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 4) ATOMIC NORM MINIMIZATION <s> The sub-Nyquist estimation of line spectra is a classical problem in signal processing, but currently popular subspace-based techniques have few guarantees in the presence of noise and rely on a priori knowledge about system model order. Motivated by recent work on atomic norms in inverse problems, we propose a new approach to line spectrum estimation that provides theoretical guarantees for the mean-square-error performance in the presence of noise and without advance knowledge of the model order. We propose an abstract theory of denoising with atomic norms which is specialized to provide a convex optimization problem for estimating the frequencies and phases of a mixture of complex exponentials with guaranteed bounds on the mean-squared-error. In general, our proposed optimization problem has no known polynomial time solution, but we provide an efficient algorithm, called DAST, based on the Fast Fourier Transform that achieves nearly the same error rate. We compare DAST with Cadzow's canonical alternating projection algorithm, which performs marginally better under high signal-to-noise ratios when the model order is known exactly, and demonstrate experimentally that DAST outperforms other denoising techniques, including Cadzow's, over a wide range of signal-to-noise ratios. <s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 4) ATOMIC NORM MINIMIZATION <s> In many signal processing applications, the aim is to reconstruct a signal that has a simple representation with respect to a certain basis or frame. Fundamental elements of the basis known as “atoms” allow us to define “atomic norms” that can be used to formulate convex regularizations for the reconstruction problem. Efficient algorithms are available to solve these formulations in certain special cases, but an approach that works well for general atomic norms, both in terms of speed and reconstruction accuracy, remains to be found. This paper describes an optimization algorithm called CoGEnT that produces solutions with succinct atomic representations for reconstruction problems, generally formulated with atomic-norm constraints. CoGEnT combines a greedy selection scheme based on the conditional gradient approach with a backward (or “truncation”) step that exploits the quadratic nature of the objective to reduce the basis size. We establish convergence properties and validate the algorithm via extensive numerical experiments on a suite of signal processing applications. Our algorithm and analysis also allow for inexact forward steps and for occasional enhancements of the current representation to be performed. CoGEnT can outperform the basic conditional gradient method, and indeed many methods that are tailored to specific applications, when the enhancement and truncation steps are defined appropriately. We also introduce several novel applications that are enabled by the atomic-norm framework, including tensor completion, moment problems in signal processing, and graph deconvolution. <s> BIB004 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> 4) ATOMIC NORM MINIMIZATION <s> As a paradigm to recover the sparse signal from a small set of linear measurements, compressed sensing (CS) has stimulated a great deal of interest in recent years. In order to apply the CS techniques to wireless communication systems, there are a number of things to know and also several issues to be considered. 
However, it is not easy to grasp simple and easy answers to the issues raised while carrying out research on CS. The main purpose of this paper is to provide essential knowledge and useful tips and tricks that wireless communication researchers need to know when designing CS-based wireless systems. First, we present an overview of the CS technique, including basic setup, sparse recovery algorithm, and performance guarantee. Then, we describe three distinct subproblems of CS, viz., sparse estimation, support identification, and sparse detection, with various wireless communication applications. We also address main issues encountered in the design of CS-based wireless communication systems. These include potentials and limitations of CS techniques, useful tips that one should be aware of, subtle points that one should pay attention to, and some prior knowledge to achieve better performance. Our hope is that this paper will be a useful guide for wireless communication researchers and even non-experts to get the gist of CS techniques. <s> BIB005
|
In ADMiRA, a low-rank matrix is represented using a small number of rank-one matrices. Atomic norm minimization (ANM) generalizes this idea to arbitrary data in which the signal is represented using a small number of basis elements called atoms. Examples of ANM include sound navigation ranging systems BIB002 and line spectral estimation BIB003 . To be specific, let $\mathbf{X} = \sum_{i=1}^{k} \alpha_i \mathbf{H}_i$ be a signal with k distinct frequency components $\mathbf{H}_i \in \mathbb{C}^{n_1 \times n_2}$. Then the atom is defined as $\mathbf{H}_i = \mathbf{a}(f_i)\mathbf{b}_i^H$, where $\mathbf{a}(f_i) \in \mathbb{C}^{n_1}$ is the steering vector associated with the frequency $f_i$ and $\mathbf{b}_i \in \mathbb{C}^{n_2}$ is the vector of normalized coefficients (i.e., $\|\mathbf{b}_i\|_2 = 1$). We denote the set of such atoms $\mathbf{H}_i$ as $\mathcal{H}$. Using $\mathcal{H}$, the atomic norm of $\mathbf{X}$ is defined as
$$\|\mathbf{X}\|_{\mathcal{H}} = \inf \Big\{ \sum_i \alpha_i : \mathbf{X} = \sum_i \alpha_i \mathbf{H}_i,\ \alpha_i \ge 0,\ \mathbf{H}_i \in \mathcal{H} \Big\}.$$
Note that the atomic norm $\|\mathbf{X}\|_{\mathcal{H}}$ is a generalization of the $\ell_1$-norm and also the nuclear norm to the space of sinusoidal signals BIB005 , BIB003 . Let $\mathbf{X}_o$ be the observation of $\mathbf{X}$; then the problem to reconstruct $\mathbf{X}$ can be modeled as the ANM problem:
$$\min_{\mathbf{X}}\ \frac{1}{2}\|\mathbf{X} - \mathbf{X}_o\|_F^2 + \tau \|\mathbf{X}\|_{\mathcal{H}}, \tag{68}$$
where $\tau > 0$ is a regularization parameter. By using [87, Theorem 1], the atomic norm $\|\mathbf{X}\|_{\mathcal{H}}$ admits an equivalent semidefinite characterization, so that the problem (68) can be recast as an SDP (70). Note that the problem (70) can be solved via an SDP solver (e.g., SDPT3 ) or greedy algorithms BIB001 , BIB004 .
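The following minimal sketch (our own) illustrates the atomic decomposition above. The complex-exponential form of the steering vector a(f) is one common convention in line spectral estimation and is an assumption here, as are all numerical values.

```python
import numpy as np

# Minimal sketch (assumed notation): build a signal X as a sparse combination
# of atoms H_i = a(f_i) b_i^H, as in the definition of the atomic set above.
def steering(f, n1):
    # Assumed steering-vector convention: entries exp(j 2 pi f t), t = 0..n1-1
    return np.exp(2j * np.pi * f * np.arange(n1))

n1, n2 = 16, 4
rng = np.random.default_rng(3)
freqs = [0.10, 0.35, 0.62]         # k = 3 distinct frequencies
alphas = [1.0, 0.7, 0.4]           # nonnegative combination weights

X = np.zeros((n1, n2), dtype=complex)
for f, alpha in zip(freqs, alphas):
    b = rng.standard_normal(n2) + 1j * rng.standard_normal(n2)
    b /= np.linalg.norm(b)         # normalized coefficients, ||b||_2 = 1
    X += alpha * np.outer(steering(f, n1), b.conj())   # atom H = a(f) b^H

# By definition of the atomic norm, ||X||_H <= sum_i alpha_i for this
# particular decomposition.
print(sum(alphas))
```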
|
Low-Rank Matrix Completion: A Contemporary Survey <s> IV. NUMERICAL EVALUATION <s> Compressed sensing aims to undersample certain high-dimensional signals, yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity-undersampling tradeoff is achieved when reconstructing by convex optimization -- which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity-undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity-undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity-undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity-undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this new, apparently very different theoretical formalism. <s> BIB001
|
In this section, we study the performance of the LRMC algorithms. In our experiments, we focus on the algorithms listed in Table 5, which summarizes the LRMC algorithms (the rank of the desired low-rank matrix is r and n = max(n_1, n_2)). The original matrix is generated by the product of two random matrices $\mathbf{A} \in \mathbb{R}^{n_1 \times r}$ and $\mathbf{B} \in \mathbb{R}^{n_2 \times r}$, i.e., $\mathbf{M} = \mathbf{A}\mathbf{B}^T$. The entries of these two matrices, $a_{ij}$ and $b_{pq}$, are independently and identically distributed random variables sampled from the normal distribution $\mathcal{N}(0, 1)$. Sampled elements are also chosen at random. The sampling ratio p is defined as
$$p = \frac{|\Omega|}{n_1 n_2},$$
where $|\Omega|$ is the cardinality (number of elements) of $\Omega$. In the noisy scenario, we use the additive noise model in which the observed matrix $\mathbf{M}_o$ is expressed as $\mathbf{M}_o = \mathbf{M} + \mathbf{N}$, where the noise matrix $\mathbf{N}$ is formed by i.i.d. random entries sampled from the Gaussian distribution $\mathcal{N}(0, \sigma^2)$. For a given SNR, $\sigma^2 = \frac{1}{n_1 n_2}\|\mathbf{M}\|_F^2 \, 10^{-\frac{\mathrm{SNR}}{10}}$. Note that the parameters of each LRMC algorithm are chosen from the reference paper. For each point of the algorithm, we run 1,000 independent trials and then plot the average value. In the performance evaluation of the LRMC algorithms, we use the mean square error (MSE) and the exact recovery ratio, which are defined, respectively, as
$$\mathrm{MSE} = \frac{1}{n_1 n_2}\|\widehat{\mathbf{M}} - \mathbf{M}\|_F^2, \qquad R = \frac{\text{number of successful trials}}{\text{total trials}},$$
where $\widehat{\mathbf{M}}$ is the reconstructed low-rank matrix. We say a trial is successful if the MSE performance is less than the threshold $\epsilon$. In our experiments, we set $\epsilon = 10^{-6}$. Here, R can be used to represent the probability of successful recovery. We first examine the exact recovery ratio of the LRMC algorithms in terms of the sampling ratio and the rank of M. In our experiments, we set $n_1 = n_2 = 100$ and compute the phase transition BIB001 of the LRMC algorithms. Note that the phase transition is a contour plot of the success probability P (we set P = 0.5) where the sampling ratio (x-axis) and the rank (y-axis) form a regular grid of the x-y plane. The contour plot separates the plane into two areas: the area above the curve is the one satisfying P < 0.5, and the area below the curve is the region achieving P > 0.5 BIB001 (see Fig. 8). The higher the curve, therefore, the better the algorithm. In general, the LRMC algorithms perform poorly when the matrix has a small number of observed entries and the rank is large. Overall, NNM-based algorithms perform better than FNM-based algorithms. In particular, the NNM technique using the SDPT3 solver outperforms the rest because the convex optimization technique always finds a global optimum while other techniques often converge to a local optimum. In order to investigate the computational efficiency of the LRMC algorithms, we measure the running time of each algorithm as a function of the rank (see Fig. 9). The running time is measured in seconds, using a 64-bit PC with an Intel i5-4670 CPU running at 3.4 GHz. We observe that the convex algorithms have a relatively high running time. We next examine the efficiency of the LRMC algorithms for different problem sizes (see Table 6, which reports the MSE results for different problem sizes where rank(M) = 5 and p = 2 × DOF). For iterative LRMC algorithms, we set the maximum number of iterations to 300. We see that LRMC algorithms such as SVT, IRLS-M, ASD, ADMiRA, and LRGeomCG run fast. For example, it takes less than a minute for these algorithms to reconstruct a 1000 × 1000 matrix, while the running time of the SDPT3 solver is more than 5 minutes. Further reduction of the running time can be achieved using alternating projection-based algorithms such as LMaFit. For example, it takes about one second to reconstruct a (1000 × 1000)-dimensional matrix with rank 5 using LMaFit. Therefore, when the exact recovery of the original matrix is unnecessary, the FNM-based technique would be a good choice. In the noisy scenario, we also observe that FNM-based algorithms perform well (see Fig. 10 and Fig. 11). In this experiment, we compute the MSE of the LRMC algorithms against the rank of the original low-rank matrix for different settings of SNR (i.e., SNR = 20 and 50 dB). We observe that in the low and mid SNR regime (e.g., SNR = 20 dB), TNNR-ADMM performs comparably to the NNM-based algorithms since the FNM-based cost function is robust to the noise. In the high SNR regime (e.g., SNR = 50 dB), the convex algorithm (NNM with SDPT3) exhibits the best performance in terms of the MSE. The performance of TNNR-ADMM is notably better than that of the rest of the LRMC algorithms. For example, given rank(M) = 20, the MSE of TNNR-ADMM is around 0.04, while the MSE of the rest is higher than 1. Finally, we apply the LRMC techniques to recover images corrupted by impulse noise. In this experiment, we use 256 × 256 standard grayscale images (e.g., the boat, cameraman, lena, and pepper images) and the salt-and-pepper noise model with different noise densities ρ = 0.3, 0.5, and 0.7. For the FNM-based LRMC techniques, the rank is given by the number of singular values $\sigma_i$ greater than a relative threshold $\epsilon > 0$, i.e., $\sigma_i > \epsilon \max_i \sigma_i$. From the simulation results, we observe that the peak SNR (pSNR), defined as the ratio of the maximum pixel value of the image to the noise variance, of all LRMC techniques is at least 52 dB when ρ = 0.3 (see Table 7). In particular, NNM using SDPT3, SVT, and IRLS-M outperform the rest, achieving pSNR ≥ 57 dB even with the high noise level ρ = 0.7.
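For reference, the following sketch (our own simplified harness, not the code used to produce the reported results) mirrors the experimental setup described above: generating M = AB^T, sampling entries with ratio p, adding Gaussian noise at a given SNR, and computing the MSE.

```python
import numpy as np

# Minimal sketch (assumed, mirroring the setup described above).
rng = np.random.default_rng(4)
n1 = n2 = 100
r, p, snr_db = 5, 0.4, 20

M = rng.standard_normal((n1, r)) @ rng.standard_normal((n2, r)).T
mask = rng.random((n1, n2)) < p                   # observed index set Omega
sigma2 = (np.linalg.norm(M, 'fro')**2 / (n1 * n2)) * 10 ** (-snr_db / 10)
M_o = M + np.sqrt(sigma2) * rng.standard_normal((n1, n2))

def mse(M_hat, M):
    return np.linalg.norm(M_hat - M, 'fro')**2 / M.size

# Plug any LRMC algorithm in here; as a trivial baseline we fill the
# unobserved entries with zeros (a real algorithm would do far better).
M_hat = np.where(mask, M_o, 0.0)
print(mse(M_hat, M), mse(M_hat, M) < 1e-6)        # MSE and success indicator
```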
|
Low-Rank Matrix Completion: A Contemporary Survey <s> APPENDIX A PROOF OF THE SDP FORM OF NNM <s> The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> APPENDIX A PROOF OF THE SDP FORM OF NNM <s> In this paper, we present a flexible low-rank matrix completion (LRMC) approach for topological interference management (TIM) in the partially connected $K$ -user interference channel. No channel state information (CSI) is required at the transmitters except the network topology information. The previous attempt on the TIM problem is mainly based on its equivalence to the index coding problem, but so far only a few index coding problems have been solved. In contrast, in this paper, we present an algorithmic approach to investigate the achievable degrees-of-freedom (DoFs) by recasting the TIM problem as an LRMC problem. Unfortunately, the resulting LRMC problem is known to be NP-hard, and the main contribution of this paper is to propose a Riemannian pursuit (RP) framework to detect the rank of the matrix to be recovered by iteratively increasing the rank. This algorithm solves a sequence of fixed-rank matrix completion problems. To address the convergence issues in the existing fixed-rank optimization methods, the quotient manifold geometry of the search space of fixed-rank matrices is exploited via Riemannian optimization. By further exploiting the structure of the low-rank matrix varieties, i.e., the closure of the set of fixed-rank matrices, we develop an efficient rank increasing strategy to find good initial points in the procedure of rank pursuit. Simulation results demonstrate that the proposed RP algorithm achieves a faster convergence rate and higher achievable DoFs for the TIM problem compared with the state-of-the-art methods. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> APPENDIX A PROOF OF THE SDP FORM OF NNM <s> We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin. 
<s> BIB003 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> APPENDIX A PROOF OF THE SDP FORM OF NNM <s> Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most of deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been an increasing interest in geometric deep learning, attempting to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications from the domains of network analysis, computational social science, or computer graphics. In this paper, we propose a unified framework allowing to generalize CNN architectures to non-Euclidean domains (graphs and manifolds) and learn local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. We test the proposed method on standard tasks from the realms of image-, graph-and 3D shape analysis and show that it consistently outperforms previous approaches. <s> BIB004
|
Proof: We recall that the standard form of an SDP is expressed as
$$\min_{\mathbf{Y}}\ \langle \mathbf{C}, \mathbf{Y} \rangle \quad \text{subject to} \quad \langle \mathbf{A}_k, \mathbf{Y} \rangle = b_k,\ k = 1, \cdots, l, \quad \mathbf{Y} \succeq 0, \tag{71}$$
where $\mathbf{C}$ is a given matrix, and $\{\mathbf{A}_k\}_{k=1}^{l}$ and $\{b_k\}_{k=1}^{l}$ are given sequences of matrices and constants, respectively. To convert the NNM problem in (11) into the standard SDP form in (71), we need a few steps. First, we convert the NNM problem in (11) into the epigraph form:
$$\min_{\mathbf{X}, t}\ t \quad \text{subject to} \quad \|\mathbf{X}\|_* \le t, \quad P_\Omega(\mathbf{X}) = P_\Omega(\mathbf{M}). \tag{72}$$
(Note that $\min_{\mathbf{X}} \|\mathbf{X}\|_* = \min_{\mathbf{X}} \min_{t: \|\mathbf{X}\|_* \le t} t = \min_{(\mathbf{X}, t): \|\mathbf{X}\|_* \le t} t$.) Next, we transform the constraints in (72) to generate the standard form in (71). We first consider the inequality constraint ($\|\mathbf{X}\|_* \le t$). Note that $\|\mathbf{X}\|_* \le t$ if and only if there are symmetric matrices $\mathbf{W}_1 \in \mathbb{R}^{n_1 \times n_1}$ and $\mathbf{W}_2 \in \mathbb{R}^{n_2 \times n_2}$ such that [21, Lemma 2]
$$\operatorname{tr}(\mathbf{W}_1) + \operatorname{tr}(\mathbf{W}_2) \le 2t \quad \text{and} \quad \begin{bmatrix} \mathbf{W}_1 & \mathbf{X} \\ \mathbf{X}^T & \mathbf{W}_2 \end{bmatrix} \succeq 0. \tag{73}$$
Then, by denoting $\mathbf{Y} = \begin{bmatrix} \mathbf{W}_1 & \mathbf{X} \\ \mathbf{X}^T & \mathbf{W}_2 \end{bmatrix} \in \mathbb{R}^{(n_1+n_2) \times (n_1+n_2)}$ and $\widetilde{\mathbf{M}} = \begin{bmatrix} \mathbf{0}_{n_1 \times n_1} & \mathbf{M} \\ \mathbf{M}^T & \mathbf{0}_{n_2 \times n_2} \end{bmatrix}$, where $\mathbf{0}_{s \times t}$ is the $(s \times t)$-dimensional zero matrix, the problem in (72) can be reformulated as
$$\min_{\mathbf{Y}, t}\ 2t \quad \text{subject to} \quad \operatorname{tr}(\mathbf{Y}) \le 2t, \quad \mathbf{Y} \succeq 0, \quad P_{\widetilde{\Omega}}(\mathbf{Y}) = P_{\widetilde{\Omega}}(\widetilde{\mathbf{M}}), \tag{74}$$
where $P_{\widetilde{\Omega}}(\mathbf{Y}) = \begin{bmatrix} \mathbf{0}_{n_1 \times n_1} & P_\Omega(\mathbf{X}) \\ (P_\Omega(\mathbf{X}))^T & \mathbf{0}_{n_2 \times n_2} \end{bmatrix}$ is the extended sampling operator. We now consider the equality constraint ($P_{\widetilde{\Omega}}(\mathbf{Y}) = P_{\widetilde{\Omega}}(\widetilde{\mathbf{M}})$) in (74). One can easily see that this condition is equivalent to
$$\langle \mathbf{Y}, \mathbf{e}_i \mathbf{e}_{j+n_1}^T \rangle = \langle \widetilde{\mathbf{M}}, \mathbf{e}_i \mathbf{e}_{j+n_1}^T \rangle, \quad (i, j) \in \Omega,$$
where $\{\mathbf{e}_1, \cdots, \mathbf{e}_{n_1+n_2}\}$ is the standard ordered basis of $\mathbb{R}^{n_1+n_2}$. Let $\mathbf{A}_k = \mathbf{e}_i \mathbf{e}_{j+n_1}^T$ and $b_k = \langle \widetilde{\mathbf{M}}, \mathbf{e}_i \mathbf{e}_{j+n_1}^T \rangle$ for each $(i, j) \in \Omega$. Then $\langle \mathbf{Y}, \mathbf{A}_k \rangle = b_k$, $k = 1, \cdots, |\Omega|$, and thus (74) can be reformulated as
$$\min_{\mathbf{Y}, t}\ 2t \quad \text{subject to} \quad \operatorname{tr}(\mathbf{Y}) \le 2t, \quad \mathbf{Y} \succeq 0, \quad \langle \mathbf{Y}, \mathbf{A}_k \rangle = b_k,\ k = 1, \cdots, |\Omega|. \tag{77}$$
For example, consider the case where the desired matrix $\mathbf{M}$ is given by $\mathbf{M} = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$ and the index set of observed entries is $\Omega = \{(2, 1), (2, 2)\}$. In this case, $\mathbf{A}_1 = \mathbf{e}_2 \mathbf{e}_3^T$, $\mathbf{A}_2 = \mathbf{e}_2 \mathbf{e}_4^T$, $b_1 = 2$, and $b_2 = 4$. (78) One can express (77) in the concise form (71), which is the desired result.
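In practice, one rarely forms this SDP by hand; modeling tools perform the reduction automatically. The following minimal sketch (our own, using the cvxpy package) solves the NNM problem directly, with cvxpy internally reducing the nuclear norm to an SDP of the form derived above.

```python
import numpy as np
import cvxpy as cp

# Minimal sketch (illustrative): nuclear-norm minimization (NNM) for matrix
# completion; problem sizes and sampling ratio are arbitrary choices.
rng = np.random.default_rng(5)
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = (rng.random((n, n)) < 0.6).astype(float)   # observed entries (Omega)

X = cp.Variable((n, n))
prob = cp.Problem(cp.Minimize(cp.normNuc(X)),
                  [cp.multiply(mask, X) == cp.multiply(mask, M)])
prob.solve()

# Relative recovery error; small when enough entries are observed
print(np.linalg.norm(X.value - M) / np.linalg.norm(M))
```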
|
Low-Rank Matrix Completion: A Contemporary Survey <s> APPENDIX B PERFORMANCE GUARANTEE OF NNM <s> We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? ::: ::: We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys $$m\ge C\,n^{1.2}r\log n$$ for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information. <s> BIB001 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> APPENDIX B PERFORMANCE GUARANTEE OF NNM <s> This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low rank matrix. These results improve on prior work by Candes and Recht, Candes and Tao, and Keshavan, Montanari, and Oh. The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory. <s> BIB002 </s> Low-Rank Matrix Completion: A Contemporary Survey <s> APPENDIX B PERFORMANCE GUARANTEE OF NNM <s> This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible; but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. ::: This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank r exactly by any method whatsoever (the information theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, that with minimum nuclear norm. 
As an example, we show that on the order of nr log(n) samples are needed to recover a random n × n matrix of rank r by any method, and to be sure, nuclear norm minimization succeeds as soon as the number of entries is of the form nr polylog(n). <s> BIB003
|
Sketch of proof: Exact recovery of the desired low-rank matrix M can be guaranteed under the uniqueness condition of the NNM problem BIB001 , BIB003 , BIB002 . To be specific, let $\mathbf{M} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T$ be the SVD of $\mathbf{M}$, where $\mathbf{U} \in \mathbb{R}^{n_1 \times r}$, $\boldsymbol{\Sigma} \in \mathbb{R}^{r \times r}$, and $\mathbf{V} \in \mathbb{R}^{n_2 \times r}$. Also, let $\mathbb{R}^{n_1 \times n_2} = T \oplus T^{\perp}$ be the orthogonal decomposition in which $T^{\perp}$ is defined as the subspace of matrices whose row and column spaces are orthogonal to the row and column spaces of $\mathbf{M}$, respectively. Here, $T$ is the orthogonal complement of $T^{\perp}$. It has been shown that $\mathbf{M}$ is the unique solution of the NNM problem if the following conditions hold true [22, Lemma 3.1]: 1) there exists a matrix $\mathbf{Y} = \mathbf{U}\mathbf{V}^T + \mathbf{W}$ such that $P_{\Omega}(\mathbf{Y}) = \mathbf{Y}$, $\mathbf{W} \in T^{\perp}$, and $\|\mathbf{W}\| < 1$; 2) the restriction of the sampling operator $P_{\Omega}$ to $T$ is an injective (one-to-one) mapping. The establishment of $\mathbf{Y}$ obeying 1) and 2) is in turn conditioned on the observation model of $\mathbf{M}$ and its intrinsic coherence property. Under a uniform sampling model of $\mathbf{M}$, suppose the coherence property of $\mathbf{M}$ satisfies

$$\max(\mu(\mathbf{U}), \mu(\mathbf{V})) \le \mu_0 \quad \text{and} \quad \max_{i,j} |e_{ij}| \le \mu_1 \sqrt{\frac{r}{n_1 n_2}},$$

where $\mu_0$ and $\mu_1$ are some constants, $e_{ij}$ is the $(i, j)$-th entry of $\mathbf{E} = \mathbf{U}\mathbf{V}^T$, and $\mu(\mathbf{U})$ and $\mu(\mathbf{V})$ are the coherences of the column and row spaces of $\mathbf{M}$, respectively. If the number $m$ of observed entries satisfies

$$m \ge C \max(\mu_1^2, \ \mu_0^{1/2}\mu_1, \ \mu_0 n^{1/4}) \, \gamma \, nr \log n, \qquad (80)$$

where $\gamma > 2$ is some constant and $n_1 = n_2 = n$, then $\mathbf{M}$ is the unique solution of the NNM problem with probability at least $1 - \beta n^{-\gamma}$. Further, if $r \le \mu_0^{-1} n^{1/5}$, (80) can be improved to $m \ge C \mu_0 \gamma \, n^{1.2} r \log n$ with the same success probability. One direct interpretation of this theorem is that the desired low-rank matrix can be reconstructed exactly using NNM with overwhelming probability even when $m$ is much less than $n_1 n_2$.
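The coherence quantities $\mu(\mathbf{U})$ and $\mu(\mathbf{V})$ appearing in the theorem can be computed directly. Below is a minimal sketch assuming the standard definition $\mu(\mathbf{U}) = (n/r)\max_i \|P_U \mathbf{e}_i\|^2$ from the matrix completion literature; the function name and the random test matrix are illustrative only.

```python
import numpy as np

def coherence(U):
    """mu(U) = (n / r) * max_i ||P_U e_i||^2 for an orthonormal basis U (n x r)."""
    n, r = U.shape
    leverage = np.sum(U ** 2, axis=1)   # ||P_U e_i||^2 = ||U^T e_i||^2
    return (n / r) * leverage.max()

rng = np.random.default_rng(0)
U, _, _ = np.linalg.svd(rng.standard_normal((100, 5)), full_matrices=False)
print(coherence(U))                     # small for random (incoherent) subspaces
```

Random subspaces yield small coherence, which is why random low-rank matrices are the favorable case for the sample-complexity bound above.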
|
Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> I. INTRODUCTION <s> This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed. <s> BIB001 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> I. INTRODUCTION <s> Wireless sensors and wireless sensor networks have come to the forefront of the scientific community recently. This is the consequence of engineering increasingly smaller sized devices, which enable many applications. The use of these sensors and the possibility of organizing them into networks have revealed many research issues and have highlighted new ways to cope with certain problems. In this paper, different applications areas where the use of such sensor networks has been proposed are surveyed <s> BIB002 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> I. INTRODUCTION <s> A wireless sensor network (WSN) has important applications such as remote environmental monitoring and target tracking. This has been enabled by the availability, particularly in recent years, of sensors that are smaller, cheaper, and intelligent. These sensors are equipped with wireless interfaces with which they can communicate with one another to form a network. The design of a WSN depends significantly on the application, and it must consider factors such as the environment, the application's design objectives, cost, hardware, and system constraints. The goal of our survey is to present a comprehensive review of the recent literature since the publication of [I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, A survey on sensor networks, IEEE Communications Magazine, 2002]. Following a top-down approach, we give an overview of several new applications and then review the literature on various aspects of WSNs. We classify the problems into three different categories: (1) internal platform and underlying operating system, (2) communication protocol stack, and (3) network services, provisioning, and deployment. We review the major development in these three categories and outline new challenges. <s> BIB003 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> I. INTRODUCTION <s> This book presents an in-depth study on recent advances and research in Wireless Sensor Networks (WSNs). Existing WSN applications are described, followed by discussing the ongoing research efforts on some WSNs applications that show the usefulness of sensor networks. Theoretical analysis and factors influencing protocol design are highlighted. The state-of-the-art protocol for WSN protocol stack is explored for transport, routing, data link and physical layers. Moreover, the open research issues are discussed for each of the protocol layers. Furthermore, the synchronization and localization problems in WSNs are investigated along with the existing solutions and open research issues. 
Finally, the existing evaluation approaches for WSNs including physical testbeds and software simulation environments are overviewed. <s> BIB004 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> I. INTRODUCTION <s> Rapid advances in Wireless Sensor Networks (WSNs) indicate that they are becoming increasingly complex. Consequently users and applications are becoming more demanding. Due to unique characteristics of WSNs, like small dimensions and limited resources and capabilities, Quality of Service (QoS) is imposed as one of the key factors of WSNs. In this paper, we surveyed two main approaches for QoS provisioning in WSNs: layered and cross-layer approach. QoS provisioning with layered approach is surveyed in three WSN layers: MAC, network and transport layer. Current developments show that they can be efficiently used for QoS provisioning. However, they consider QoS only as layer specific isolated set of problems and they are highly dependent on the performance of other layers. Cross-layer approach does not have the restrictions as layered approach and hence can dispose with information from all layers of the communication protocol stack. Although it has huge potential to become the most efficient solution for QoS provisioning in WSNs, current development indicate that there are still many issues and challenges that need to be overcome. Since the concept of the QoS is relatively new in WSNs, there are not a large number of patents currently dealing with this issue, however but in coming years a large increase in the number of such patents is expected. Available patents in this domain are described in the paper. <s> BIB005
|
WSNs are defined as wireless networks composed of a very large number of interconnected nodes which can sense a variety of data, communicate with each other, and have computation capabilities. The sensors are usually deployed in a scattered area, known as the sensor field. These sensors gather data from the environment and forward it to the Base Station (BS) through multiple hops. The BS, also known as the sink, usually communicates with the users through a satellite or an internet connection BIB001 . Due to their diverse and wide range of applications, Wireless Sensor Networks (WSNs) have gained considerable attention in recent years. Advances in miniaturization technologies, especially in Micro-Electro-Mechanical Systems (MEMS), have made it possible to develop Multi-functional Tiny Smart Sensors (MTSE). The MTSE now utilize WSNs, which are envisioned to completely replace their conventional network counterparts. This will enable WSNs to become an integral part of human lives. Based on their applications, WSNs can be divided into two main categories, i.e., tracking and monitoring BIB003 , BIB002 . Monitoring applications include indoor and outdoor environmental monitoring, such as industrial unit and process monitoring, seismic and structural monitoring, physical condition monitoring, and control monitoring. Tracking applications include tracking vehicles, humans, animals, and objects. WSNs can also be deployed to collect the various types of data mentioned above in almost every kind of physical environment, such as plain, underground, and undersea sensing fields. In every situation, the sensor network is constrained differently depending on the environment. Some such networks are described and explained in BIB003 and the references therein. However, WSNs still face many challenges such as limited power, bandwidth, mobility, and the absence of a central controller. The performance of any network, including WSNs, can be gauged, predicted, and improved once the parameters characterizing the network are determined accurately. These parameters include availability, bandwidth, latency, and error rate. The methods and techniques used to guarantee these parameters are known as Quality of Service (QoS) . At the present stage, WSNs need more attention in QoS provisioning, making it a hot issue in current research. However, incorporating QoS is not an easy task, usually due to the large number of nodes involved in the network BIB004 , . Some important aspects of WSNs, like energy conservation, protocol design, and architecture, have been explored in detail, but QoS support issues still need more attention BIB005 . Figure 1 shows a simple model in which more users can always be included in the network provided that the users are satisfied with its services. Hence, the basic objective of the network is to utilize the network resources in a way that provides QoS to users. The rest of this article is organized as follows. In Section I-A we present a short summary of QoS in WSNs, while in Section II we tabulate QoS-aware protocols designed for WSNs with their advantages, disadvantages, and QoS parameters. Comparison and evaluation of the proposed protocols are made in Section III, where we also briefly describe computational intelligence techniques for QoS management. The final Section IV includes the conclusion and some suggestions for future work.
|
Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> 2) QoS IN WSNs <s> Practical design and performance solutions for every ad hoc wireless networkAd Hoc Wireless Networks comprise mobile devices that use wireless transmission for communication. They can be set up anywhere and any time because they eliminate the complexities of infrastructure setup and central administration-and they have enormous commercial and military potential. Now, there's a book that addresses every major issue related to their design and performance. Ad Hoc Wireless Networks: Architectures and Protocols presents state-of-the-art techniques and solutions, and supports them with easy-to-understand examples. The book starts off with the fundamentals of wireless networking (wireless PANs, LANs, MANs, WANs, and wireless Internet) and goes on to address such current topics as Wi-Fi networks, optical wireless networks, and hybrid wireless architectures. Coverage includes: Medium access control, routing, multicasting, and transport protocols QoS provisioning, energy management, security, multihop pricing, and much more In-depth discussion of wireless sensor networks and ultra wideband technology More than 200 examples and end-of-chapter problemsAd Hoc Wireless Networks is an invaluable resource for every network engineer, technical manager, and researcher designing or building ad hoc wireless networks. <s> BIB001 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> 2) QoS IN WSNs <s> Sensor networks are distributed networks made up of small sensing devices equipped with processors, memory, and short-range wireless communication. They differ from traditional computer networks in that they have resource constraints, unbalanced mixture traffic, data redundancy, network dynamics, and energy balance. Work within wireless sensor networks (WSNs) Quality of service (QoS) has been isolated and specific either on certain functional layers or application scenarios. However the area of sensor network quality of service (QoS) remains largely open. In this paper we define WSNs QoS requirements within a WSNs application, and then analyzing Issues for QoS Monitoring. <s> BIB002 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> 2) QoS IN WSNs <s> In this paper, we assess the state of the art of Quality of Services (QoS) support in wireless sensor networks (WSNs). Unlike traditional end-to-end multimedia applications, many non-end-to-end mission-critical applications envisioned for WSNs have brought forward new QoS requirements on the network. Further, unique characteristics of WSNs, such as extremely resource-constrained sensors, large-scale random deployment, and novel data-centric communication protocols, pose unprecedented challenges in the area of QoS support in WSNs. Thus, we first review the techniques for QoS support in traditional networks, analyze new QoS requirements in WSNs from a wide variety of applications classified by data delivery models, and propose some non-end-to-end collective QoS parameters. Next, the challenges of QoS support in this new paradigm are presented. Finally, we comment on current research efforts and identify many exciting open issues in order to stimulate more research interest in this largely unexplored area. 
<s> BIB003
|
WSNs are used for a wide range of applications, and each application has its own QoS requirements such as delay sensitivity, energy, and network lifetime. QoS is an umbrella term for a group of technologies that permit network-sensitive applications to demand and receive expected service levels in terms of QoS requirements . In WSNs, QoS requirements can be specified from two perspectives BIB002 : one is called Network-Specific QoS and the other Application-Specific QoS. In the application-specific view, each application has different QoS parameters such as data truthfulness, aggregation delay, fault tolerance, and exposure BIB003 , . However, in WSNs every class of application also has some common requirements, so the network must fulfil the QoS needs when transmitting the sensed data from the sensor field to the sink. Various data delivery models are used, such as continuous, query-driven, and event-driven BIB004 . Each model has its own QoS requirements. The basic QoS issues in WSNs are described below in detail BIB005 , BIB004 . Nearby nodes often sense the same event, so the generated data may be redundant; this causes energy wastage and should therefore be taken into account in QoS provisioning. Different applications require different QoS parameters and services. For multimedia or real-time applications, the QoS metrics are jitter, latency, and bandwidth; military applications have security as a QoS parameter; emergency and rescue applications have availability as a QoS parameter; and applications such as cluster communication in a meeting hall have energy as a QoS parameter. Unlike in traditional wired networks, the QoS requirements are further constrained by the limited resources of the nodes; buffer space, processing power, and battery charge are examples of such resource constraints BIB001 . QoS provisioning in individual layers depends on layer capability, so each layer has specific parameters that are used for performance evaluation and QoS assessment. Table 1 below lists these parameters for each layer BIB002 .
|
Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> c: MWTP <s> An important issue of research in wireless sensor networks (WSNs) is to dynamically organize the sensors into a wireless network and route the sensory data from sensors to a sink. Clustering in WSNs is an effective technique for prolonging the network lifetime. In most of the traditional routing in clustered WSNs assumes that there is no obstacle in a field of interest. Although it is not a realistic assumption, it eliminates the effects of obstacles in routing the sensory data. In this paper, we first propose a clustering technique in WSNs named energy-efficient homogeneous clustering that periodically selects the cluster heads according to a hybrid of their residual energy and a secondary parameter, such as the utility of the sensor to its neighbors. In this way, the selected cluster heads have equal number of neighbors and residual energy. We then present a route optimization technique in clustered WSNs among obstacles using Dijkstra's shortest path algorithm. We demonstrate that our work reduces the average hop count, packet delay, and energy-consumption of WSNs. <s> BIB001 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> c: MWTP <s> Energy efficiency is a crucial design issue in energy-constrained wireless networks to ensure uninterrupted information exchange. With the requirement of average end-to-end bit error rate (BER), the problem of joint routing and power allocation optimisation in multihop wireless network is studied here for two network power management policies: one is to minimise overall transmit power and the other is to maximise network lifetime. The proposed minimum total power strategy is optimal for minimising the overall transmit power consumption in the network and is preferable in network with tethered energy resources. To maximise the lifetime of a network with battery-operated nodes, the authors have developed two residual energy-aware joint routing and power allocation strategies: path lifetime maximisation (PLM) strategy and minimum weighted total power (MWTP) strategy. A distributed implementation for these strategies is also presented. Simulation results demonstrate that the proposed strategies achieve significant power saving and prolong network lifetime considerably over traditional routing algorithm with individual link BER constraint. It has also been shown that at reasonably low route refresh interval, the MWTP strategy performs best in terms of network lifetime, whereas at very high route refresh interval, the PLM strategy is optimal for network lifetime maximisation. <s> BIB002
|
In BIB001 , the organization of sensor nodes in the sensor field and the routing of sensed data towards the sink with minimum transmission delay are developed. The authors propose an energy-efficient homogeneous clustering algorithm that periodically selects the cluster heads, and they use Dijkstra's shortest path algorithm for routing. The scheme attains reduced packet delay and minimum energy consumption. Gupta and Bose BIB002 have addressed the energy efficiency issue: to save power and prolong the network lifetime, they developed two residual energy-aware joint routing and power allocation strategies, namely the path lifetime maximisation (PLM) strategy and the minimum weighted total power (MWTP) strategy.
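Since the route optimization in BIB001 rests on Dijkstra's shortest path algorithm over the cluster-head topology, a minimal sketch of that building block is given below; the toy topology, link costs, and node names are illustrative assumptions, not taken from the paper.

```python
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, link_cost), ...]}; returns cost-from-source map."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy cluster-head topology; edge weights could model delay or energy cost.
topology = {"CH1": [("CH2", 1.0), ("CH3", 4.0)],
            "CH2": [("CH3", 1.5), ("sink", 5.0)],
            "CH3": [("sink", 1.0)]}
print(dijkstra(topology, "CH1"))          # e.g. cost to 'sink' is 3.5 via CH2, CH3
```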
|
Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> a: RACOON <s> In this study, Random Contention-based Resource Allocation (RACOON) medium access control (MAC) protocol is proposed to support the quality of service (QoS) for multi-user mobile wireless body area networks (WBANs). Different from existing QoS designs that focus on a single WBAN, a multiuser WBAN QoS should further consider both inter-WBAN interference and inter-WBAN priorities. Similar problems have been studied in both overlapped wireless local area networks (WLANs) and Bluetooth piconets that need QoS supports. However, these solutions are designed for non-medical transmissions that do not consider any priority scheme for medical applications. Most importantly, these studies focus on only static or low mobility networks. Network mobility of WBANs will introduce unnecessary inter-network collisions and energy waste, which are not considered by these solutions. The proposed multiuser-QoS protocol, RACOON, simultaneously satisfies the inter WBAN QoS requirements and overcomes the performance degradation caused by WBAN mobility. Simulation results verify that RACOON provides better latency and energy control, as compared with WBAN QoS protocols without considering the inter-WBAN requirements. <s> BIB001 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> a: RACOON <s> A Sensor Equipped Aquatic (SEA) swarm is a sensor cloud that drifts with water currents and enables 4-D (space and time) monitoring of local underwater events such as contaminants, marine life, and intruders. The swarm is escorted on the surface by drifting sonobuoys that collect data from the underwater sensors via acoustic modems and report it in real time via radio to a monitoring center. The goal of this study is to design an efficient anycast routing algorithm for reliable underwater sensor event reporting to any surface sonobuoy. Major challenges are the ocean current and limited resources (bandwidth and energy). In this paper, these challenges are addressed, and HydroCast, which is a hydraulic-pressure-based anycast routing protocol that exploits the measured pressure levels to route data to the surface sonobuoys, is proposed. This paper makes the following contributions: a novel opportunistic routing mechanism to select the subset of forwarders that maximizes the greedy progress yet limits cochannel interference and an efficient underwater dead end recovery method that outperforms the recently proposed approaches. The proposed routing protocols are validated through extensive simulations. <s> BIB002
|
In BIB001 , the issues of network mobility and of a priority scheme for medical applications are addressed. Lee et al. BIB002 have proposed a protocol called HydroCast, a hydraulic-pressure-based anycast routing protocol. It takes node mobility into consideration, which improves the propagation delay performance and energy consumption, and it accounts for wireless channel quality to improve routing performance under continuous node movement.
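As a rough illustration of the pressure-based forwarding idea behind HydroCast, the sketch below selects a subset of neighbors with lower measured pressure (i.e., greater progress toward the surface). This is a simplified reading of the mechanism; the node structure, the progress metric, and the subset size k are our assumptions, not the protocol's exact rules.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    pressure: float  # measured hydraulic pressure (a proxy for depth)

def select_forwarders(sender, neighbors, k=2):
    """Pick up to k neighbors that make the greatest progress toward the surface."""
    progress = [(sender.pressure - n.pressure, n) for n in neighbors
                if n.pressure < sender.pressure]
    progress.sort(key=lambda pair: pair[0], reverse=True)
    return [n for _, n in progress[:k]]

s = Node("sensor", 40.0)
nbrs = [Node("a", 35.0), Node("b", 28.0), Node("c", 45.0)]
print([n.name for n in select_forwarders(s, nbrs)])   # ['b', 'a']; 'c' is deeper
```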
|
Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> d: LRAGR <s> The mobility and energy scarcity are two main challenges of efficient routing in mobile wireless sensor networks (WSNs). However, messages should be reliably transported to the Sink with low latency in many application scenarios. To accomplish this, a hierarchical routing scheme, Latency and Reliability-aware Geographic Routing (LRGR), is proposed. Firstly, the cluster is formed considering node mobility and residual energy to tackle with the dynamic network topology and constrained energy. To ensure inter-cluster routing, several key ingredients are developed, such as indirect communication amongst adjacent cluster heads using their common gateways, aggregate path metric based on the connectivity, geographical position, residual energy and sojourn time of adjacent cluster heads. Simulation results demonstrate that LRGR can enhance the network lifetime while has lower latency and packet loss ratio when compared to VIBE. <s> BIB001 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> d: LRAGR <s> QoS-aware routing in mobile ad hoc networks (MANETs) is a major challenge due to node mobility and scarcity of resources. QoS-aware routing based on ant colony optimization (ACO) algorithms is a promising approach to overcome these problems. However, as compared to MANETs, vehicular ad hoc networks (VANETs) face additional challenges due to rapid topology change, making the estimation or prediction of QoS parameters difficult or stale. VANETs require time-critical message delivery, as late delivery may result in endangering lives. Currently existing routing protocols usually require the exchange of additional control message between neighbor nodes to compute QoS parameters. This makes the routing protocol too slow to react to fast topology change and also does not consider network congestion when forwarding a data packet. To reduce the overhead introduced to collect information from neighbor nodes and to obtain an accurate estimate of QoS parameters, we use the simple network management protocol to estimate these values locally. This paper describes a new approach for calculating QoS parameter locally and avoiding congestion during data transmission. The simulations are implemented using the network simulator ns-3, and the results show that our approach is scalable and performs well in high mobility. <s> BIB002
|
Rao et al. BIB001 focus on cluster formation and mobility management. Their scheme consists of two phases: a clustering phase and a routing phase. It uses an energy-efficient neighbour discovery protocol (ENDP) at the MAC layer, and achieves low latency and high reliability. e: ACO Y. Dawood Al-Ani and Jochen Seitz discussed the QoS issue in routing under node mobility BIB002 . To address this issue, the authors designed a routing protocol based on ant colony optimization (ACO) algorithms. The ns-3 simulation results show that the proposed protocol performs well in high mobility.
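For readers unfamiliar with ACO-based route selection, the sketch below shows the generic pheromone bookkeeping such protocols rely on: probabilistic next-hop choice biased by pheromone, global evaporation, and reinforcement along discovered paths. The constants and names are illustrative and not taken from BIB002 .

```python
import random

pheromone = {}                            # (u, v) -> pheromone level on a link

def choose_next(u, neighbors, alpha=1.0):
    """Pick the next hop with probability proportional to pheromone^alpha."""
    weights = [pheromone.get((u, v), 1.0) ** alpha for v in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]

def reinforce(path, quality, rho=0.1):
    """Evaporate pheromone everywhere, then deposit along a discovered path."""
    for edge in list(pheromone):
        pheromone[edge] *= (1.0 - rho)
    for u, v in zip(path, path[1:]):
        pheromone[(u, v)] = pheromone.get((u, v), 1.0) + quality

# A path found by a forward ant; quality could reflect delay or hop count.
reinforce(["s", "a", "b", "sink"], quality=0.25)
print(choose_next("s", ["a", "c"]))       # 'a' is now more likely than 'c'
```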
|
Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> d: MER <s> This paper considers the problem of power-efficient distributed estimation of vector parameters related to localized phenomena so that both sensor selection and routing structure in a Wireless Sensor Network (WSN) are jointly optimized to obtain the best possible estimation performance at a given querying node, for a given total power budget. First, we formulate our problem as an optimization problem and show that it is an NP-Hard problem. Then, we design two algorithms: a Fixed-Tree Relaxation-Based Algorithm (FTRA) and a very efficient Iterative Distributed Algorithm (IDA) to optimize the sensor selection and routing structure. We also provide a lower bound for our optimization problem and show that our IDA provides a performance that is close to this bound, and it is substantially superior to the previous approaches presented in the literature. An important result from our work is the fact that because of the interplay between communication cost and estimation gain when fusing measurements from different sensors, the traditional Shortest Path Tree (SPT) routing structure, widely used in practice, is no longer optimal. To be specific, our routing structure provides a better trade-off between the overall power efficiency and estimation accuracy. Comparing to more conventional sensor selection and fixed routing algorithms, our proposed algorithms yield a significant amount of energy saving for the same estimation accuracy. <s> BIB001 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> d: MER <s> There is a rich recent literature on information-theoretically secure communication at the physical layer of wireless networks, where secret communication between a single transmitter and receiver has been studied extensively. In this paper, we consider how single-hop physical layer security techniques can be extended to multi-hop wireless networks. We show that guaranteed security can be achieved in multi-hop networks by augmenting physical layer security techniques, such as cooperative jamming, with the higher layer network mechanisms, such as routing. Specifically, we consider the secure minimum energy routing problem, in which the objective is to compute a minimum energy path between two network nodes subject to constraints on the end-to-end communication secrecy and goodput over the path. This problem is formulated as a constrained optimization of transmission power and link selection, which is proved to be NP-hard. Nevertheless, we show that efficient algorithms exist to compute both exact and approximate solutions for the problem. In particular, we develop an exact solution of pseudo-polynomial complexity, as well as an e-optimal approximation of polynomial complexity. Simulation results are also provided to show the utility of our algorithms and quantify their energy savings compared to a combination of (standard) security-agnostic minimum energy routing and physical layer security. In the simulated scenarios, we observe that, by jointly optimizing link selection at the network layer and cooperative jamming at the physical layer, our algorithms reduce the network energy consumption by half. <s> BIB002
|
In BIB002 , another algorithm is designed that considers both energy consumption and physical-layer security. For security, the authors use the cooperative jamming technique, and their reported simulations show that jointly optimizing link selection and jamming roughly halves network energy consumption compared to security-agnostic minimum energy routing. e: XLQACF Shah and Lozano BIB001 have designed two algorithms for minimizing energy consumption, namely a Fixed-Tree Relaxation-Based Algorithm (FTRA) and a very efficient Iterative Distributed Algorithm (IDA). These algorithms jointly optimize sensor selection and the routing structure, achieving significant energy savings for the same estimation accuracy.
|
Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> a: Indoor-LBS <s> Indoor location-based service (LBS) is generally distinguished from web services that have no physical location and user context. In particular, various resources have dynamic and frequent mobility in indoor environments. In addition, an indoor LBS includes numerous service lookups being requested concurrently and frequently from several locations, even through a network infrastructure requiring high scalability in indoor environments. The traditional centralized LBS approach needs to maintain a geographical map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor LBS platform with regional cooperation among devices. A service lookup algorithm based on the proposed distributed architecture searches for the shortest physical path to the nearest service resource. A continuous service binding mechanism guarantees a probabilistic real-time QoS regardless of dynamic and frequent mobility in a soft real-time system such as an indoor LBS. Performance evaluation of the proposed algorithm and platform is compared to the traditional centralized architecture in the experimental evaluation of scalability and real test bed environments. <s> BIB001 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> a: Indoor-LBS <s> MANET routing is critical and routing decision should be made sooner before the node leaves the network. Fast decisions always compensate network performance. In addition, most MANET routing protocols assume a friendly and cooperative environment, and hence are vulnerable to various attacks. Trust and Reputation would serve as a major solution to these problems. Learning the network characteristics and choosing right routing decisions at right times would be a significant solution. In this work, we have done an extensive survey of fault tolerant protocols and ant colony algorithms applied to routing in MANETs. We propose a QoS constrained fault tolerant ant lookahead routing algorithm which attempts to identify valid route and look-ahead route pairs which might help in choosing the alternate path in case of valid route failure. The results prove that the proposed algorithm takes better routing decisions with 20-30 percent improvement compared with existing ant colony algorithms. <s> BIB002
|
Jeong et al. BIB001 studied traditional centralized location-based services (LBS), which suffer from traffic congestion and low scalability. They then developed a self-organizing and fully distributed network architecture, called SoSP. The proposed indoor LBS platform is adapted to support reliable and efficient services to users or mobile devices in dynamic indoor environments. The SoSP network architecture is built around the SoSP router, which represents a unit space and contains four components, i.e., a device proxy, a resource manager, an SR-Manager, and service agents. A user can easily request any indoor LBS from the physical resources with a mobile device using its wireless communication; robots can likewise collaborate with other robots through the SoSP router. The platform uses an NSPQ-based service lookup and binding algorithm that searches for the shortest physical path to the nearest service resource. The proposed algorithm is compared to the traditional centralized architecture through an experimental evaluation of scalability in a real testbed environment. In the first experiment, scalability is tested: as the number of lookups increases, the proposed lookup engine with NSPQ remains highly efficient over time. Data transmission is also tested, and the proposed algorithm guarantees soft real-time QoS. It enhances scalability, decentralized fairness, and robustness. The indoor LBS platform achieves the following features: it requires no centralized knowledge, offers a good level of scalability, requires zero configuration, and provides a good level of personal privacy. The main shortcoming of this approach is that it does not consider the throughput, energy consumption, and delay parameters, and it is not well suited to small networks. Surendran and Prakash BIB002 have proposed a routing algorithm based on the ant colony algorithm. The proposed algorithm first learns the characteristics of the network and then selects the route. It is well suited for secure data transmission, and reliability is its main advantage.
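In the spirit of the NSPQ-based lookup described above, the following sketch finds the shortest physical path (by hop count) from a requesting router to the nearest service resource using breadth-first search. The router graph, unit-cost links, and names are illustrative assumptions rather than the platform's actual data structures.

```python
from collections import deque

def nearest_resource(links, resources, start):
    """BFS over the router graph; the first resource reached is the nearest."""
    seen, queue = {start}, deque([(start, [start])])
    while queue:
        node, path = queue.popleft()
        if node in resources:
            return path                   # shortest hop-count path to a resource
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None                           # no reachable resource

links = {"R1": ["R2", "R3"], "R2": ["R4"], "R3": ["R4"], "R4": []}
print(nearest_resource(links, resources={"R4"}, start="R1"))  # ['R1', 'R2', 'R4']
```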
|
Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> e: CNSMR <s> Multicast routing protocols improve the network performance by optimising the parameters such as bandwidth, channel utilisation and throughput rate. In wireless sensor network, the primary multicast routing protocol is geographic multicast routing. This study proposes core network supported multicast routing (CNSMR) protocol, a stateful-based distributed multicast routing protocol for sensor networks. The proposed protocol comprises of heterogeneous nodes such as cluster head (CH) nodes, core nodes (CNs) and sensor nodes (SNs). The distinct set of nodes known as CNs have computing, storage and energy resources more than the SNs. CH nodes and CNs form the core network, and CNs with core network and SNs form the core network supported multicast tree. SNs participate in multicast routing supported by the core network and thus save the node energy. Multicast routing in the proposed core network supported multicast trees balance the load in the network and improve the network performance as compared to the existing WSN multicast routing protocols. The proposed CNSMR protocol is compared with the existing WSN multicast routing protocols such as DCME-MR, Intelligent-MR, H-GMR and OnDemand-MR. Simulation results indicate improvements in delay latency, energy save ratio, throughput rate, end-to-end packet delay, multicast control overhead ratio and packet delivery ratio for the proposed protocol. <s> BIB001 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> e: CNSMR <s> Quality of Service (QoS) in Wireless Sensor Networks (WSNs) is a challenging area of research because of the limited availability of resources in WSNs. The resources in WSNs are processing power, memory, bandwidth, energy, communication capacity, etc. Delay is an important QoS parameter for delivery of delay sensitive data in a time constraint sensor network environment. In this paper, an extended version of a delay aware routing protocol for WSNs is presented along with its performance comparison with different deployment scenarios of sensor nodes, taking IEEE802.15.4 as the underlying MAC protocol. The performance evaluation of the protocol is done by simulation using ns-2 simulator. <s> BIB002 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> e: CNSMR <s> Abstract Opportunistic routing is a new paradigm in routing for wireless sensor network which chooses the node closest to the target node for forwarding the data. It uses the broadcasting nature of wireless sensor networks. Opportunistic routing has increased the efficiency, throughput and reliability of sensor networks. Many energy saving techniques has been introduced using opportunistic routing in wireless sensor networks for increasing the network lifetime. In this article we have elaborated the basic concept of Opportunistic routing, different areas in which it has been claimed to be beneficial, some protocols their metrics and their drawbacks. <s> BIB003 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> e: CNSMR <s> Wireless body area sensor network is a special purpose wireless sensor network that, employing wireless sensor nodes in, on, or around the human body, makes it possible to measure biological parameters of a person for specific applications. 
One of the most fundamental concerns in wireless body sensor networks is accurate routing in order to send data promptly and properly, and therefore overcome some of the challenges. Routing protocols for such networks are affected by a large number of factors including energy, topology, temperature, posture, the radio range of sensors, and appropriate quality of service in sensor nodes. Since energy is highly important in wireless body area sensor networks, and increasing the network lifetime results in benefiting greatly from sensor capabilities, improving routing performance with reduced energy consumption presents a major challenge. This paper aims to study wireless body area sensor networks and the related routing methods. It also presents a thorough, comprehensive review of routing methods in wireless body area sensor networks from the perspective of energy. Furthermore, different routing methods affecting the parameter of energy will be classified and compared according to their advantages and disadvantages. In this paper, fundamental concepts of wireless body area sensor networks are provided, and then the advantages and disadvantages of these networks are investigated. Since one of the most fundamental issues in wireless body sensor networks is to perform routing so as to transmit data precisely and promptly, we discuss the same issue. As a result, we propose a classification of the available relevant literature with respect to the key challenge of energy in the routing process. With this end in view, all important papers published between 2000 and 2015 are classified under eight categories including `Mobility-Aware', `Thermal-Aware', `Restriction of Location and Number of Relays', `Link-aware', `Cluster- and Tree-Based', `Cross-Layer', `Opportunistic', and `Medium Access Control'. We, then, provide a full description of the statistical analysis of each category in relation to all papers, current hybrid protocols, and the type of simulators used in each paper. Next, we analyze the distribution of papers in each category during various years. Moreover, for each category, the advantages and disadvantages as well as the number of issued papers in different years are given. We also analyze the type of layer and deployment of mathematical models or algorithmic techniques in each category. Finally, after introducing certain important protocols for each category, the goals, advantages, and disadvantages of the protocols are discussed and compared with each other. <s> BIB004
|
Maddali BIB001 has designed the core network supported multicast routing (CNSMR) protocol, which comprises heterogeneous nodes. It achieves high throughput, low latency, and good channel utilization in a multicast environment. Sarkar and Murugan surveyed routing protocols for wireless sensor networks and discussed them with respect to QoS parameters; the main weakness of this survey is that it does not show the pros and cons of each protocol. Bhuyan and Sarma BIB002 surveyed delay-aware QoS routing protocols for WSNs and checked their performance under grid and random deployments of the nodes; QoS parameters such as DDR and delay are analyzed using the NS-2 simulator. Jadhav and Satao BIB003 have discussed opportunistic routing (OPR) for WSNs, in which the node closest to the target node is selected for forwarding; it is most useful for opportunistic communication. However, this survey does not discuss the pros and cons of each protocol either. In BIB004 , the routing protocols for body area sensor networks are studied, and the authors discuss the strengths and weaknesses of each protocol.
|
Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> f: HOCA <s> Communication requirements for cognitive radio sensor networks (CRSN) necessitate addressing the problems posed by dynamic spectrum access (DSA) in an inherently resource-constrained sensor networks regime. In this paper, arising challenges for reliability and congestion control due to incorporation of cognitive radio capability into sensor networks are investigated along with the open research issues. Impact of DSA, i.e., activity of licensed users, intermittent spectrum sensing and spectrum handoff functionalities based on spectrum availability, on the performance of the existing transport protocols are inspected. The objective of this paper is to point out the urgent need for a novel reliability and congestion control mechanism for CRSN. To this end, CRSN challenges for transport layer are revealed and simulation experiments are performed to demonstrate the performance of the existing transport protocols in CRSN. <s> BIB001 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> f: HOCA <s> Swarm intelligence is a relatively novel field. It addresses the study of the collective behaviors of systems made by many components that coordinate using decentralized controls and self-organization. A large part of the research in swarm intelligence has focused on the reverse engineering and the adaptation of collective behaviors observed in natural systems with the aim of designing effective algorithms for distributed optimization. These algorithms, like their natural systems of inspiration, show the desirable properties of being adaptive, scalable, and robust. These are key properties in the context of network routing, and in particular of routing in wireless sensor networks. Therefore, in the last decade, a number of routing protocols for wireless sensor networks have been developed according to the principles of swarm intelligence, and, in particular, taking inspiration from the foraging behaviors of ant and bee colonies. In this paper, we provide an extensive survey of these protocols. We discuss the general principles of swarm intelligence and of its application to routing. We also introduce a novel taxonomy for routing protocols in wireless sensor networks and use it to classify the surveyed protocols. We conclude the paper with a critical analysis of the status of the field, pointing out a number of fundamental issues related to the (mis) use of scientific methodology and evaluation procedures, and we identify some future research directions. <s> BIB002 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> f: HOCA <s> Cognitive radio technology has been used to efficiently utilize the spectrum in wireless networks. Although many research studies have been done recently in the area of cognitive radio networks (CRNs), little effort has been made to propose a simulation framework for CRNs. In this paper, a simulation framework based on NS2 (CogNS) for cognitive radio networks is proposed. This framework can be used to investigate and evaluate the impact of lower layers, i.e., MAC and physical layer, on the transport and network layers protocols. Due to the importance of packet drop probability, end-to-end delay and throughput as QoS requirements in real-time reliable applications, these metrics are evaluated over CRNs through CogNS framework. 
Our simulations demonstrate that the design of new network and transport layer protocols over CRNs should be considered based on CR-related parameters such as activity model of primary users, sensing time and frequency. <s> BIB003 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> f: HOCA <s> This paper presents an optimization framework for a wireless sensor network whereby, in a given route, the optimal relay selection and power allocation are performed subject to signal-to-noise ratio constraints. The proposed approach determines whether a direct transmission is preferred for a given configuration of nodes, or a cooperative transmission. In the latter case, for each node, data transmission to the destination node is performed in two consecutive phases: broadcasting and relaying. The proposed strategy provides the best set of relays, the optimal broadcasting power and the optimal power values for the cooperative transmission phase. Once the minimum-energy transmission policy is obtained, the optimal routes from every node to a sink node are built-up using cooperative transmission blocks. We also present a low-complexity implementation approach of the proposed framework and provide an explicit solution to the optimization problem at hand by invoking the theory of multi-parametric programming. This technique provides the optimal solution as a function of measurable parameters in an off-line manner, and hence the on-line computational tasks are reduced to finding the parameters and evaluating simple functions. The proposed efficient approach has many potential applications in real-world problems and, to the best of the authors' knowledge, it has not been applied to communication problems before. Simulations are presented to demonstrate the efficacy of the approach. <s> BIB004 </s> Quality of Service of Routing Protocols in Wireless Sensor Networks: A Review <s> f: HOCA <s> Wireless sensor networks consist of a large number of small, low-power sensors that communicate through wireless links. Wireless sensor networks for healthcare have emerged in recent years as a result of the need to collect data about patients' physical, physiological, and vital signs in the spaces ranging from personal to hospital and availability of the low cost sensors that enables this data collection. One of the major challenges in these networks is to mitigate congestion. In healthcare applications, such as medical emergencies or monitoring vital signs of patients, because of the importance and criticality of transmitted data, it is essential to avoid congestion as much as possible (and in cases when congestion avoidance is not possible, to control the congestion). In this paper, a data centric congestion management protocol using AQM (Active Queue Managements) is proposed for healthcare applications with respect to the inherent characteristics of these applications. This study deals with end to end delay, energy consumption, lifetime and fairness. The proposed protocol which is called HOCA avoids congestion in the first step (routing phase) using multipath and QoS (Quality of Service) aware routing. And in cases where congestion cannot be avoided, it will be mitigated via an optimized congestion control algorithm. The efficiency of HOCA was evaluated using the OPNET simulator. Simulation results indicated that HOCA was able to achieve its goals. <s> BIB005
|
Rezaee et al. BIB005 addressed the issue of congestion in routing for medical applications. The authors propose a data-centric congestion management protocol, HOCA, using Active Queue Management (AQM); it uses multipath routing to avoid traffic congestion. In BIB003 , a simulation framework and an analytical model for sensor nodes based on a discrete-time Markov chain (DTMC) are introduced. In BIB001 , the existing transport protocols are discussed, and QoS parameters such as reliability are studied. In BIB002 , a survey of QoS routing protocols based on swarm intelligence (SI) is carried out; SI comprises intelligent techniques that can find energy-aware transmission paths. In BIB004 , the issue of signal-to-noise ratio constraints is discussed: optimal power allocation and route selection are the basic needs for optimization, and it is argued that cooperative transmission is suitable for optimal power allocation.
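As one concrete instance of the AQM idea underlying HOCA, the sketch below implements a RED-style drop decision in which the drop probability ramps up with the average queue length; the thresholds and maximum probability are illustrative, not HOCA's actual parameters.

```python
import random

def red_should_drop(avg_queue_len, min_th=5.0, max_th=15.0, max_p=0.1):
    """Probabilistically drop/mark arriving packets as the average queue grows."""
    if avg_queue_len < min_th:
        return False                 # no congestion: always enqueue
    if avg_queue_len >= max_th:
        return True                  # severe congestion: always drop
    # Drop probability ramps linearly between the two thresholds.
    p = max_p * (avg_queue_len - min_th) / (max_th - min_th)
    return random.random() < p

for q in (3, 8, 12, 20):
    print(q, red_should_drop(q))
```

Reacting early, while the average queue is still between the thresholds, is what lets AQM-based schemes signal congestion before buffers overflow, which matters for the delay-sensitive medical traffic targeted by HOCA.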
|
Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> INTRODUCTION <s> Recently, wireless sensor networks (WSNs) have been used in various smart grid applications, including remote power system monitoring and control, power fraud detection, wireless automatic metering, fault diagnostics, demand response, outage detection, overhead transmission line monitoring, load control, and distribution automation. However, harsh smart grid environment propagation characteristics cause great challenges in the reliability of WSN communications in smart grid applications. To this end, the analysis of wireless link reliability and channel characterizations can help network designers to foresee the performance of the deployed WSN for specific smart grid propagation environments, and guide the network engineers to make design decisions for the channel modulation, encoding schemes, output power, and frequency band. This paper presents a detailed analysis of low power wireless link reliability in different smart grid environments, such as 500kV outdoor substation environment, indoor main power control room, and underground network transformer vaults. Specifically, the proposed analysis aims to evaluate the impact of different sensor radio parameters, such as modulation, encoding, transmission power, packet size, as well as the channel propagation characteristics of different smart grid propagation environments on the performance of the deployed sensor network in smart grid. Overall, the main objective of this paper is to help network designers quantifying the impact of the smart grid propagation environment and sensor radio characteristics on low power wireless link reliability in harsh smart grid environments. <s> BIB001 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> INTRODUCTION <s> The concept of the Internet of Things is rapidly becoming a reality, with many applications being deployed within industrial and consumer sectors. At the ‘thing’ level—devices and inter-device network communication—the core technical building blocks are generally the same as those found in wireless sensor network implementations. For the Internet of Things to continue growing, we need more plentiful resources for building intelligent devices and sensor networks. Unfortunately, current commercial devices, e.g., sensor nodes and network gateways, tend to be expensive and proprietary, which presents a barrier to entry and arguably slows down further development. There are, however, an increasing number of open embedded platforms available and also a wide selection of off-the-shelf components that can quickly and easily be built into device and network gateway solutions. The question is whether these solutions measure up to built-for-purpose devices. In the paper, we provide a comparison of existing built-for-purpose devices against open source devices. For comparison, we have also designed and rapidly prototyped a sensor node based on off-the-shelf components. We show that these devices compare favorably to built-for-purpose devices in terms of performance, power and cost. 
Using open platforms and off-the-shelf components would allow more developers to build intelligent devices and sensor networks, which could result in a better overall development ecosystem, lower barriers to entry and rapid growth in the number of IoT applications. <s> BIB002 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> INTRODUCTION <s> Cyberphysical systems (CPSs) are perceived as the pivotal enabler for a new era of real-time Internetbased communication and collaboration among value-chain participants, e.g., devices, systems, organizations, and humans. The CPS utilization in industrial settings is expected to revolutionize the way enterprises conduct their business from a holistic viewpoint, i.e., from shop-floor to business interactions, from suppliers to customers, and from design to support across the whole product and service lifecycle. Industrial CPS (ICPSs) blur the fabric of cyber (including business) and physical worlds and kickstart an era of systemwide collaboration and information-driven interactions among all stakeholders of the value chain. Therefore, ICPSs are expected to empower the transformation of industry and business at large to a digital, adaptive, networked, and knowledge-based industry with significant long-term impact on the economy, society, environment, and citizens. <s> BIB003 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> INTRODUCTION <s> Abstract For an industry 4.0 environment, the management and offering of services, falls over the construction of a stable and reliable sensor and actuator infrastructure. Industry 4.0 is undergoing increase advancement, infrastructure availability and public acceptance, mainly boosted by the Interconnected Things. The public acceptance drives an increase on investments, carrying an insurgence of companies competing with each other to gain market. Although lessening costs, this insurgence has brought heterogeneous infrastructures and solutions availability, challenging services providers. Among the challenges the security and different technology solutions support are of the most importance. The scattering of solutions and software code have to be conveniently gathered to avoid weak-points, eventually, a menace to be explored by hackers. This paper is a contribution in order to embraces those challenges in a new architecture framework able of supporting the creation of solutions for Smart Grid and Smart Living services providers under the industry 4.0 paradigm. The architecture framework design offers security, simplicity of implementation and maintenance, and is resilient to failures or attacks and technologically independent. Field tests are reported in order to evaluate key aspects of the proposed architecture. <s> BIB004 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> INTRODUCTION <s> Networks of sensors and actuators in automated manufacturing processes are implemented using industrial fieldbuses, where automation units and supervisory systems are also connected to exchange operational information. 
In the context of the incoming fourth industrial revolution, called Industry 4.0, the management of legacy facilities is a paramount issue to deal with. This paper presents a solution to enhance the connectivity of a legacy Flexible Manufacturing System, which constitutes the first step in the adoption of the Industry 4.0 concept. Such a system includes the fieldbus PROcess FIeld BUS (PROFIBUS) around which sensors, actuators, and controllers are interconnected. In order to establish effective communication between the sensors and actuators network and a supervisory system, a hardware and software approach including Ethernet connectivity is implemented. This work is envisioned to contribute to the migration of legacy systems towards the challenging Industry 4.0 framework. The experimental results prove the proper operation of the FMS and the feasibility of the proposal. <s> BIB005
|
The digital transformation that is taking place in different technological domains derives from the penetration and expansion of the Information and Communication Technologies (ICTs) BIB005 . In the industrial environment, Industry 4.0 is a concept integrating industrial automation, data exchange, and modern manufacturing technologies . It is also commonly referred to as the fourth industrial revolution, stemming from an initiative of the German government, the Industrie 4.0 (Industrie 4.0 homepage). The Industry 4.0 era is envisioned to be implemented through the so-called Industrial Cyber-Physical Systems (ICPSs), which enable monitoring and control of industrial physical processes and bridge the cyber and physical worlds BIB003 . The Industry 4.0 paradigm involves various challenging frameworks like the aforementioned ICPSs, the Industrial Internet-of-Things (IIoT), Big Data, Cloud Computing, Smart Grids, and Smart Cities. In this context, open source technology has been receiving increasing attention in recent years from scientists and practitioners in a multitude of different domains. For instance, the number of devices within the IoT can be increased thanks to this type of technology BIB002 , and open source projects are key accelerators for the industry adoption of IoT . At the hardware level, according to Thames and Schaefer , open source hardware (and its associated open source software) is envisioned to lead to fast and incremental updates to hardware platforms in future manufacturing processes. There are various devices of this type, such as Raspberry Pi, BeagleBone, Phidget, Intel Edison and Arduino. The latter is an inexpensive single-board micro-controller (Arduino online) and is considered the flagship of open source hardware. In fact, it is a powerful tool to develop different applications in the arenas of data acquisition, automation and engineering in general . Concerning the power scenario, renewable energy sources are expected to play a vital role in the mitigation of greenhouse gas emissions and global warming. Moreover, their hybridization with hydrogen generation and consumption constitutes an important research field . In particular, Smart Grids (SGs) are the next generation of power grids, emerging as the digital transformation applied to the energy industry and constituting an important component of the Industry 4.0 paradigm BIB004 . SGs are defined as a modern electric power grid infrastructure for improved efficiency, reliability, and safety, with smooth integration of renewable and distributed energy sources through automated and distributed controls and modern communication and sensing technologies BIB001 . These power grids are a worthy domain in which to apply open source technology . This paper aims at providing a panoramic survey of recent scientific literature reporting the use of open source hardware, namely Arduino, in advanced technological scenarios, proving its validity for control and measurement purposes. Indeed, as a consequence of the benefits associated with open source technology, its inclusion in Research and Development (R&D) projects follows naturally. In this sense, Arduino is being incorporated within a project dealing with the deployment and operation of a Smart Micro-Grid and its digital replica. This will be further discussed in Section 4. The rest of the paper is organized as follows. The second section provides an overview of the main characteristics of the open source Arduino.
Section 3 surveys the literature dealing with Arduino in a number of advanced scenarios. The application of Arduino to data sensing and acquisition in the context of research on a Smart Micro-Grid is reported in the fourth section. Finally, the main conclusions of the work are drawn.
|
Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> OVERVIEW OF ARDUINO CHARACTERISTICS <s> With the increasing availability of affordable open-source embedded hardware platforms, the development of low-cost programmable devices for uncountable tasks has accelerated in recent years. In this sense, the large development community that is being created around popular platforms is also contributing to the construction of Internet of Things applications, which can ultimately support the maturation of the smart-cities era. Popular platforms such as Raspberry Pi, BeagleBoard and Arduino come as single-board open-source platforms that have enough computational power for different types of smart-city applications, while keeping affordable prices and encompassing many programming libraries and useful hardware extensions. As a result, smart-city solutions based on such platforms are becoming common and the surveying of recent research in this area can support a better understanding of this scenario, as presented in this article. Moreover, discussions about the continuous developments in these platforms can also indicate promising perspectives when using these boards as key elements to build smart cities. <s> BIB001
|
This section is devoted to briefly overviewing the most relevant features of the Arduino platform. Evidently, there is a great amount of information available on the Internet in this regard, following the principles of the open source philosophy. Arduino is essentially a micro-controller mounted on a board with the circuitry required to connect sensors and actuators in an easy manner. In other words, it is an embedded prototyping board designed for electronics projects that demand repeated execution of some tasks BIB001 . It must be noted that Arduino is not a microprocessor/computer like, for example, the Raspberry Pi; therefore, it does not have an embedded operating system. Arduino boards are based on micro-controllers manufactured by Atmel, mainly of the ATmega family. Arduino was originally designed and manufactured in Italy, in a project that started in 2005. The GNU General Public License (GPL) allows the manufacture of Arduino boards and software distribution by anyone. Some popular models are: Uno, Mega, Yun, Due, Nano, Duemilanove, Extreme and Lilypad, just to name a few. Hence, the developer is able to select the model that best fits the application to deploy. In BIB001 a detailed overview and comparison of different open source platforms, including Arduino, can be found. The expansion boards, called shields, provide a number of enhancements to the Arduino functionalities and resources. Some examples of shields are those devoted to data storage on Secure Digital (SD) cards, Global Positioning System (GPS) functionality, direct connection of sensors/actuators, etc. Regarding connectivity options, there are diverse shields to support both wired and wireless communication means. Some examples of wired links are RS-232, RS-485, and Ethernet. Available wireless means are Bluetooth, WiFi, ZigBee, Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), and Radio Frequency IDentification (RFID). Figure 1 shows an Arduino Mega and an Ethernet shield. Concerning the software, the open source Integrated Development Environment (IDE) is freely available to program and configure Arduino boards. The IDE uses a programming language based on a simplified version of C++. It runs on a computer to which the board must be connected via a Universal Serial Bus (USB) link. This software allows designing the code for Arduino as well as monitoring its operation through the serial port of the computer. It includes a number of built-in example programs to facilitate learning and application development. Additionally, some software packages widely used in scientific and industrial environments, like Matlab or LabVIEW, already include communication options to exchange data with Arduino boards. For instance, the LabVIEW Interface for Arduino (LIFA) toolkit enables data sharing between a LabVIEW virtual instrument and an Arduino board through a USB connection. There also exist web pages devoted to storing, visualizing and analysing data gathered by Arduino boards, like thingspeak.com, facilitating and promoting the integration of these boards with cloud and IoT resources. Among the advantages of Arduino, the most relevant ones are: its open source nature (schematics, code and documentation related to Arduino and to the associated shields are available on the Internet) and its low-cost components.
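To make the programming model tangible, the following minimal sketch illustrates the setup()/loop() structure enforced by the IDE and the serial monitoring mentioned above. It is only a sketch under assumptions: the analogue sensor on pin A0 and the 9600 baud rate are illustrative choices, not details taken from any surveyed work.

// Minimal Arduino sketch: the IDE's C++-style language structures every
// program around setup() (run once at reset) and loop() (run repeatedly).
const int sensorPin = A0;          // hypothetical analogue sensor input

void setup() {
  Serial.begin(9600);              // open the USB serial link monitored from the IDE
}

void loop() {
  int raw = analogRead(sensorPin);       // 10-bit reading, 0..1023
  float volts = raw * 5.0 / 1023.0;      // assumes the default 5 V reference
  Serial.print("A0 [V]: ");
  Serial.println(volts, 3);              // displayed in the IDE serial monitor
  delay(1000);                           // one sample per second
}

Monitoring this stream from the IDE serial monitor is the simplest form of data acquisition; packages such as LabVIEW or Matlab exchange data with the board over the same USB serial link.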
|
Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Humanoid robotics is a field of a great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for further development and study of human motion and walking. A human suit is built, consisting of 8 sensors, 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors is wirelessly transmitted via two ZigBee RF configurable modules installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent falling down while executing different actions involving knees flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using Kalman. In addition, a two input fuzzy algorithm controlling five servo motors regulates the robot balance. The humanoid robot is controlled by a medium capacity processor and a low computational cost is achieved for executing the different algorithms. Both hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system. <s> BIB001 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Abstract A new data logger using the Arduino open-source electronic platform was developed to solve the current problem of monitoring photovoltaic (PV) systems at low-cost, especially in remote areas or regions in developing countries. The data logger meets all of the relevant requirements in terms of accuracy included in the International Electrotechnical Commission (IEC) standards for PV systems, with a resolution of 18-bits, including 8 analogue inputs for measuring up to 8 PV modules and/or weather sensors, 3 inputs for low-cost analogue temperature sensors and virtually unlimited inputs for digital temperature sensors. The new data logger is completely autonomous, and the prototype has achieved an initial cost of only 60 €. It was tested during a 6-month period under the harsh environmental conditions of the summer and winter in Southern Spain. The results using both the sensors and silicon reference cells indicate that the new system is reliable and exhibits comparable performance to commercial systems. This data logger is of special interest for both solar energy research and applications in developing countries, as it is both open-source and flexible. The data logger can be customised for the specific needs of each project at low-cost. The details of the specific design and its implementation are described. <s> BIB002 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. 
<s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Abstract This paper presents a layered Smart Grid architecture enhancing security and reliability, having the ability to act in order to maintain and correct infrastructure components without affecting the client service. The architecture presented is based in the core of well design software engineering, standing upon standards developed over the years. The layered Smart Grid offers a base tool to ease new standards and energy policies implementation. The ZigBee technology implementation test methodology for the Smart Grid is presented, and provides field tests using ZigBee technology to control the new Smart Grid architecture approach. <s> BIB003 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> This paper develops an energy management system with integration of smart meters for electricity consumers in a smart grid context. The integration of two types of smart meters (SM) are developed: (i) consumer owned SM and (ii) distributor owned SM. The consumer owned SM runs over a wireless platform–ZigBee protocol and the distributor owned SM uses the wired environment–ModBus protocol. The SM are connected to a SCADA system (Supervisory Control And Data Acquisition) that supervises a network of Programmable Logic Controllers (PLC). The SCADA system/PLC network integrates different types of information coming from several technologies present in modern buildings. ::: ::: The developed control strategy implements a hierarchical cascade controller where inner loops are performed by local PLCs, and the outer loop is managed by a centralized SCADA system, which interacts with the entire local PLC network. ::: ::: In order to implement advanced controllers, a communication channel was developed to allow the communication between the SCADA system and the MATLAB software. <s> BIB004 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Low Cost Automation promotes cost effective reference architectures and development approaches to increase flexibility and efficiency of production operations. This has led to the adoption of open networking standards for plant floor communications. OPC-UA may help industrial companies to become Industry 4.0 or Smart Manufacturing as it enables remote access to plant information, achieving thus horizontal and vertical integration. The main goal of this work is to make vertical integration a reality by means of a low-cost CPPS architecture that provide access to process data. The use of this architecture along the whole production automation system may certainly reduce the Total Cost of Ownership (TCO). The paper describes both the hardware platform as well as the software including the proposed configuration file of the OPC-UA server. <s> BIB005 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. 
<s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Recent advancement in the field of electrical technology and cyber system (mainly based on the internet) has brought the key towards accepting and challenging all the issues regarding load management, demand side management, energy management, and efficient allocation of energy, transparency and ease of use of data management and security. This research paper focuses on a Cyber Physical Power System (CPPS), which deals with a Microgrid involving several distributed energy resources that is concerned with controlling the sources and loads by making a proper management between supply and demand, security, robustness and resiliency. A cloud-based centralized control system has also been introduced that provides the infrastructure to offload the heavy computational tasks to be completed by VM, which results in flexibility, predictability and mobility of the system. This research study focuses an ongoing project of Lamar Renewable Energy Microgrid Lab. <s> BIB006 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Abstract The developments on machine-to-machine systems are interesting for control education, not only for the opportunities to apply control and automation solutions to new problems, but also for the availability of new hardware, software and communication platforms. These technologies facilitate a low-cost and easier integration of physical equipment in educational tools such as the remote laboratories. This paper proposes the use of a lightweight protocol for communication with resource-constrained devices, MQTT, as an aid to integrate new devices in educational applications, specifically in those that use web standards such as Javascript to provide interactive user interfaces. To evaluate this approach, an educational application focused on the control of a DC motor position loop, built with EjsS, was developed. This tool uses the MQTT protocol to parametrize and communicate with an Arduino microcontroller that, in turn, controls a physical setup implemented with the Feedback MS150 modular system. The proposed approach enables the easy connection of interactive educational tools to new real equipment, especially those driven by resource-constrained devices. <s> BIB007 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> The protection of Critical Infrastructures represents a fundamental topic of interest for the modern societies. The testing in real scenarios is hard to achieve due to the severe interdependence between these critical systems and the human well-being. The paper is devoted to introduce a low-cost testbed developed to emulate a Critical Infrastructure as complex cyber-physical system. Moreover, it proposes an approach to identify cyber threats exploiting both cyber and physical behavior of the system. Specifically, common tools from model based fault diagnosis and intrusion detection are applied to determine incipient threats and support for the operator in decision making. All the hardware and software developed are implemented in an open-source fashion in order to share investigation resources with the scientific community. 
<s> BIB008 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> The pervasive presence of interconnected objects enables new communication paradigms where devices can easily reach each other while interacting within their environment. The so-called Internet of Things (IoT) represents the integration of several computing and communications systems aiming at facilitating the interaction between these devices. Arduino is one of the most popular platforms used to prototype new IoT devices due to its open, flexible and easy-to-use architecture. Arduino Yun is a dual board microcontroller that supports a Linux distribution and it is currently one of the most versatile and powerful Arduino systems. This feature positions Arduino Yun as a popular platform for developers, but it also introduces unique infection vectors from the security viewpoint. In this work, we present a security analysis of Arduino Yun. We show that Arduino Yun is vulnerable to a number of attacks and we implement a proof of concept capable of exploiting some of them. <s> BIB009 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Abstract This paper is a formal overview of standards and patents for Internet of Things (IoT) as a key enabler for the next generation advanced manufacturing, referred to as Industry 4.0 (I 4.0). IoT at the fundamental level is a means of connecting physical objects to the Internet as a ubiquitous network that enables objects to collect and exchange information. The manufacturing industry is seeking versatile manufacturing service provisions to overcome shortened product life cycles, increased labor costs, and fluctuating customer needs for competitive marketplaces. This paper depicts a systematic approach to review IoT technology standards and patents. The thorough analysis and overview include the essential standard landscape and the patent landscape based on the governing standards organizations for America, Europe and China where most global manufacturing bases are located. The literature of emerging IoT standards from the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the Guobiao standards (GB), and global patents issued in US, Europe, China and World Intellectual Property Organization (WIPO) are systematically presented in this study. <s> BIB010 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Abstract Small- and medium-sized manufacturers, as well as large original equipment manufacturers (OEMs), have faced an increasing need for the development of intelligent manufacturing machines with affordable sensing technologies and data-driven intelligence. Existing monitoring systems and prognostics approaches are not capable of collecting the large volumes of real-time data or building large-scale predictive models that are essential to achieving significant advances in cyber-manufacturing.
The objective of this paper is to introduce a new computational framework that enables remote real-time sensing, monitoring, and scalable high performance computing for diagnosis and prognosis. This framework utilizes wireless sensor networks, cloud computing, and machine learning. A proof-of-concept prototype is developed to demonstrate how the framework can enable manufacturers to monitor machine health conditions and generate predictive analytics. Experimental results are provided to demonstrate capabilities and utility of the framework such as how vibrations and energy consumption of pumps in a power plant and CNC machines in a factory floor can be monitored using a wireless sensor network. In addition, a machine learning algorithm, implemented on a public cloud, is used to predict tool wear in milling operations. <s> BIB011 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Abstract Recent technological developments have fueled a shift toward the computerization and automation of factories; i.e., Industry 4.0. Unfortunately, many small- and medium-sized factories cannot afford the sensor-embedded machines, cloud system, or high-performance computers required for Industry 4.0. Furthermore, the simple production processes in smaller factories do not require the level of precision found in large factories. In this study, we explored the idea of using inexpensive add-on triaxial sensors for the monitoring of machinery. We developed a dimensionality reduction method with low computational overhead to extract key information from the collected data as well as a neural network to enable automatic analysis of the obtained data. Finally, we performed an experiment at an actual spring factory to demonstrate the validity of the proposed algorithm. The system outlined in this work is meant to bring Industry 4.0 implementations within grasp of small to medium sized factories, by eliminating the need for sensors-embedded machines and high-performance computers. <s> BIB012 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Cyber-physical systems (CPS) are key enabling technologies for the fourth industrial revolution, referred to as Industrie 4.0 or Industry 4.0. The Reference Architecture Model Industrie 4.0 (RAMI4.0) has recently been standardized and OPC Unified Architecture (OPC UA) is listed as the sole recommendation for implementation of a communication layer. Many automation and control systems offer already implementations of OPC UA but no satisfying implementation of OPC UA was found for Arduino, a popular platform for engineering physical computing systems. This paper presents open source integration and application of a customizable OPC UA server on an Arduino Yun board using open62541, an open source and free implementation of OPC UA. The Arduino board discussed in this paper offers hot-end closed-loop temperature control for a 3D printer but the temperature set value and control parameters can be manipulated and requested via OPC UA using OPC UA clients. The application is verified using Prosys OPC UA Client and UaExpert. 
The results of our research can be used for developing open source cyber-physical systems without specialized knowledge in microcontroller programming, bringing Industry 4.0 applications into classrooms without effort. <s> BIB013 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> The objective of this paper is to develop Arduino-based multi-agent system (MAS) for advanced distributed energy management of a solar-wind micro-grid. High penetration of renewable energy resources needs new coordination and control approaches to meet the stochastic nature of the environment and dynamic loadings. We use multi-agent system for advanced distributed, autonomous energy management of micro-grid to dynamically and flexibly adapt to the changes in the environment as renewable energy resources are intermittent in nature. We consider that a micro-grid which contains two systems each contains solar photo voltaic (PV) system, wind generator system, local consumer, and a battery. We develop a simulation model using Java Agent Development Environment (JADE) in Eclipse IDE for dynamic energy management, which considers the intermittent nature of solar power, randomness of load, dynamic pricing of grid, and variation of critical loads, and choose the best possible action every hour to stabilize and optimize the micro-grid. Furthermore, environment variables are sensed through Arduino Mega micro-controller and given to agents of MAS. The agents take the strategic action, and the resulting actions are reflected in the LED outputs which can be readily deployed in the actual field. MAS increases responsiveness, stability, flexibility, and fault tolerance, thereby increasing operational efficiency and leading to economic and environmental optimization. All the smart grid features are tested using JADE simulations and practically verified through Arduino micro-controller to make micro-grid into smart micro-grid. <s> BIB014 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Abstract The race to achieve smart cities is producing a continuous effort to adapt new developments and knowledge, for administrations and citizens. Information and Communications Technology are called on to be one of the key players to get these cities to use smart devices and sensors (Internet of Things) to know at every moment what is happening within the city, in order to make decisions that will improve the management of resources. The proliferation of these “smart things” is producing significant deployment of networks in the city context. Most of these devices are proprietary solutions, which do not offer free access to the data they provide. Therefore, this prevents the interoperability and compatibility of these solutions in the current smart city developments. This paper presents how to embed an open sensorized platform for both hardware and software in the context of a smart city, more specifically in a university campus. For this integration, GIScience comes into play, where it offers different open standards that allow full control over “smart things” as an agile and interoperable way to achieve this. 
To test our system, we have deployed a network of different sensorized platforms inside the university campus, in order to monitor environmental phenomena. <s> BIB015 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Abstract This paper focuses on the Fallback Control System (FCS), which is an emergency response method of networked Industrial Control System (ICS) as a countermeasure for cyber-attacks. The FCS is disposed on not networked controllers but controlled objects. After some incidents happen, the FCS isolates the controlled objects from networked controllers and controls the objects safely and locally. This ICS operation switching is one-way from normal one to fallback one and the recovery switching from the fallback one to the normal one still remains open. This is because there is a possibility of cyber-attacks aiming the reconnection of the controlled objects with the network controllers. Motivated by this, this paper proposes a Fallback and Recovery Control System (FRCS) by adding a safety recovery switching to the FCS. Maintaining the fallback control of the controlled object, the virtual operation mode of FRCS connects the networked controller with the virtual controlled object (Plant Simulator). The FRCS evaluates the ICS soundness from the responses between the controller and the virtual object and then reconnects the controller with the actual one. The ICS soundness evaluation is based on the discrete-event system observer. This paper verifies the validity of the proposed recovery switching via a practical experiment. <s> BIB016 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> The cyber-physical system is a core issue of Industrie 4.0. One of the main tasks is to create a digital twin with acquired data from a physical system. In this study, a data acquisition system was constructed for the heterogeneous machines using the sensor-based I/O module. Three different types of heterogeneous machines on the shop floor were considered. The results of the study may be applied to other types of machines. In the end, the application of monitoring machine operation status was conducted. The data were archived into MySQL database for the further application. <s> BIB017 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Abstract This paper describes a novel approach to build a modular and adaptable information platform for Chalmers Smart Industry Hub. The platform utilizes the IoT paradigm i.e. decentralized and event-driven architecture, to interconnect production modules such as an assembly system, ERP, analytics, etc. Real life industrial problems are realized as industrial demonstrators that can utilize one or several production modules to exemplify specific use cases. <s> BIB018 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. 
<s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Abstract The explosion of the interest in the industry 4.0 generated a hype on both academia and business: the former is attracted for the opportunities given by the emergence of such a new field, the latter is pulled by incentives and national investment plans. The Industry 4.0 technological field is not new but it is highly heterogeneous (actually it is the aggregation point of more than 30 different fields of the technology). For this reason, many stakeholders feel uncomfortable since they do not master the whole set of technologies, they manifested a lack of knowledge and problems of communication with other domains. Actually such problem is twofold, on one side a common vocabulary that helps domain experts to have a mutual understanding is missing Riel et al. [1], on the other side, an overall standardization effort would be beneficial to integrate existing terminologies in a reference architecture for the Industry 4.0 paradigm Smit et al. [2]. One of the basics for solving this issue is the creation of shared semantic for industry 4.0. The paper has an intermediate goal and focuses on the development of an enriched dictionary of Industry 4.0 enabling technologies, with definitions and links between them in order to help the user in actively surfing the new domains by starting from known elements to reach the most far away from his/her background and knowledge. <s> BIB019 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Abstract The fourth industrial revolution, also called Industry 4.0, is a new industrial age that have been gaining force and new followers around the world. The Industry 4.0 can be understood as the implementation of the smart factory to provide smart products and services that meet the consumer individual needs. Given its increasing acceptance and repercussion, a reference architecture model for Industry 4.0 (RAMI 4.0) was developed based on vertical integration, horizontal integration and end-to-end engineering. However, RAMI 4.0 initiative requires efforts in different aspects to reach the level of practical implementation. In this sense, this paper aims to present a layered architecture based on RAMI 4.0 to discover equipment to process operations according to the product requirements. The architecture must provide components for a communication between machines and products, and a service that offer a mechanism similar to the domain name system (DNS) to search the equipment to process the operation. In this architecture the equipment are storage in a structure organized hierarchically to assist the search service. The functionalities of the proposed architecture are conceptually modeled using production flow schema (PFS) and their dynamic behaviors are verified and validated by Petri net (PN) models. The architecture is applied in a modular production system to evaluate RAMI 4.0 as a guide for the development of architectures for Industry 4.0. <s> BIB020 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. 
<s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Abstract Photovoltaic system is widely installed to increase the share of renewable energy as well as to reduce the environmental impact of fossil fuel based energy. Photovoltaic (PV) is one of the most potential renewable energy based power generation systems. Monitoring of PV system is very important to send information that allows owners to maintain, operate and control these systems to reduce maintenance costs and to avoid unwanted electric power disruptions. Different monitoring systems have been introduced with the time following different requirements. Circuit complexity, availability of friendly graphical user interface, easy to understand system architecture, maintenance facility and customization ability for end user differ from system to system along with cost issues. This paper provides an overview of architectures and features of various PV monitoring systems based on different methods. There are various technologies for PV monitoring and control, developed as for commercial use or research tasks. It has been seen that a large portion of the work is done on classifications, for example, Internet based Monitoring using Servers, TCP/IP, GPRS and so forth. There are various methodologies for data acquisition, for example, PLC (Power Line Communication), PIC, Reference cell, National Instruments etc. Various requirements are considered while selecting a proper monitoring system for an application. Review of various monitoring technologies with system attributes and working structures have been discussed to get a clear view of merits and demerits of existing PV monitoring systems. All the systems discussed in this paper have pros and cons, and these systems were developed following different requirements. In the end, a particular cost effective monitoring system using Arduino microcontroller has been proposed considering both research and user level requirements from perspectives of cost, availability of parts/modules and features, compatibility with sensors and end-devices etc. <s> BIB021 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> The open-source hardware movement is becoming increasingly popular due to the emergence of successful low-cost technologies, such as Arduino and Raspberry Pi, and thanks to the community of makers that actively share their creations to be freely studied, modified, and re-distributed. Numerous authors have proposed distinct ways to seize this approach for accomplishing a variety of learning goals: enabling scholars to explore scientific concepts, promoting students’ creativity, helping them to be more fluent and expressive with new technologies, and so on. This paper reports a systematic mapping study that overviews the literature on open-source hardware in education by analyzing and classifying 676 publications. The results of our work provide: 1) guidance on the published material (identifying the most relevant papers, publication sources, institutions, and countries); 2) information about the pedagogical uses of open-source hardware (showing its main educational goals, stages, and topics where it is principally applied); and 3) directions for future research. <s> BIB022 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. 
Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Virtual reality (VR) offers a unique experience to interact with imaginary items or features by simulating a user's physical presence in a virtual environment. Recently, VR services that achieve a higher level of realism using advanced VR equipment have been attracting public attention. However, studies for interworking physical devices with digital objects in a virtual environment are still insufficient. In this paper, we propose a virtual twinning system which can provide a user-centered eidetic IoT service in a VR environment by linking physical things to virtual objects. <s> BIB023 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Abstract Digital twins are digital representations of physical products or systems that consist of multiple models from various domains describing them on multiple scales. By means of communication, digital twins change and evolve together with their physical counterparts throughout their lifecycle. Domain-specific partial models that make up the digital twin, such as the CAD model or the degradation model, are usually well known and provide accurate descriptions of certain parts of the physical asset. However, in complex systems, the value of integrating the partial models increases because it facilitates the study of their complex behaviours which only emerge from the interactions between various parts of the system. The paper proposes that the partial models of the digital twin share a common model space that integrates them through a definition of their interrelations and acts as a bridge between the digital twin and the physical asset. The approach is illustrated in a case of a mechatronic product - a differential drive mobile robot developed as a testbed for digital twin research. It is demonstrated how the integrated models add value to different stages of the lifecycle, allowing for evaluation of performance in the design stage and real-time reflection with the physical asset during its operation. <s> BIB024 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> With the increasing availability of affordable open-source embedded hardware platforms, the development of low-cost programmable devices for uncountable tasks has accelerated in recent years. In this sense, the large development community that is being created around popular platforms is also contributing to the construction of Internet of Things applications, which can ultimately support the maturation of the smart-cities era. Popular platforms such as Raspberry Pi, BeagleBoard and Arduino come as single-board open-source platforms that have enough computational power for different types of smart-city applications, while keeping affordable prices and encompassing many programming libraries and useful hardware extensions. As a result, smart-city solutions based on such platforms are becoming common and the surveying of recent research in this area can support a better understanding of this scenario, as presented in this article.
Moreover, discussions about the continuous developments in these platforms can also indicate promising perspectives when using these boards as key elements to build smart cities. <s> BIB025 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> LITERATURE SURVEY ABOUT ARDUINO IN ADVANCED SCENARIOS <s> Abstract In this work, a new low-cost and high-performance system for cells voltage monitoring and degradation studies in air-cooled polymer electrolyte fuel cells has been designed, built and validated in the laboratory under experimental conditions. This system allows monitoring in real time the cells’ voltages, the stack current and temperature in fuel cells made of up to 100 cells. The developed system consists of an acquisition system, which complies with all the recommendations and features necessary to perform degradation tests. It is a scalable configuration with a low number of components and great simplicity. Additionally, the cell voltage monitoring (CVM) system offers high rate of accuracy and high reliability and low cost in comparison with other commercial systems. In the same way, looking for an "All-in-One" solution, the acquisition hardware is accompanied by a software tool based on the "plug and play" philosophy. It allows in a simple way obtaining information from the cells and performing a degradation analysis based on the study of the polarisation curve. The different options and tools included in the CVM system permit, in a very intuitive and graphical way, detecting and quantifying the cell degradation without the need of isolating the stack from the system. Experimental tests carried out on the system validate its performance and show the great applicability of the system in cases where cell faults detection and degradation analysis are required. <s> BIB026
|
In this section, among the ever-increasing literature dealing with Arduino-based developments, recent publications devoted to advanced trends like Industry 4.0, cyber-physical approaches and so forth are reviewed in order to illustrate the importance and suitability of Arduino. In industrial environments, diverse paradigms are involved, like Industry 4.0, ICPSs, or cyber-manufacturing; accordingly, Arduino boards have been widely reported as part of these scenarios. To begin with, it must be noted that Arduino has been identified as an enabling technology for Industry 4.0 and smart manufacturing by different publications BIB010 BIB018 BIB019 . In BIB020 an architecture for Industry 4.0-enabled factories is developed, where Arduino chips are used in a TCP/IP network. A fog computing framework for process monitoring and prognostics in cyber-manufacturing systems is proposed in BIB011 , measuring the vibrations of rotating machinery through Arduino. Another case of Arduino usage for machine status prediction in the Industry 4.0 era is found in BIB012 . Examples of Arduino utilization for ICPSs have been reported in BIB005 BIB013 . Regarding robotics, interesting works combining robots and Arduino can be found in BIB001 . Concerning facilities integrating Renewable Energy Sources (RES), a number of publications report successful applications of Arduino. For instance, it has been used for data acquisition and monitoring of hydrogen fuel cells in BIB026 , of photovoltaic systems in BIB002 BIB021 , for weather sensing, or as part of simulation frameworks . A special mention is devoted to Smart Grids, where Arduino devices have been used to perform measurement/sensing tasks BIB003 BIB004 BIB006 BIB014 . Scenarios closely related to Smart Grids are Smart Cities and Smart Buildings. In this context, Arduino has been pointed out as an enabling technology for developments in Smart Cities (Costa and Duran-Faundez, 2018) and used for the deployment of sensors in BIB015 . Regarding Smart Buildings, Arduino has been reported as a means for smart energy metering in (Viciana et al., 2018). The advance of ICTs has enabled the development of systems that are remotely accessed and managed through the network. An important example of this trend is represented by remote laboratories, where a user can visualize and/or operate a physically distant facility. A number of publications address the utilization of Arduino boards to implement this type of laboratory, either oriented to engineering education BIB007 BIB022 or for general purposes . Cyber-security is of the utmost importance in advanced hyper-connected setups, from modern manufacturing facilities to smart cities, passing through critical infrastructures like power plants. In this sense, Arduino chips have been used to study cyber-security issues in industrial control systems in BIB008 BIB009 BIB016 . In the context of the so-called digital replicas (virtual representations of physical assets), Arduino has been reported as part of the physical counterpart to perform measurement of different magnitudes in BIB017 BIB023 BIB024 . In order to illustrate the existing literature dealing with Arduino utilization in advanced frameworks, Table 1 summarizes the abovementioned publications.
Table 1: Summary of surveyed publications on Arduino in advanced scenarios.
Industry 4.0, ICPSs and robotics: BIB001 BIB005 BIB012 BIB013 BIB010 BIB011 BIB018 BIB019 BIB020
RES and Smart Grids: BIB003 BIB002 BIB004 BIB006 BIB021 Vivas et al., 2018
Smart Cities: BIB015 BIB025 Viciana et al., 2018
Remote laboratories: BIB007 BIB022
Cyber-security: BIB009 BIB008 BIB016
Digital replica: BIB017 BIB023 BIB024
In view of the surveyed publications, Arduino has proven to be a versatile tool, very valuable even for challenging scenarios.
|
Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> APPLICATION OF ARDUINO IN R&D PROJECT ABOUT SMART MICRO-GRID <s> Abstract This work presents a smart microgrid consisting of diesel, photovoltaic (PV), and battery storage plants. One of the key features of smart grid is to provide a redundant high quality power for the consumers. In islanded microgrid, the under frequency and/or voltage collapse, caused by power deficiency, can lead to power outage. The current practice is to shed the load demand until the frequency and voltage are restored. However, the redundancy in supplying power has no meaning as long as the loads are shed. The main objective of this paper is to propose a power management system (PMS) that protects the microgrid against the load shedding. PMS is able to control the microgrid in both centralized and decentralized fashions. To prevent under frequency load shedding (UFLS), this work proposes using battery energy storage system (BESS) to compensate for the power mismatch in the islanded microgrid. A method is presented to estimate the rate of change of frequency and to calculate the power deficiency. The approximated value is exploited as the set-point to dispatch BESS. PV and battery plants are supposed to share the reactive power demand proportionally and thus regulate the voltage at the load bus. This work also suggests two outer control loops, namely, frequency restoration loop (FRL) and difference angle compensator (DAC). These loops ensure microgrid smooth transition from islanded mode to grid-connected mode. The microgrid is configured to investigate the effective utilization of existing solar PV plant connected to distribution network in Sabah Malaysia. The microgrid is implemented in PSCAD software and tested under different scenarios. The microgrid with PMS shows operational stability and improvements in comparison with the original system. The results indicate that PMS can effectively control the microgrid in all operating modes. <s> BIB001 </s> Survey about the Utilization of Open Source Arduino for Control and Measurement Systems in Advanced Scenarios. Application to Smart Micro-Grid and Its Digital Replica. <s> APPLICATION OF ARDUINO IN R&D PROJECT ABOUT SMART MICRO-GRID <s> Due to the advantage of avoiding upstream disturbance and voltage fluctuation from a power transmission system, Islanded Micro-Grids (IMG) have attracted much attention. In this paper, we first propose a novel self-sufficient Cyber-Physical System (CPS) supported by Internet of Things (IoT) techniques, namely “archipelago micro-grid (MG)”, which integrates the power grid and sensor networks to make the grid operation effective and is comprised of multiple MGs while disconnected with the utility grid. The Electric Vehicles (EVs) are used to replace a portion of Conventional Vehicles (CVs) to reduce CO 2 emission and operation cost. Nonetheless, the intermittent nature and uncertainty of Renewable Energy Sources (RESs) remain a challenging issue in managing energy resources in the system. To address these issues, we formalize the optimal EV penetration problem as a two-stage Stochastic Optimal Penetration (SOP) model, which aims to minimize the emission and operation cost in the system.
Uncertainties coming from RESs (e.g., wind, solar, and load demand) are considered in the stochastic model and random parameters to represent those uncertainties are captured by the Monte Carlo-based method. To enable the reasonable deployment of EVs in each MGs, we develop two scheduling schemes, namely Unlimited Coordinated Scheme (UCS) and Limited Coordinated Scheme (LCS), respectively. An extensive simulation study based on a modified 9 bus system with three MGs has been carried out to show the effectiveness of our proposed schemes. The evaluation data indicates that our proposed strategy can reduce both the environmental pollution created by CO 2 emissions and operation costs in UCS and LCS. <s> BIB002
|
The present work is framed within a research project to implement a Smart Micro-Grid (SMG) integrating renewable energy sources with hydrogen and to develop its digital replica. SMGs can be defined as small-scale SGs which can be autonomous or grid-tied BIB001 . SMGs integrate physical elements in the power grid and cyber elements (sensor networks, communication networks, and a computation core) to make the power grid operation effective BIB002 . The SMG of the aforementioned project combines photovoltaic energy and hydrogen generation/consumption to act as a self-sufficient, eco-friendly energy system. A set of monocrystalline photovoltaic modules composes the Photovoltaic Subsystem (PVS). A Polymer Electrolyte Membrane Hydrogen Generator (PEM-HG) and a Polymer Electrolyte Membrane Hydrogen Fuel Cell (PEM-HFC) perform the generation and consumption of hydrogen, respectively. The hydrogen is stored in a metal hydride tank, whereas an electrochemical battery hosts the electrical flows, playing the role of the DC bus. Finally, DC and AC loads complete the micro-grid. A schematic diagram of the SMG is shown in Figure 3. An Automation and Monitoring System (AMS) carries out the management and surveillance of the energy flows and interactions between the nodes of the SMG. A Programmable Logic Controller (PLC) and a Supervisory Control and Data Acquisition (SCADA) system compose the AMS, together with an Arduino board and a number of sensors (temperature, irradiance, current, voltage, etc.). The implemented energy control strategy aims to supply the loads and to produce hydrogen when a surplus of solar energy is available. To build the digital replica of the SMG, massive data gathering is required, so Arduino boards are considered a valuable tool to implement cost-effective data acquisition equipment. Therefore, Arduino is being used to retrieve data considered non-critical for the automation/control tasks, namely environmental magnitudes like temperature and relative humidity. In the initial stage, it is being tested to measure the temperature of one of the photovoltaic modules through low-cost LM35 sensors. In a preliminary stage, the retrieved data were validated through comparison with those provided by a Pt-100 probe placed in the same module. In particular, an Arduino MEGA 2560 has been chosen. It is based on the ATmega2560 micro-controller and features 54 digital I/O pins as well as 16 analogue inputs. An Ethernet shield provides Ethernet connectivity in order to share the sensor measurements with the monitoring system. Such a system is based on the LabVIEW package from National Instruments and is responsible for gathering, processing and representing the operational data of the SMG. The structure of the AMS is depicted in Figure 4.
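As an illustration of this data acquisition scheme, the sketch below reads an LM35 (which outputs 10 mV per °C) on an analogue input of the MEGA 2560 and exposes the temperature through the Ethernet shield using the standard Arduino Ethernet library. It is a minimal sketch under stated assumptions: the pin assignment, the MAC/IP addresses and the plain-text response format are illustrative choices, not the actual configuration of the project's AMS.

#include <SPI.h>
#include <Ethernet.h>

// Illustrative network parameters; a real deployment would use its own.
byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0x01 };
IPAddress ip(192, 168, 1, 50);
EthernetServer server(80);

const int lm35Pin = A0;   // assumed analogue input wired to the LM35

float readCelsius() {
  // 10-bit ADC with the default 5 V reference; the LM35 gives 10 mV/degC,
  // so degC = ADC * (5000 mV / 1023) / (10 mV per degC).
  return analogRead(lm35Pin) * (5000.0 / 1023.0) / 10.0;
}

void setup() {
  Ethernet.begin(mac, ip);   // bring up the Ethernet shield
  server.begin();            // start listening for the monitoring system
}

void loop() {
  EthernetClient client = server.available();  // incoming connection?
  if (client) {
    client.println(readCelsius(), 1);          // reply with one decimal, plain text
    delay(10);                                 // give the reply time to go out
    client.stop();
  }
}

On the monitoring side, a LabVIEW virtual instrument (or any plain TCP client) can poll this address periodically and log the returned values alongside the Pt-100 reference readings used for validation.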
|
A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> 1.0 INTRODUCTION <s> Discussion boards and online forums are important platforms for people to share information. Users post questions or problems onto discussion boards and rely on others to provide possible solutions and such question-related content sometimes even dominates the whole discussion board. However, to retrieve this kind of information automatically and effectively is still a non-trivial task. In addition, the existence of other types of information (e.g., announcements, plans, elaborations, etc.) makes it difficult to assume that every thread in a discussion board is about a question. We consider the problems of identifying question-related threads and their potential answers as classification tasks. Experimental results across multiple datasets demonstrate that our method can significantly improve the performance in both question detection and answer finding subtasks. We also do a careful comparison of how different types of features contribute to the final result and show that non-content features play a key role in improving overall performance. Finally, we show that a ranking scheme based on our classification approach can yield much better performance than prior published methods. <s> BIB001 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> 1.0 INTRODUCTION <s> When people involve with software more in their daily lives, software companies must provide services through handling various operating questions that users request in the forums. However, different from conventional software companies, various types of difficulties, propositions and opinions could be issued by open source software users in addition to the operating questions. These difficulties, propositions and opinions are generally referred as questions in the forums. The questions, as valuable knowledge of the open source project, should be systematically managed. To manage the questions, a common strategy is to construct a FAQ in open source projects. The FAQ can reduce the volume of similar questions in the forums and prevent active forum members from wasting time on answering questions which are already handled before. Most previous literature focuses on existing FAQ retrieval instead of finding and constructing FAQ. This study, as a pioneering work, proposes a configurable and semi-automatic FAQ finding process to assist forum managers in constructing the FAQ. Also, two case studies are conducted to evaluate the effectiveness of the proposed FAQ finding process. <s> BIB002 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> 1.0 INTRODUCTION <s> Online discussion forums have become a popular medium for users to discuss with and seek information from other users having similar interests. A typical discussion thread consists of a sequence of posts posted by multiple users. All the posts in a thread are not equally useful and serve a different purpose providing different types of information (some posts contain questions, some answers, etc.). Identifying the purpose and nature of each post in a discussion thread is an interesting research problem as it can help in improving information extraction and intelligent assistance techniques [9]. We study the problem of classifying a given post as per its purpose in the discussion thread. 
We employ features based on the post’s content, structure of the thread, behavior of the participating users and sentiment analysis of post’s content. We achieve decent classification performance and also analyze the relative importance of different features used for the post classification task. <s> BIB003
|
Internet forum is a web application that is becoming more and more popular. Its popularity may be attributed to the fact that it provides customer support for business enterprises that use it. Both technical and less technical issues are discussed in forums. A forum brings together experts from all walks of life. Members of a forum can make their contributions from the comfort of their homes, without geographical or time zone barriers. Forums have both hierarchical and conversational structures. The hierarchical structure has to do with sub-forums emanating from the main forum, depending on the broadness of the category. For example, a computer technology forum can have hardware and software as sub-forums. The hardware sub-forum may in turn have motherboards, input devices and output devices as sub-forums. The conversational structure takes place within a sub-forum. A sub-forum is made up of threads. A thread is the minimal topical unit that addresses a specific topic. A thread is usually initiated by an author's post (usually called the initial post), which constitutes the topic of discussion. Members who are interested in the topic send reply posts. Figure 1 shows the structure of an Internet forum. Interaction within the forum community naturally follows a question and answer scenario. It has been empirically confirmed that 90% of 40 forums investigated contain question-answer pairs. Question-answer pairs are valuable across various domains: business enterprises that sell on the Internet need to provide customer call-centres to address customers' queries, and mined question-answer pairs can be archived to serve this purpose. This will not only reduce the cost of operating call centres but also enhance response time. Benefits of question-answer pairs are examined in BIB001 BIB003 . Some of the challenges hindering effective mining of question-answer pairs are: lexical chasm, informal tone and unfocused topic mining. In this paper, we carry out an extensive overview of these three challenges that limit the potential of mining knowledge from Internet forums. Different approaches that researchers consider in overcoming them are explored, along with actions that have been taken so far to resolve them. We also proffer suggestions that can further assist in addressing the problems. Mining the human-generated content of forums is non-trivial due to its nature. The huge amount of responses and the variations of response context lead to problems of efficient knowledge accumulation and retrieval BIB002 . Table 1 shows different forums serving different purposes, with the volume of human-generated content they contain. Research activities in this domain focus on how to use the human-generated content reported in column 3 under the heading "Statistics" for the benefit of mankind. A good number of research activities are going on in the forum domain. Some of these research activities include retrieving relevant forum threads, clustering forum threads, finding similar threads, evaluating thread quality and mining question-answer pairs. Another type of discussion board that is becoming popular is the Community Question Answering (CQA) site. Some good examples of CQA are Yahoo! Answers, Stackoverflow, and Baidu, a popular Chinese CQA. CQA sites render purely question answering services, which are similar to those of the Internet forum. However, CQAs are highly restrictive: a number of CQAs welcome purely objective contributions that do not call for too much debate from members.
Members who wish to seek subjective opinions may have to turn to Internet forums. A number of commercial question answering services, such as telephone answering systems, chat bots, Speaktoit, etc., are systems that benefit directly from automatic mining of QA pairs from CQA sites and Internet forums. These systems are products of Artificial Intelligence (AI). It should also be noted that AI researchers use the mined QA pairs to conduct Machine Learning (ML) training and testing while producing these systems. Many other uses of QA pairs can be found in the literature. It is on this premise that we decided to survey some of the issues that hinder effective mining of these QA pairs.
|
A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> 2.0 LEXICAL CHASM IN MINING QA PAIRS <s> In this paper we present the results of a quantitative evaluation of the discrepancies between the Italian ::: and English lexica in terms of lexical gaps. This evaluation has been carried out in the context of ::: MultiWordNet, an ongoing project that aims at building a multilingual lexical database. The quantitative ::: evaluation of the English-to-Italian lexical gaps shows that the English and Italian lexica are highly ::: comparable and gives empirical support to the MultiWordNet model. <s> BIB001 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> 2.0 LEXICAL CHASM IN MINING QA PAIRS <s> We describe the architecture of the AskMSR question answering system and systematically evaluate contributions of different system components to accuracy. The system differs from most question answering systems in its dependency on data redundancy rather than sophisticated linguistic analyses of either questions or candidate answers. Because a wrong answer is often worse than no answer, we also explore strategies for predicting when the question answering system is likely to give an incorrect answer. <s> BIB002 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> 2.0 LEXICAL CHASM IN MINING QA PAIRS <s> Monolingual translation probabilities have recently been introduced in retrieval models to solve the lexical gap problem. They can be obtained by training statistical translation models on parallel monolingual corpora, such as question-answer pairs, where answers act as the "source" language and questions as the "target" language. In this paper, we propose to use as a parallel training dataset the definitions and glosses provided for the same term by different lexical semantic resources. We compare monolingual translation models built from lexical semantic resources with two other kinds of datasets: manually-tagged question reformulations and question-answer pairs. We also show that the monolingual translation probabilities obtained (i) are comparable to traditional semantic relatedness measures and (ii) significantly improve the results over the query likelihood and the vector-space model for answer finding. <s> BIB003 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> 2.0 LEXICAL CHASM IN MINING QA PAIRS <s> Abstract Detecting answers in the threads is an essential task for the online forum oriented question-answer (QA) pair mining. In the forum threads, there normally exist implicit discussion structures with the valuable indicating information for the answer detecting models to locate the best answers. This paper proposes a thread segmentation based answer detecting approach: a forum thread is reorganized into several segments, and a group of features reflecting the discussion structures are extracted based on the segmentation results. Utilizing the segment information, a strategy is put forward to find the best answers. By evaluating the candidate answers in different types of segments with different models, the strategy filters the samples that mislead the decision. The experimental results show that our approach is promising for mining the QA resource in the online forums. <s> BIB004
|
Lexical chasm, also known as lexical gap, is one of the issues hindering effective mining of knowledge from forums BIB004 BIB002 . A lexical chasm occurs whenever a language expresses a concept with a lexical unit whereas the other language expresses the same concept with a free combination of words BIB001 . The lexical gap problem can be attributed to different ways of writing that call for the use of polysemy (the same word with different meanings, such as "book" in the following examples: "The book is on the table" and "I will book my flight tomorrow"), synonymy (different words with the same or similar meanings, such as "agree" and "approve" in "I agree with his going to London" and "I approve his going to London") and the use of paraphrasing. The problem is more severe when retrieving shorter documents, such as in sentence, question and answer retrieval from QA archives BIB003 . Human-generated posts in web forums usually have very short content, with far fewer sentences than web pages. The implication of this is that useful models for similarity computation, such as cosine similarity, Kullback-Leibler (KL) divergence and even query likelihood, which have yielded useful results in information retrieval, become less powerful when faced with forum content. The short content also cannot provide enough semantic or logical information for deep language processing . In a forum's question-answer detection system, it is difficult to expect a great match between the lexical content of a question and its corresponding answer. In fact, there is often very little similarity between the tokens in a question and those appearing in its answer. For example, a good answer to the question "Which hotel in Skudai is pet friendly?" might be "No Man's Land at Sri Pulai". The two statements have no tokens in common. At times the answer provided may even be a single word; for example, the answer to the question "Where can I get a good clipper to buy?" can be given simply as "Jusco". The relevance models stated above use common tokens to establish similarity; hence, they fail to yield good results in forums. The vocabularies for questions and answers are the same, but the probability distributions over those vocabularies differ between questions and their answers. This vocabulary mismatch and non-linkage between query and response vocabularies is often referred to as a lexical chasm. The problem between queries and documents, or questions and answers, has been identified as common to both information retrieval and question answering BIB003 . It is even more pronounced in question answering because of the prevailing data sparseness in the domain. Bridging the lexical chasm between questions and their answers requires techniques that move from the lexical level toward the semantic level. The lexical chasm problem has made it difficult to establish a good similarity between question and answer posts. As a result, researchers have had to find alternative approaches to relevance modelling for finding answers in forum threads. Some of these approaches, together with relevant suggestions, are given in the next section.
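To make the token-overlap problem concrete, the short sketch below computes a bag-of-words cosine similarity for the hotel example from the text; the helper name cosine_sim is our own. With no shared tokens the score is exactly zero, even though the second sentence is a perfectly good answer.

```python
import math
from collections import Counter

def cosine_sim(text_a, text_b):
    """Bag-of-words cosine similarity over lowercased tokens."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)  # weight from shared tokens only
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

question = "Which hotel in Skudai is pet friendly?"
answer = "No Man's Land at Sri Pulai"
print(cosine_sim(question, answer))  # 0.0 -- no tokens in common
```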
|
A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Query Expansion <s> Language Modeling (LM) has been successfully applied to Information Retrieval (IR). However, most of the existing LM approaches only rely on term occurrences in documents, queries and document collections. In traditional unigram based models, terms (or words) are usually considered to be independent. In some recent studies, dependence models have been proposed to incorporate term relationships into LM, so that links can be created between words in the same sentence, and term relationships (e.g. synonymy) can be used to expand the document model. In this study, we further extend this family of dependence models in the following two ways: (1) Term relationships are used to expand query model instead of document model, so that query expansion process can be naturally implemented; (2) We exploit more sophisticated inferential relationships extracted with Information Flow (IF). Information flow relationships are not simply pairwise term relationships as those used in previous studies, but are between a set of terms and another term. They allow for context-dependent query expansion. Our experiments conducted on TREC collections show that we can obtain large and significant improvements with our approach. This study shows that LM is an appropriate framework to implement effective query expansion. <s> BIB001 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Query Expansion <s> We present an approach to query expansion in answer retrieval that uses Statistical Machine Translation (SMT) techniques to bridge the lexical gap between questions and answers. SMT-based query expansion is done by i) using a full-sentence paraphraser to introduce synonyms in context of the entire query, and ii) by translating query terms into answer terms using a full-sentence SMT model trained on question-answer pairs. We evaluate these global, context-aware query expansion techniques on tfidf retrieval from 10 million question-answer pairs extracted from FAQ pages. Experimental results show that SMTbased expansion improves retrieval performance over local expansion and over retrieval without expansion. <s> BIB002 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Query Expansion <s> Lexical gaps between queries and questions (documents) have been a major issue in question retrieval on large online question and answer (Q&A) collections. Previous studies address the issue by implicitly expanding queries with the help of translation models pre-constructed using statistical techniques. However, since it is possible for unimportant words (e.g., non-topical words, common words) to be included in the translation models, a lack of noise control on the models can cause degradation of retrieval performance. This paper investigates a number of empirical methods for eliminating unimportant words in order to construct compact translation models for retrieval purposes. Experiments conducted on a real world Q&A collection show that substantial improvements in retrieval performance can be achieved by using compact translation models. 
<s> BIB003 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Query Expansion <s> The relative ineffectiveness of information retrieval systems is largely caused by the inaccuracy with which a query formed by a few keywords models the actual user information need. One well known method to overcome this limitation is automatic query expansion (AQE), whereby the user’s original query is augmented by new features with a similar meaning. AQE has a long history in the information retrieval community but it is only in the last years that it has reached a level of scientific and experimental maturity, especially in laboratory settings such as TREC. This survey presents a unified view of a large number of recent approaches to AQE that leverage various data sources and employ very different principles and techniques. The following questions are addressed. Why is query expansion so important to improve search effectiveness? What are the main steps involved in the design and implementation of an AQE component? What approaches to AQE are available and how do they compare? Which issues must still be resolved before AQE becomes a standard component of large operational information retrieval systems (e.g., search engines)? <s> BIB004
|
In mining QA pairs from forums, the query question is usually composed from relevant tokens with some of the context dropped. This scenario is a contributory factor to the problem of lexical chasm. For this reason, there has been much interest in query expansion techniques BIB001 BIB002 BIB003 . The basic query expansion technique involves adding words to the query; the added words are likely to be synonyms of, or words somehow related to, those in the original query. The techniques used in query expansion can be classified as: i) getting synonyms of words by searching for them; ii) determining the various morphological forms of words by stemming the words in the search query; iii) correcting spelling errors automatically by searching for the corrected form; iv) re-weighting the terms in the original query BIB004 . A more focused expansion can be generated using a training set of question-answer pairs. All it requires is to learn a mapping between words in the query (that is, the question) and their corresponding responses (such as smoking→cigarette, why→because, URL→website and MS→Microsoft). These words are added to the original query to produce a representation that better reflects the underlying information need.
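As a rough illustration of technique (i) above, the sketch below expands a query with WordNet synonyms. It is a minimal sketch, assuming NLTK with the WordNet corpus installed; the function name expand_query and the cap of three synonyms per word are our own choices, not a method from the surveyed papers.

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def expand_query(query, max_syns=3):
    """Augment each query word with up to max_syns WordNet synonyms."""
    expanded = []
    for word in query.lower().split():
        expanded.append(word)
        # collect lemma names from every sense of the word
        syns = {lemma.name().replace('_', ' ')
                for synset in wn.synsets(word)
                for lemma in synset.lemmas()} - {word}
        expanded.extend(sorted(syns)[:max_syns])
    return expanded

print(expand_query("buy clipper"))
# e.g. ['buy', 'bargain', 'bribe', ..., 'clipper', 'clipper ship', ...]
```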
|
A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Machine Translation <s> Lexical gaps between queries and questions (documents) have been a major issue in question retrieval on large online question and answer (Q&A) collections. Previous studies address the issue by implicitly expanding queries with the help of translation models pre-constructed using statistical techniques. However, since it is possible for unimportant words (e.g., non-topical words, common words) to be included in the translation models, a lack of noise control on the models can cause degradation of retrieval performance. This paper investigates a number of empirical methods for eliminating unimportant words in order to construct compact translation models for retrieval purposes. Experiments conducted on a real world Q&A collection show that substantial improvements in retrieval performance can be achieved by using compact translation models. <s> BIB001 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Machine Translation <s> We propose a new probabilistic approach to information retrieval based upon the ideas and methods of statistical machine translation. The central ingredient in this approach is a statistical model of how a user might distill or "translate" a given document into a query. To assess the relevance of a document to a user's query, we estimate the probability that the query would have been generated as a translation of the document, and factor in the user's general preferences in the form of a prior distribution over documents. We propose a simple, well motivated model of the document-to-query translation process, and describe an algorithm for learning the parameters of this model in an unsupervised manner from a collection of documents. As we show, one can view this approach as a generalization and justification of the "language modeling" strategy recently proposed by Ponte and Croft. In a series of experiments on TREC data, a simple translation-based retrieval system performs well in comparison to conventional retrieval techniques. This prototype system only begins to tap the full potential of translation-based retrieval. <s> BIB002
|
The basic language modelling structure for retrieval establishes the similarity between a query Q and a document D as the probability that the document language model M_D built from D generates Q:

P(Q|D) = P(Q|M_D)   (1)

Query words are often considered to occur independently in a particular document language model; as such, the query likelihood is calculated as:

P(Q|M_D) = \prod_{q \in Q} P(q|M_D)   (2)

where q is a query word. The probability is usually calculated using maximum likelihood estimation BIB001 . It should be noted that this basic language model structure does not address the lexical gap between queries and questions. Information retrieval was viewed by BIB002 as statistical document-query translation, and as such translation models were added to map query words to document words. The translation-based retrieval model obtained by extending equation (2) above is:

P(Q|M_D) = \prod_{q \in Q} \sum_{w \in D} T(q|w) P(w|M_D)   (3)

where w represents a document word. The translation probability T(q|w) fundamentally represents the level of association between query word q and document word w, captured using different machine translation settings BIB001 . Judging from a traditional information retrieval perspective, the use of translation models produces an implicit query expansion effect, since query words that are not found in a document are mapped to associated words in the document. A positive impact can only be made by these translation-based retrieval models if the pre-constructed translation models have consistent translation probability distributions.
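As a rough illustration of equation (3), the following is a minimal sketch of translation-based scoring over a toy translation table; the table entries, the absence of smoothing, and the word lists are illustrative assumptions, not parameters or data from BIB002.

```python
from collections import Counter

# Toy translation table T(q|w): association between a query word q and a
# document word w. In a real system these probabilities are learned from
# parallel question-answer pairs with a statistical translation model.
T = {
    ("hotel", "hotel"): 0.9,
    ("hotel", "accommodation"): 0.4,
    ("pet", "dog"): 0.5,
}

def translation_score(query, document):
    """Translation-based query likelihood, equation (3), without smoothing."""
    doc_counts = Counter(document)
    doc_len = len(document)
    score = 1.0
    for q in query:
        # sum over document words: T(q|w) * P(w|M_D), where P(w|M_D) is
        # estimated by maximum likelihood as count(w) / |D|
        score *= sum(T.get((q, w), 0.0) * c / doc_len
                     for w, c in doc_counts.items())
    return score

# Non-zero score despite zero token overlap: the translation table
# bridges the lexical gap between query and document vocabulary.
print(translation_score(["hotel", "pet"], ["dog", "friendly", "accommodation"]))
```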
|
A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Non-Lexical Features <s> New types of document collections are being developed by various web services. The service providers keep track of non-textual features such as click counts. In this paper, we present a framework to use non-textual features to predict the quality of documents. We also show our quality measure can be successfully incorporated into the language modeling-based retrieval model. We test our approach on a collection of question and answer pairs gathered from a community based question answering service where people ask and answer questions. Experimental results using our quality measure show a significant improvement over our baseline. <s> BIB001 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Non-Lexical Features <s> Discussion boards and online forums are important platforms for people to share information. Users post questions or problems onto discussion boards and rely on others to provide possible solutions and such question-related content sometimes even dominates the whole discussion board. However, to retrieve this kind of information automatically and effectively is still a non-trivial task. In addition, the existence of other types of information (e.g., announcements, plans, elaborations, etc.) makes it difficult to assume that every thread in a discussion board is about a question. We consider the problems of identifying question-related threads and their potential answers as classification tasks. Experimental results across multiple datasets demonstrate that our method can significantly improve the performance in both question detection and answer finding subtasks. We also do a careful comparison of how different types of features contribute to the final result and show that non-content features play a key role in improving overall performance. Finally, we show that a ranking scheme based on our classification approach can yield much better performance than prior published methods. <s> BIB002 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Non-Lexical Features <s> Frequently Asked Questions (FAQ)'s tag is becoming more popular on websites. Research activities have been concentrated on its retrieval rather than construction. FAQ construction can be achieved using a number of sources. Presently, it is mostly done manually by help desk staff and this tends to make it static in nature. In this paper, a comprehensive review of various components that can guarantee effective mining of FAQ from forum threads is presented. The components encompass pre-processing, mining of questions, mining of answers and mining of the FAQ. Besides the general idea and concept, we discuss the strengths and limitations of the various techniques used in these components. In fact, the following questions are addressed in the review. What kind of pre-processing technique is needed for mining FAQ from forum? What are the recent techniques for mining questions from forum threads? What approaches are currently dominating answer retrieval from forum threads? How can we cluster out FAQ from question and answer database?. <s> BIB003
|
A more prevalent approach to tackling lexical gaps in web forum question answering is to avoid relying on lexical data. Non-lexical features are at times referred to as structural features. Forum metadata such as authorship, answer length, normalized position of a post, etc. are used in determining questions and answers. In BIB002 , the total number of posts and authorship were used to mine questions with reasonable performance. A host of these features, with detailed descriptions, for mining questions and answers is contained in BIB003 . A major problem with non-lexical features is their availability: some non-lexical features used by some forums may not be found in others. The degree of availability of some non-lexical features across forums can be found in . It is worth noting that a combination of both lexical and non-lexical features is desirable for effective mining of question-answer pairs from forums. Lexical features measure the degree of relevance between a question and an answer, while non-lexical features can be used to estimate the quality of answers BIB001 .
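By way of illustration, the sketch below extracts a mixed feature vector for a candidate answer post, combining a simple lexical relevance score with a few of the structural features mentioned above. The post field names ('text', 'author') are assumptions for the example; in practice such vectors would be fed to a supervised classifier.

```python
def jaccard(a, b):
    """Token-overlap (Jaccard) similarity as a simple lexical feature."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def answer_features(question, candidate, thread_posts):
    """Mixed lexical + non-lexical feature vector for answer detection.

    Posts are dicts with hypothetical 'text' and 'author' fields; real
    forums expose different metadata, which is why availability varies.
    """
    position = thread_posts.index(candidate)
    return [
        jaccard(question["text"], candidate["text"]),     # lexical relevance
        len(candidate["text"].split()),                   # answer length
        position / max(len(thread_posts) - 1, 1),         # normalized position
        int(candidate["author"] != question["author"]),   # different author?
    ]
```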
|
A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> 3.0 CASUAL LANGUAGE <s> Often, in the real world noise is ubiquitous in text communications. Text produced by processing signals intended for human use are often noisy for automated computer processing. Automatic speech recognition, optical character recognition and machine translation all introduce processing noise. Also digital text produced in informal settings such as online chat, SMS, emails, message boards, newsgroups, blogs, wikis and web pages contain considerable noise. In this paper, we present a survey of the existing measures for noise in text. We also cover application areas that ingest this noisy text for various tasks like Information Retrieval and Information Extraction. <s> BIB001 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> 3.0 CASUAL LANGUAGE <s> A webforum is a large database of community knowledge, with information of the most recent events and developments. Unfortunately this knowledge is presented in a format easily understood by humans but not automatically by machines. However, from observing several forums for a long time it seems obvious that there are several distinct types of postings and relations between them. ::: ::: One often occurring and very annoying relation between two contributions is the near-duplicate relation. In this paper we propose a work to detect and utilize contribution relations, concentrating on near-duplication. We propose ideas on how to calculate similarity, build groups of similar threads and thus make near-duplicates in forums evident. One of the core theses is, that it is possible to apply information from forum and thread structure to improve existing near-duplicate detection approaches. In addition, the proposed work shows the qualitative and quantitative results of applying such principles, thereby finding out which features are really useful in the near-duplicate detection process. Also proposed are several sample applications, which benefit from forum near-duplicate detection. <s> BIB002 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> 3.0 CASUAL LANGUAGE <s> Abstract The rapid expansion in user-generated content on the Web of the 2000s, characterized by social media, has led to Web content featuring somewhat less standardized language than the Web of the 1990s. User creativity and individuality of language creates problems on two levels. The first is that social media text is often unsuitable as data for Natural Language Processing tasks such as Machine Translation, Information Retrieval and Opinion Mining, due to the irregularity of the language featured. The second is that non-native speakers of English, older Internet users and non-members of the “in-group” often find such texts difficult to understand. This paper discusses problems involved in automatically normalizing social media English, various applications for its use, and our progress thus far in a rule-based approach to the issue. Particularly, we evaluate the performance of two leading open source spell checkers on data taken from the microblogging service Twitter, and measure the extent to which their accuracy is improved by pre-processing with our system. We also present our database rules and classification system, results of evaluation experiments, and plans for expansion of the project. 
<s> BIB003 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> 3.0 CASUAL LANGUAGE <s> Automated clustering of threads within and across web forums will greatly benefit both users and forum administrators in efficiently seeking, managing, and integrating the huge volume of content being generated. While clustering has been studied for other types of data, little work has been done on clustering forum threads; the informal nature and special structure of forum data make it interesting to study how to effectively cluster forum threads. In this paper, we apply three state of the art clustering methods (i.e., hierarchical agglomerative clustering, k-Means, and probabilistic latent semantic analysis) to cluster forum threads and study how to leverage the structure of threads to improve clustering accuracy. We propose three different methods for assigning weights to the posts in a forum thread to achieve more accurate representation of a thread. We evaluate all the methods on data collected from three different Linux forums for both within-forum and across-forum clustering. Our results show that the state of the art methods perform reasonably well for this task, but the performance can be further improved by exploiting thread structures. In particular, a parabolic weighting method that assigns higher weights for both beginning posts and end posts of a thread is shown to consistently outperform a standard clustering method. <s> BIB004 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> 3.0 CASUAL LANGUAGE <s> Frequently Asked Questions (FAQ)'s tag is becoming more popular on websites. Research activities have been concentrated on its retrieval rather than construction. FAQ construction can be achieved using a number of sources. Presently, it is mostly done manually by help desk staff and this tends to make it static in nature. In this paper, a comprehensive review of various components that can guarantee effective mining of FAQ from forum threads is presented. The components encompass pre-processing, mining of questions, mining of answers and mining of the FAQ. Besides the general idea and concept, we discuss the strengths and limitations of the various techniques used in these components. In fact, the following questions are addressed in the review. What kind of pre-processing technique is needed for mining FAQ from forum? What are the recent techniques for mining questions from forum threads? What approaches are currently dominating answer retrieval from forum threads? How can we cluster out FAQ from question and answer database?. <s> BIB005
|
Forum content generation is at times done with some laxity. Members initiating or replying to a post tend to use an informal tone/language that is closer to their oral habits. This informal tone is often described in the literature as unstructured casual language BIB003 . The useful information is concealed inside a majority of trivial, heterogeneous, and sometimes irrelevant, text data of varying quality. This attitude usually makes forum content highly noisy BIB002 BIB004 . The noise content of forums can be said to come from two sources, which appear to be in line with the sources identified by BIB001 for text in general: 1) noise can occur during the conversion process, when a textual representation of information is produced from some other form. For example, web pages, printed/handwritten documents, camera-captured images and spontaneous speech are all intended for human use, and their conversion into other forms may result in noisy text. 2) Noise can also be introduced when text is generated in digital form, especially in informal settings such as SMS (Short Messaging Service or texting), online chat, emails, web pages and message boards, where the text produced is inherently noisy. This type of text contains spelling errors, special characters, grammar mistakes, non-standard word forms, usage of multilingual words and so on BIB001 . In forums, text normalization activities have concentrated on the second noise source. A categorization of forum noise, as given in BIB005 , is shown in Table 2 .
|
A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Casual Language Resolution Approaches <s> Research aimed at correcting words in text has focused on three progressively more difficult problems:(1) nonword error detection; (2) isolated-word error correction; and (3) context-dependent work correction. In response to the first problem, efficient pattern-matching and n -gram analysis techniques have been developed for detecting strings that do not appear in a given word list. In response to the second problem, a variety of general and application-specific spelling correction techniques have been developed. Some of them were based on detailed studies of spelling error patterns. In response to the third problem, a few experiments using natural-language-processing tools or statistical-language models have been carried out. This article surveys documented findings on spelling error patterns, provides descriptions of various nonword detection and isolated-word error correction techniques, reviews the state of the art of context-dependent word correction techniques, and discusses research issues related to all three areas of automatic error correction in text. <s> BIB001 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Casual Language Resolution Approaches <s> Web communities are web virtual broadcasting spaces where people can freely discuss anything. While such communities function as discussion boards, they have even greater value as large repositories of archived information. In order to unlock the value of this resource, we need an effective means for searching archived discussion threads. Unfortunately the techniques that have proven successful for searching document collections and the Web are not ideally suited to the task of searching archived community discussions. In this paper, we explore the problem of creating an effective ranking function to predict the most relevant messages to queries in community search. We extract a set of predictive features from the thread trees of newsgroup messages as well as features of message authors and lexical distribution within a message thread. Our final results indicate that when using linear regression with this feature set, our search system achieved a 28.5% performance improvement compared to our baseline system. <s> BIB002 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Casual Language Resolution Approaches <s> We present the first English syllabification system to improve the accuracy of letter-tophoneme conversion. We propose a novel discriminative approach to automatic syllabification based on structured SVMs. In comparison with a state-of-the-art syllabification system, we reduce the syllabification word error rate for English by 33%. Our approach also performs well on other languages, comparing favorably with published results on German and Dutch. <s> BIB003 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Casual Language Resolution Approaches <s> Letter-to-phoneme conversion plays an important role in several applications. It can be a difficult task because the mapping from letters to phonemes can be many-to-many. We present a language independent letter-to-phoneme conversion approach which is based on the popular phrase based Statistical Machine Translation techniques. 
The results of our experiments clearly demonstrate that such techniques can be used effectively for letter-to-phoneme conversion. Our results show an overall improvement of 5.8% over the baseline and are comparable to the state of the art. We also propose a measure to estimate the difficulty level of L2P task for a language. <s> BIB004 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Casual Language Resolution Approaches <s> Correct stress placement is important in text-to-speech systems, in terms of both the overall accuracy and the naturalness of pronunciation. In this paper, we formulate stress assignment as a sequence prediction problem. We represent words as sequences of substrings, and use the substrings as features in a Support Vector Machine (SVM) ranker, which is trained to rank possible stress patterns. The ranking approach facilitates inclusion of arbitrary features over both the input sequence and output stress pattern. Our system advances the current state-of-the-art, predicting primary stress in English, German, and Dutch with up to 98% word accuracy on phonemes, and 96% on letters. The system is also highly accurate in predicting secondary stress. Finally, when applied in tandem with an L2P system, it substantially reduces the word error rate when predicting both phonemes and stress. <s> BIB005 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Casual Language Resolution Approaches <s> Abstract The rapid expansion in user-generated content on the Web of the 2000s, characterized by social media, has led to Web content featuring somewhat less standardized language than the Web of the 1990s. User creativity and individuality of language creates problems on two levels. The first is that social media text is often unsuitable as data for Natural Language Processing tasks such as Machine Translation, Information Retrieval and Opinion Mining, due to the irregularity of the language featured. The second is that non-native speakers of English, older Internet users and non-members of the “in-group” often find such texts difficult to understand. This paper discusses problems involved in automatically normalizing social media English, various applications for its use, and our progress thus far in a rule-based approach to the issue. Particularly, we evaluate the performance of two leading open source spell checkers on data taken from the microblogging service Twitter, and measure the extent to which their accuracy is improved by pre-processing with our system. We also present our database rules and classification system, results of evaluation experiments, and plans for expansion of the project. <s> BIB006 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Casual Language Resolution Approaches <s> The use of computer mediated communication has resulted in a new form of written text--Microtext--which is very different from well-written text. Tweets and SMS messages, which have limited length and may contain misspellings, slang, or abbreviations, are two typical examples of microtext. Microtext poses new challenges to standard natural language processing tools which are usually designed for well-written text. The objective of this work is to normalize microtext, in order to produce text that could be suitable for further treatment. 
::: ::: We propose a normalization approach based on the source channel model, which incorporates four factors, namely an orthographic factor, a phonetic factor, a contextual factor and acronym expansion. Experiments show that our approach can normalize Twitter messages reasonably well, and it outperforms existing algorithms on a public SMS data set. <s> BIB007
|
A number of methods from different research areas have emerged for identifying and correcting words in text. A good survey by BIB001 describes in detail various methods for correcting spelling mistakes. A common measure for rectifying spelling errors is the edit distance, or Levenshtein distance. For any two character strings t1 and t2, the edit distance between them is the minimum number of edit operations needed to transform t1 into t2. The permitted edit operations are: (i) insertion of a character into a string; (ii) deletion of a character from a string; and (iii) replacement of a character of a string by another character. For example, the edit distance between dog and rat is 3. The edit distance model is at times augmented by a Language Model (LM) built from a corpus of Web queries. This is based on the notion of distributional similarity BIB002 between two terms, which is high between a frequently occurring misspelling and its correction, and low between two irrelevant terms that merely have similar spellings. Open source dictionaries such as Aspell or Hunspell can also be used to fix some of the spelling mistakes found in forum corpora. Empirical results in BIB006 confirm the effectiveness of these open source dictionaries in correcting words in text. However, dictionaries can only correct spelling mistakes, with some also able to fix phonetic errors. Noise is often modelled depending on the application. Four noise channels, namely the grapheme channel, the phoneme channel, the context channel and the acronym channel, are proposed by BIB007 to fix the four noise classes described in Table 2 . The noise channels are described in the following four paragraphs.

The grapheme channel is responsible for spelling distortion. One way of modelling this channel is to consider it as being directly proportional to the similarity between a corrupted token and its normalization: the more similar a normalization candidate is to the corrupted token, the more likely it is the correct substitution for it.

The phoneme channel is responsible for distortion in pronunciations. It is similar to the grapheme channel in that the probability of a correct string being transformed into an incorrect string is proportional to the similarity between the two terms; the difference is that the similarity in this case is measured on the phonetic representations instead of the orthographic forms. A major step in the phoneme channel is Letter-to-Phoneme (L2P) conversion, which estimates the pronunciation of a term represented as a sequence of letters. A lot of research is going on in this area of letter-to-phoneme conversion; some notable works are BIB004 BIB005 BIB003 . After the L2P conversion, the similarity measure between two phoneme sequences is the same as the similarity measure implemented in the grapheme channel, the only difference being that a uniform-weight Levenshtein distance is considered instead of a weighted Levenshtein distance.

The context channel handles context-based correction. A context-based correction procedure would not only handle the problem of real-word errors, i.e., errors that result in another valid word, like form instead of from, but would also be good at correcting those non-word errors that have more than one possible correction. A good example of such a case is the string ehre. Without context there is little reasoning one could apply; some possible options to be considered as the intended correction are, among others, here, ere, ether, where and there. Developing context-based correction procedures has become a notable challenge for automatic word recognition and error correction in text BIB001 . The correct normalization using context is often determined by considering n-gram probabilities. The n-gram language model is normally trained on a large Web corpus to return a probability score for a query word or phrase.

The acronym channel deals with acronyms. The three channel models considered so far handle word-to-word normalization, but there exist a number of acronyms, such as "fyi" (for your information), "asap" (as soon as possible) and "lol" (laugh out loud), that are commonly used and involve word-to-phrase mappings. The acronym channel can therefore be considered as a model of one-to-many mapping.
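As a concrete reference for the edit distance measure used throughout these channels, here is a standard dynamic-programming implementation of the (uniform-weight) Levenshtein distance; it is a textbook sketch rather than the exact formulation used in any of the surveyed systems.

```python
def levenshtein(t1, t2):
    """Minimum number of insertions, deletions and replacements
    needed to transform string t1 into string t2."""
    m, n = len(t1), len(t2)
    # dist[i][j] = edit distance between t1[:i] and t2[:j]
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i                      # delete all of t1[:i]
    for j in range(n + 1):
        dist[0][j] = j                      # insert all of t2[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if t1[i - 1] == t2[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + cost) # replace / match
    return dist[m][n]

print(levenshtein("dog", "rat"))    # 3, as in the example above
print(levenshtein("ehre", "here"))  # 2
```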
|
A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> 4.0 TOPIC DRIFT <s> Web communities are web virtual broadcasting spaces where people can freely discuss anything. While such communities function as discussion boards, they have even greater value as large repositories of archived information. In order to unlock the value of this resource, we need an effective means for searching archived discussion threads. Unfortunately the techniques that have proven successful for searching document collections and the Web are not ideally suited to the task of searching archived community discussions. In this paper, we explore the problem of creating an effective ranking function to predict the most relevant messages to queries in community search. We extract a set of predictive features from the thread trees of newsgroup messages as well as features of message authors and lexical distribution within a message thread. Our final results indicate that when using linear regression with this feature set, our search system achieved a 28.5% performance improvement compared to our baseline system. <s> BIB001 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> 4.0 TOPIC DRIFT <s> Online communities are valuable information sources where knowledge is accumulated by interactions between people. Search services provided by online community sites such as forums are often, however, quite poor. To address this, we investigate retrieval techniques that exploit the hierarchical thread structures in community sites. Since these structures are sometimes not explicit or accurately annotated, we use structure discovery techniques. We then make use of thread structures in retrieval experiments. Our results show that using thread structures that have been accurately annotated can lead to significant improvements in retrieval performance compared to strong baselines. <s> BIB002 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> 4.0 TOPIC DRIFT <s> We propose a method for annotating post-to-post discourse structure in online user forum data, in the hopes of improving troubleshooting-oriented information access. We introduce the tasks of: (1) post classification, based on a novel dialogue act tag set; and (2) link classification. We also introduce three feature sets (structural features, post context features and semantic features) and experiment with three discriminative learners (maximum entropy, SVM-HMM and CRF). We achieve above-baseline results for both dialogue act and link classification, with interesting divergences in which feature sets perform well over the two sub-tasks, and go on to perform preliminary investigation of the interaction between post tagging and linking. <s> BIB003
|
Threads in Internet forums are composed by many authors. As a result, they are less coherent and more susceptible to sudden jumps in topic. The existence of several topics in a thread is very common in popular discussions. Even if a unique topic is discussed in a thread, different features and aspects of it may be considered in the discussion. There is a need to uncover the content structure of threads so as to establish the post-to-post discourse structure; specifically, it would be better to establish which earlier post(s) a given post responds to. It has rightly been pointed out by BIB001 BIB002 that post-to-post discourse structure will enhance information retrieval. A good illustration of this problem is contained in BIB003 . Topic drift is mostly found in threads that contain many posts, say 6 and above.
|
A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Topic Drift Resolution Strategies <s> Message hierarchies in web discussion boards grow with new postings. Threads of messages evolve as new postings focus within or diverge from the original themes of the threads. Thus, just by investigating the subject headings or contents of earlier postings in a message thread, one may not be able to guess the contents of the later postings. The resulting navigation problem is further compounded for blind users who need the help of a screen reader program that can provide only a linear representation of the content. We see that, in order to overcome the navigation obstacle for blind as well as sighted users, it is essential to develop techniques that help identify how the content of a discussion board grows through generalizations and specializations of topics. This knowledge can be used in segmenting the content in coherent units and guiding the users through segments relevant to their navigational goals. Our experimental results showed that the segmentation algorithm described in this paper provides up to 80-85% success rate in labeling messages. The algorithm is being deployed in a software system to reduce the navigational load of blind students in accessing web-based electronic course materials; however, we note that the techniques are equally applicable for developing web indexing and summarization tools for users with sight. <s> BIB001 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Topic Drift Resolution Strategies <s> Text message stream is a newly emerging type of Web data which is produced in enormous quantities with the popularity of Instant Messaging and Internet Relay Chat. It is beneficial for detecting the threads contained in the text stream for various applications, including information retrieval, expert recognition and even crime prevention. Despite its importance, not much research has been conducted so far on this problem due to the characteristics of the data in which the messages are usually very short and incomplete. In this paper, we present a stringent definition of the thread detection task and our preliminary solution to it. We propose three variations of a single-pass clustering algorithm for exploiting the temporal information in the streams. An algorithm based on linguistic features is also put forward to exploit the discourse structure information. We conducted several experiments to compare our approaches with some existing algorithms on a real dataset. The results show that all three variations of the single-pass algorithm outperform the basic single-pass algorithm. Our proposed algorithm based on linguistic features improves the performance relatively by 69.5% and 9.7% when compared with the basic single-pass algorithm and the best variation algorithm in terms of F1 respectively. <s> BIB002 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Topic Drift Resolution Strategies <s> This paper presents a novel approach for extracting high-quality 〈thread-title, reply〉 pairs as chat knowledge from online discussion forums so as to efficiently support the construction of a chatbot for a certain domain. Given a forum, the high-quality 〈thread-title, reply〉 pairs are extracted using a cascaded framework. 
First, the replies logically relevant to the thread title of the root message are extracted with an SVM classifier from all the replies, based on correlations such as structure and content. Then, the extracted 〈thread-title, reply〉 pairs are ranked with a ranking SVM based on their content qualities. Finally, the Top-N 〈thread-title, reply〉 pairs are selected as chatbot knowledge. Results from experiments conducted within a movie forum show the proposed approach is effective. <s> BIB003 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Topic Drift Resolution Strategies <s> This paper presents a topical text segmentation method based on intended boundaries detection and compares it to a well known default boundaries detection method, c99. We compared the two methods by running them on two different corpora of French texts and results are evaluated by two different methods: one using a modified classic measure, the FScore, the other based on a manual evaluation one the Internet. Our results showed that algorithms that are close when automatically evaluated can be quite far when manually evaluated. <s> BIB004 </s> A Survey Of Challenges And Resolutions Of Mining Question-Answer Pairs From Internet Forum <s> Topic Drift Resolution Strategies <s> We propose a method for annotating post-to-post discourse structure in online user forum data, in the hopes of improving troubleshooting-oriented information access. We introduce the tasks of: (1) post classification, based on a novel dialogue act tag set; and (2) link classification. We also introduce three feature sets (structural features, post context features and semantic features) and experiment with three discriminative learners (maximum entropy, SVM-HMM and CRF). We achieve above-baseline results for both dialogue act and link classification, with interesting divergences in which feature sets perform well over the two sub-tasks, and go on to perform preliminary investigation of the interaction between post tagging and linking. <s> BIB005
|
The usage of term frequency-inverse document frequency (TF-IDF) weighting and text similarity methods is a very common approach for extracting the topic of discussion BIB002 . Quotation within a post is often used to establish context coherence: it indicates the relevance between a reply and the root message if the root message is quoted. Drift resolution is implemented in BIB003 using two quotation features: a reply quoting the root message and a reply quoting other replies. A reply quoting the root message indicates that the reply is relevant to the message. In contrast, a reply quoting other replies may not be relevant to the root message and can hence be considered as topic drift. A blended quoting technique that utilizes some special features offered by the structure of web forums is proposed by BIB001 to cluster the posts of a discussion with the same topic. In their work, an algorithm that uses temporal information, such as the time and date of posts, the post authors, etc., is implemented to create posting chains, using a topic similarity algorithm augmented with the quoting system. An interesting method to track topic drift in a discussion is proposed by BIB004 : they use lexical similarity and thematic distance to identify topic boundaries in a discussion and fragment it into topic-related clusters. An algorithm proposed in , which isolates parts of a discussion in order to extract the topics using just these parts and not the entire thread, is a good approach to tackling the problem of topic drift in forums. Utilization of term weights and domain technical words will probably enhance performance. Some other popular approaches are the use of dialogue act tagging (DAT) and discourse disentanglement. Dialogue act tagging helps in capturing the purpose of a given utterance in relation to an encompassing discourse, while discourse disentanglement is implemented to automatically identify coherent sub-discourses in a single thread. The two concepts are implemented in BIB005 to establish post-to-post relationships; three categories of features, namely structural features, post context features and semantic features, were considered in that work. The use of topic modelling, such as Latent Dirichlet Allocation, may be necessary for long threads that contain tens of posts.
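To illustrate the similarity-based boundary detection idea, the sketch below flags a potential topic shift whenever the TF-IDF cosine similarity between a post and all posts before it drops below a threshold; the threshold of 0.1 and the comparison scheme are illustrative assumptions, not values from the cited works. It uses scikit-learn's TfidfVectorizer.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def drift_boundaries(posts, threshold=0.1):
    """Indices of posts whose TF-IDF similarity to every preceding post
    falls below the threshold, signalling a possible topic shift."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(posts)
    boundaries = []
    for i in range(1, len(posts)):
        # maximum similarity between post i and any earlier post
        sim = cosine_similarity(vectors[i], vectors[:i]).max()
        if sim < threshold:
            boundaries.append(i)
    return boundaries

thread = [
    "Which laptop is best for video editing?",
    "Anything with a dedicated GPU and 32GB RAM works for editing.",
    "By the way, has anyone tried the new pizza place downtown?",
]
print(drift_boundaries(thread))  # likely [2]: the pizza post drifts off-topic
```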
|
A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Introduction <s> Cognitive radio is being intensively researched as the enabling technology for secondary access to the so-called TV White Spaces (TVWS), large portions of spectrum in theUHF/VHF bands which become available on a geographical basis after digital switchover. Both in the US, and more recently, in the UK the regulators have given conditional endorsement to this new mode of access. This paper reviews the state-of-the-art in technology, regulation and standardization of cognitive access to TVWS. It examines the spectrum opportunity and commercial use cases associated with this form of secondary access. <s> BIB001 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Introduction <s> 700MHz band attracted many researchers and stakeholders for mobile communications by providing a rare opportunity to have cost effective wireless solutions due to its excellent propagation characteristics compared to GSM 1800 MHz, 2.1 GHz or 2.5 GHz bands for 3G/BWA. In India, 698–806 MHz more specific 700MHz band mainly used by TV broadcast services. We discuss the scope and nature of opportunities for white space created by Digital Dividend (700 MHz band) in India especially to rural India by providing wireless broadband for the applications like e-education, e-agriculture, e-animal husbandry and e-health which would help in decreasing primary school drop-out rate, in decreasing farmer suicides rate and in decreasing mortality rate. Further use cases for the exploitation of TV White Space suitable for rural India are discussed based on user's and BS geo-location and user's mobility; which is followed by an overview of recent regulatory activities. <s> BIB002 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Introduction <s> TV White Spaces constitutes the major portion of the VHF and UHF TV and which is geographically unused after digital switchover. The most important regulatory trend in the context of Dynamic Spectrum Access (DSA) is the Cognitive access of TV white Spaces. Through spectrum measurement campaign we have estimated the spectrum utilization of TV band in Pune, India. We have designed the measurement set up and methodology for the measurement campaign. Our spectrum occupancy analysis provides the realistic view on the spectrum opportunities in India for (i) spectrum refarming of TV band; (ii) Cognitive Radio operation in TV band. Also we have stressed on the need of quantitative analysis of TVWSs availability and compatibility studies for protection of incumbent services for CR access of TVWSs in India. Also this paper reviews the state-of-the-art in standardization of cognitive access to TVWS. <s> BIB003
|
With the rapid development of technology, the need for access to wireless Internet has become a daily necessity. This has created severe congestion in the frequency spectrum, especially in urban areas where the number of users is consistently high. This exponential increase in broadband traffic has underscored the need for a more efficient and opportunistic use of the available spectrum. Researchers have highlighted the underutilization of licensed portions of the spectrum as a potential opportunity for addressing the spectrum congestion problem. The use of already licensed portions of the spectrum would be enabled by cognitive radios, which behave as secondary users and use the spectrum whenever the primary users, i.e., the license owners, are not using it. A cognitive radio (CR) is a radio that can change its transmission parameters based on interaction with the environment in which it operates . The use of such radios has been approved by both the US and UK regulatory bodies, in 2009 and 2012 respectively BIB001 . The move was motivated by the digital transition in TV broadcasting, which made large swathes of TV spectrum accessible for opportunistic use. This portion of the spectrum is referred to as TV White Space (TVWS), and its capacity is quite high: according to Ofcom research, there is more than 150 MHz of interleaved spectrum in over 50% of locations in the UK and 100 MHz of interleaved spectrum in 90% of locations . However, the availability of TVWS spectrum varies from country to country and depends largely on the channels chosen for TV broadcasting. Most available (unused or vacant) channels can be found in less densely populated areas, such as in developing countries or rural areas BIB002 BIB003 . The frequency bands corresponding to TVWS spectrum are VHF 30-300 MHz and UHF 300-1000 MHz, except for the channels reserved for emergency transmissions. In Europe, a challenging aspect of TVWS use is that the TV spectrum is occupied not only by fixed TV broadcasting signals but also by licensed Programme Making and Special Events (PMSE) devices, e.g., wireless microphones used in small events, concerts or by security agencies. PMSE devices can operate on a licensed or unlicensed basis. The detection of such equipment is the subject of the research project [6] . Furthermore, their protection should be guaranteed based on legislative regulations [7] .
|
A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Wireless Communications and Mobile Computing <s> Wireless indoor positioning systems have become very popular in recent years. These systems have been successfully used in many applications such as asset tracking and inventory management. This paper provides an overview of the existing wireless indoor positioning solutions and attempts to classify different techniques and systems. Three typical location estimation schemes of triangulation, scene analysis, and proximity are analyzed. We also discuss location fingerprinting in detail since it is used in most current system or solutions. We then examine a set of properties by which location systems are evaluated, and apply this evaluation method to survey a number of existing systems. Comprehensive performance comparisons including accuracy, precision, complexity, scalability, robustness, and cost are presented. <s> BIB001 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Wireless Communications and Mobile Computing <s> Cognitive radio is being intensively researched as the enabling technology for license-exempt access to the so-called TV White Spaces (TVWS), large portions of spectrum in the UHF/VHF bands which become available on a geographical basis after digital switchover. Both in the US, and more recently, in the UK the regulators have given conditional endorsement to this new mode of access. This paper reviews the state-of-the-art in technology, regulation, and standardisation of cognitive access to TVWS. It examines the spectrum opportunity and commercial use cases associated with this form of secondary access. <s> BIB002 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Wireless Communications and Mobile Computing <s> The spectrum sensing problem has gained new aspects with cognitive radio and opportunistic spectrum access concepts. It is one of the most challenging issues in cognitive radio systems. In this paper, a survey of spectrum sensing methodologies for cognitive radio is presented. Various aspects of spectrum sensing problem are studied from a cognitive radio perspective and multi-dimensional spectrum sensing concept is introduced. Challenges associated with spectrum sensing are given and enabling spectrum sensing methods are reviewed. The paper explains the cooperative sensing concept and its various forms. External sensing algorithms and other alternative sensing methods are discussed. Furthermore, statistical modeling of network traffic and utilization of these models for prediction of primary user behavior is studied. Finally, sensing features of some current wireless standards are given. <s> BIB003 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Wireless Communications and Mobile Computing <s> The FCC recently issued the regulatory rules for cognitive radio use of the TV white space spectrum. These new rules provide an opportunity but they also introduce a number of technical challenges. The challenges require development of cognitive radio technologies like spectrum sensing as well as new wireless PHY and MAC layer designs. These challenges include spectrum sensing of both TV signals and wireless microphone signals, frequency agile operation, geo-location, stringent spectral mask requirements, and of course the ability to provide reliable service in unlicensed and dynamically changing spectrum. 
After describing these various challenges, we will describe some of the possible methods for meeting them. <s> BIB004 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Wireless Communications and Mobile Computing <s> We investigate the efficiency of dynamic frequency selection (DFS) in mitigating interference among neighboring low-power cognitive wireless portable networks operating in the TV white space. We derive an interference model to predict the range and level of interference generated in the TV bands by portable low-height antenna cognitive wireless access points in suburban and urban areas. Based on the aforementioned model, we provide an analysis of the spectral availability (SA) for the scenarios where DFS coexistence is employed or not. The steps of our analysis are introduced in a tutorial fashion, and a coexistence case study of TVWS-enabled low-power cognitive wireless portable APs in Japan is presented. Our analysis demonstrates the intrinsic relationship SA holds with the TVWS channel set as well as with statistical information (e.g., household density of wards and cities, Internet penetration, and white space radio AP market penetration). <s> BIB005 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Wireless Communications and Mobile Computing <s> Radio spectrum is a necessary carrier for nourishing economic activities through the provision of wireless services. The radio spectrum suitable for the propagation of wireless signals is a limited resource and hence requires optimal allocation as collectively dictated by regulatory, technical and market domains. The current global move to switch from analogue to digital TV has opened up an opportunity for the reallocation of this valuable resource. In one way, spectrum bands once used for analogue TV broadcasting will be completely cleared, leaving a space for deploying new licensed wireless services; in another way, digital television technology geographically interleaves spectrum bands to avoid interference between neighboring stations, leaving a space for deploying new unlicensed wireless services. The focus of the paper is to assess the availability of geographically interleaved spectrum, also known as television spectrum white spaces (TVWS), and to propose wireless network scenarios for rural broadband connectivity. <s> BIB006 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Wireless Communications and Mobile Computing <s> TV White Spaces technology is a means of allowing wireless devices to opportunistically use locally-available TV channels (TV White Spaces), enabled by a geolocation database. The geolocation database informs the device of which channels can be used at a given location, and in the UK/EU case, which transmission powers (EIRPs) can be used on each channel based on the technical characteristics of the device, given an assumed interference limit and protection margin at the edge of the primary service coverage area(s). The UK regulator, Ofcom, has initiated a large-scale Pilot of TV White Spaces technology and devices. The ICT-ACROPOLIS Network of Excellence, teaming up with the ICT-SOLDER project and others, is running an extensive series of trials under this effort.
The purpose of these trials is to test a number of aspects of white space technology, including the white space device and geolocation database interactions, the validity of the channel availability/power calculations by the database and associated interference effects on primary services, and the performance of the white space devices, among others. An additional key purpose is to undertake a number of research investigations, such as into the aggregation of TV White Space resources with conventional (licensed/unlicensed) resources, secondary coexistence issues and means to mitigate such issues, and primary coexistence issues under challenging deployment geometries, among others. This paper describes our trials, their intentions and characteristics, objectives, and some early observations. <s> BIB007
|
While it is not expected that TVWS-based broadband access will completely substitute WiFi technology, such bands may be used to augment spectrum resources when needed BIB005 . The TVWS are convenient for two main reasons: their superior propagation characteristics, which enable larger coverage, and the minimal infrastructure requirements, which make them ideal for rural and undeveloped areas that are difficult to reach or connect through optical fiber. This is especially convenient for developing countries such as those in the Western Balkans, where broadband penetration rates are increasing rapidly, as a comparison of ITU reports on the state of broadband shows . In Albania alone, the number of active mobile broadband subscribers has shot up from 8.8 to 52.6 per 100 inhabitants in the last five years. Furthermore, providing fiber optic connections may not be cost-efficient for service providers due to the high cost, so access through wireless broadband networks over TVWS could be preferred BIB006 . That being said, the successful implementation of this technology largely depends on the ability to effectively manage and avoid the possible interference caused to primary users. To enable this, cognitive radios have to continuously sense the channel to detect primary user transmissions and ensure that primary users are protected at all times. In case a secondary user is using the spectrum and a primary user starts operating, the secondary user has to immediately vacate the channel in order to avoid causing interference to the primary user. To ensure this, the UK regulator Ofcom and the Federal Communications Commission (FCC) in the United States have proposed three methods to be used by secondary users: (i) beacons, (ii) sensing and (iii) geolocation with a database. When beacons are used as a controlling method, secondary users will only start transmitting if they have already received a beacon signal indicating a vacant channel. The drawback of this method is that an infrastructure of beacons has to be implemented and maintained BIB002 . With sensing, the secondary users sense the spectrum and try to detect the presence of primary users based on the amount of energy received. Secondary users may operate when they do not detect any primary signals. However, in the case of cognitive devices, this is not a straightforward task, as it involves detecting other signal characteristics such as modulation and bandwidth, thus increasing device complexity and cost BIB003 . The third technique uses geolocation and databases. Secondary users have to send a query to a database that contains information regarding the spectrum usage in the vicinity during a specific time period. The database responds with the list of available frequencies, including all transmission parameters that need to be followed for secondary transmission to start; a minimal sketch of such a query is given below. This implies that secondary users must have geolocation capability, while the database must be kept updated at all times, which incurs additional overhead. An additional challenge in using geolocation and database access arises when secondary users are indoors, where GPS connectivity may not be available due to signal disruption from buildings, walls, etc. BIB004 . Although GPS is one of the most widely used localization techniques, alternative techniques for outdoor and indoor localization using cellular network and wireless local area network signals are also possible BIB001 .
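As a concrete illustration of the geolocation database method, the sketch below shows how a Mode II device might query a database for available channels. Real deployments use the IETF PAWS protocol (RFC 7545); the endpoint URL and JSON field names here are hypothetical, chosen only to mirror the request/response exchange described above.

```python
# Hypothetical TVWS geolocation-database query (illustrative sketch only).
import json
import urllib.request

DB_URL = "https://tvws-db.example.org/query"  # placeholder, not a real service

def query_available_channels(lat, lon, device_id):
    """Ask the database which channels (and max EIRP) this device may use."""
    body = json.dumps({
        "deviceDesc": {"serialNumber": device_id},
        "location": {"latitude": lat, "longitude": lon},
    }).encode("utf-8")
    req = urllib.request.Request(DB_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    # Assumed reply shape: {"channels": [{"channel": 21, "max_eirp_dbm": 20.0}, ...]}
    return [(ch["channel"], ch["max_eirp_dbm"]) for ch in reply["channels"]]

# Usage (sketch): channels = query_available_channels(41.33, 19.82, "dev-001")
# The device must re-query periodically and whenever it moves, and vacate any
# channel that the database no longer returns.
```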
Techniques involving both spectrum sensing and information coming from geolocation databases have also been proposed and tested . Ofcom has performed a series of trials, as part of the TVWS pilot project, to test a number of aspects of white space technology, including the white space device and geolocation database interactions, the validity of the channel availability/power calculations by the database, and the associated interference effects on primary services BIB007 . Following the decision by the US and UK to allow opportunistic use of TVWS, several standards were developed to facilitate its practical implementation. The first international standard to be developed for TVWS cognitive devices was ECMA-392, introduced in 2009. With the introduction of the idea of WiFi communications in TVWS, a task group was formed in the same year to develop the new IEEE 802.11af standard, which was approved in February 2014. In July 2010, the IEEE 802.16h standard was published for WiMAX. Following this, in July 2011, IEEE 802.22, a new standard for cognitive radios to be used in rural areas that enables spectrum sharing, was introduced . The 802.22 wireless access technology is envisioned for rural communications because its coverage is large, up to 100 km, and there is no need for fixed spectrum, which makes it very profitable for operators; a back-of-the-envelope path-loss comparison illustrating the coverage advantage of the TV bands is given below. Because cognitive radios might be used for different purposes and may operate with different technologies, coexistence and self-coexistence problems arise. We use the term coexistence to describe the situation that arises when primary users and cognitive radio devices (secondary users) exist and operate at the same time and location, whereas self-coexistence describes the cohabitation, in time and space, on the same frequency, of several cognitive radio users or networks, which can be of the same or different types. Challenges surface because the different networks tend to selfishly occupy the spectrum to satisfy their own needs without any regard for other networks cohabiting in the same spectrum, and the problem is further exacerbated when the various systems using the same spectrum have different operating parameters (transmit power, bandwidth, MAC/PHY layer, etc.).
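The coverage advantage of the TV bands can be sanity-checked with a free-space path-loss calculation. The comparison below between a UHF TV frequency and the 2.4 GHz ISM band is a back-of-the-envelope illustration, not taken from the surveyed papers; the frequencies and the 10 km distance are arbitrary choices.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB for distance in km and frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

for f_mhz in (600, 2400):  # UHF TV channel vs. 2.4 GHz ISM band
    print(f"{f_mhz} MHz over 10 km: {fspl_db(10, f_mhz):.1f} dB")

# Output: ~108.0 dB at 600 MHz vs. ~120.0 dB at 2.4 GHz. The ~12 dB advantage
# (20*log10(2400/600)) corresponds to a 4x larger free-space range for the same
# transmit power and receiver sensitivity, before terrain and building
# penetration effects, which also favor the lower band.
```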
|
A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> The Cognitive Radio (CR) Network <s> Seventy-five percent of India's population is in rural villages, yet almost 90 percent of the country's phones are in urban sites. The authors propose a fixed cellular radio system, combined with the existing mobile network, as a cost-effective way to extend telecommunications services to India's rural areas. <s> BIB001 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> The Cognitive Radio (CR) Network <s> The wireless LAN technology known as WiFi (or wireless fidelity), which is the standard developed by the IEEE 802 Committee, is now introduced by internet service providers or by network operators in the hectic metropolitan areas of developed countries at so-called hotspots such as airports, hotels, cafes, railway stations, etc., and facilitates easy, low-cost, high-speed internet connections for PC, PDA and mobile IP phone users. Implementation of these technologies for various applications, including e-health and tele-education, for rural telecommunication development in Japan and by the ITU will be described in this paper. The global survey and analysis on the telecommunications environment and the needs of rural communities of the developing countries conducted by the ITU will be briefly introduced. The future perspective for the development of rural communications, including the applications of e-health care, will also be discussed. <s> BIB002 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> The Cognitive Radio (CR) Network <s> DakNet provides extraordinarily low-cost digital communication, letting remote villages leapfrog past the expense of traditional connectivity solutions and begin development of a full-coverage broadband wireless infrastructure. What is the basis for a progressive, market-driven migration from e-governance to universal broadband connectivity that local users will pay for? DakNet, an ad hoc network that uses wireless technology to provide asynchronous digital connectivity, is evidence that the marriage of wireless and asynchronous service may indeed be the beginning of a road to universal broadband connectivity. DakNet has been successfully deployed in remote parts of both India and Cambodia at a cost two orders of magnitude less than that of traditional landline solutions. <s> BIB003 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> The Cognitive Radio (CR) Network <s> Employing wireless technologies to provide connectivity for rural areas is an active topic in the academic and industrial communities. In this article we begin by discussing the challenges of rural communications and reviewing existing wireless technologies that have been proposed or implemented for this market. We then focus on an emerging technology, cognitive radio, that promises to be a viable solution for rural communications. The most notable candidate for rural cognitive radio technology is the IEEE 802.22 standard that is currently being developed and is based on time division duplexing, orthogonal frequency division multiple access, and opportunistic use of the VHF/UHF TV bands.
We address two important issues that can affect the success of IEEE 802.22 technology in rural deployments, namely: 1) to provide suitable service models, and 2) to overcome the problem of long TDD turnaround time in large rural cells. For the first issue, we introduce a service model that combines TV broadcasting and data services to facilitate service adoption. For the second issue, we propose an adaptive TDD approach that effectively eliminates the requirement for a long TDD turnaround time and thus increases the efficiency of large-coverage rural networks. <s> BIB004 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> The Cognitive Radio (CR) Network <s> Cognitive radio is being intensively researched as the enabling technology for license-exempt access to the so-called TV White Spaces (TVWS), large portions of spectrum in the UHF/VHF bands which become available on a geographical basis after digital switchover. Both in the US, and more recently, in the UK the regulators have given conditional endorsement to this new mode of access. This paper reviews the state-of-the-art in technology, regulation, and standardisation of cognitive access to TVWS. It examines the spectrum opportunity and commercial use cases associated with this form of secondary access. <s> BIB005 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> The Cognitive Radio (CR) Network <s> Cognitive radio is being intensively researched for opportunistic access to the so-called TV White Spaces (TVWS): large portions of the VHF/UHF TV bands which become available on a geographical basis after the digital switchover. Using accurate digital TV (DTV) coverage maps together with a database of DTV transmitters, we develop a methodology for identifying TVWS frequencies at any given location in the United Kingdom. We use our methodology to investigate variations in TVWS as a function of the location and transmit power of cognitive radios, and examine how constraints on adjacent channel interference imposed by regulators may affect the results. Our analysis provides a realistic view on the spectrum opportunity associated with cognitive devices, and presents the first quantitative study of the availability and frequency composition of TVWS outside the United States. <s> BIB006 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> The Cognitive Radio (CR) Network <s> As Digital Television Broadcasting spreads over the world, existing (and more) TV channels can be distributed in less spectrum than that traditionally allocated to TV broadcasting. This freed spectrum is also referred to as the "Digital Dividend" and its use has been debated around the world. In addition, there is also a debate about the potential use of the "white space" within the TV bands. This is due to the sparse frequency planning with large interference margins, which is typical in wide-area broadcasting. Various technical approaches using Opportunistic Spectrum Access (OSA) have been proposed for unlicensed "white space" access to the TV bands. Most previous studies have focused on spectrum sensing, i.e. detecting "free channels" where secondary users, utilizing White Space Devices (WSD), could avoid causing harmful interference to the TV receivers. However, interference caused by WSD is not limited to co-channel interference. In particular, in short-range scenarios, the adjacent channel interference is an equally severe problem.
Assessing the feasibility of WSDs in short-range indoor scenarios, taking more interference mechanisms into account, is the objective of this paper. An indoor home scenario with cable, rooftop antenna and set-top antenna reception of DVB-T has been analyzed. The spectrum reuse opportunities for WSDs have been determined, using as a performance measure the number of channels where it is possible to transmit without causing harmful interference to DVB-T receivers. Simulation results show that the number of available channels for indoor unlicensed white space transmission appears to be significant in most of the studied scenarios. <s> BIB007 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> The Cognitive Radio (CR) Network <s> Cognitive radio is being intensively researched as the enabling technology for secondary access to the so-called TV White Spaces (TVWS), large portions of spectrum in the UHF/VHF bands which become available on a geographical basis after digital switchover. Both in the US, and more recently, in the UK the regulators have given conditional endorsement to this new mode of access. This paper reviews the state-of-the-art in technology, regulation and standardization of cognitive access to TVWS. It examines the spectrum opportunity and commercial use cases associated with this form of secondary access. <s> BIB008 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> The Cognitive Radio (CR) Network <s> In order to improve utilization of TV spectrum, regulatory bodies around the world have been developing rules to allow operation by unlicensed users in these bands provided that interference to incumbent broadcasters is avoided. Thus, new services may opportunistically use temporarily unoccupied TV channels, known as television white space. This has motivated several standardization efforts such as IEEE 802.22, 802.11af, 802.19 TG1, and ECMA 392 to further cognitive networking. Specifically, multiple collocated secondary networks are expected to use TVWS, each with distinct requirements (bandwidth, transmission power, different system architectures, and device types) that must all comply with regulatory requirements to protect incumbents. Heterogeneous coexistence in the TVWS is thus expected to be an important research challenge. This article introduces the current regulatory scenario, emerging standards for cognitive wireless networks targeting the TVWS, and discusses possible coexistence scenarios and associated challenges. Furthermore, the article casts an eye on future considerations for these upcoming standards in support of spectrum sharing opportunities as a function of network architecture evolution. <s> BIB009 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> The Cognitive Radio (CR) Network <s> In next generation networks, voice, data, and multimedia services will be converged onto a single network platform with increasing complexity and heterogeneity of underlying wireless and optical networking systems. These services should be delivered in the most cost- and resource-efficient manner with ensured user satisfaction. To this end, service providers are now switching the focus from network Quality of Service (QoS) to user Quality of Experience (QoE), which describes the overall performance of a network from the user perspective. High network QoS can, in many cases, result in high QoE, but it cannot assure high QoE.
Optimizing end-to-end QoE must consider other contributing factors of QoE such as the application-level QoS, the capability of terminal equipment and customer premises networks, and subjective user factors. This article discusses challenges and a possible solution for optimizing end-to-end QoE in Next Generation Networks. <s> BIB010 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> The Cognitive Radio (CR) Network <s> TV white space refers to TV channels that are not used by any licensed services at a particular location and at a particular time. To exploit this unused TVWS spectrum for improved spectrum efficiency, regulatory agencies have begun developing regulations to permit the use of this TVWS by unlicensed wireless devices as long as they do not interfere with any licensed services. In the future many heterogeneous, and independently operated, wireless networks may utilize the TVWS. Coexistence between these networks is essential in order to provide a high level of QoS to end users. Consequently, the IEEE 802 LAN/MAN standards committee has approved the P802.19.1 standardization project to specify radio-technology-independent methods for coexistence among dissimilar or independently operated wireless devices and networks. In this article we provide a detailed overview of the regulatory status of TVWS in the United States and Europe, analyze the coexistence problem in TVWS, and summarize existing coexistence mechanisms to improve coexistence in TVWS. The main focus of the article is the IEEE P802.19.1 standardization project, including its requirements and system design, and the major technical challenges ahead. <s> BIB011
|
The cognitive radio network is composed of secondary devices that communicate among themselves; however, the configuration and organization of the network depend on the technology and standard applied. In general, cognitive devices for use in TVWS are divided into four groups: fixed devices, Mode I personal/portable devices, Mode II personal/portable devices and sensing-only devices, as defined by FCC specifications and standards and summarized in Table 1 (a compact code-style summary of these device classes is sketched at the end of this subsection). Fixed devices can transmit up to 4 W EIRP (Effective Isotropic Radiated Power). Due to this high transmission power level, they are not allowed to operate on channels adjacent to TV channels that are in use, and they must have database access and geolocation capability BIB005 . The TVWS database is a central database, managed by a reliable authority, that contains information on all primary users' operating characteristics, such as transmission power, allocated channels and usage patterns, location, etc. Secondary networks/users must send a query to this database to ask for the available channels at their location. It can be noted that location is usually determined based on a GPS connection, which may be available only for certain types of secondary devices. Therefore, it is most likely that fixed devices will be used in rural areas, where conditions change slowly, whereas portable devices will be more appropriate for use in metropolitan areas BIB009 . Sensing-only devices independently sense the radio spectrum in order to detect primary users and avoid harmful interference with them. Their maximum transmit power is 50 mW. They are able to sense digital TV, analog TV and wireless microphone signals at -114 dBm. Sensing is performed periodically to determine the availability of a channel and, after the channel is allocated, sensing is repeated over a longer period. Once any kind of signal is detected within the spectrum they are operating in, these devices stop transmitting within 2 s BIB011 . Cognitive radio devices (CRs) are allowed to operate in most of the channels, except those that are reserved for public safety or commercial use. Related work shows that the number of available channels in indoor settings is also significant BIB006 BIB007 . It is envisioned that CR networks will be used for the following applications BIB008 : (i) wide-area broadband provision to rural areas, (ii) future home networks and smart grids, (iii) cellular communications and (iv) public safety. As mentioned earlier, CR technology is being viewed as an effective solution for the provision of broadband services in rural areas. Based on a report published by the United Nations, more than 3 billion people live in rural areas . Moreover, in some developing countries such as China and India, around 70 percent of the population lives in rural areas. Providing communication services to communities that live in these areas is an important factor towards the betterment of their social and educational development . However, implementation issues present a big challenge considering the high cost versus the low demand. Due to this, different operators are leaning towards low-cost solutions. Compared to the cost of wired networks, wireless technologies are more cost-efficient, and several approaches have already been proposed BIB001 BIB002 BIB003 . So far, none of these initial proposals has produced feasible solutions for offering services in these areas, considering the low demand and high cost.
The implementation of CR networks emerged as an optimal solution that takes advantage of better spectrum usage while coexisting with primary users BIB004 . It is expected that CR will also find application in future smart grid systems BIB010 .
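As a compact summary of the device classes discussed in this subsection (cf. Table 1), the sketch below encodes their main constraints as data. The 4 W, 50 mW and -114 dBm figures come from the text above; the 100 mW limit for Mode I/II portable devices (40 mW on channels adjacent to occupied TV channels) is filled in from the commonly cited FCC rules and should be treated as indicative rather than authoritative.

```python
# Indicative encoding of the FCC TVWS device classes (values as noted above).
DEVICE_CLASSES = {
    "fixed":        {"max_eirp_mw": 4000, "geolocation": True,  "database": True,
                     "adjacent_channel_ok": False},
    "mode_II":      {"max_eirp_mw": 100,  "geolocation": True,  "database": True,
                     "adjacent_channel_ok": True},   # 40 mW cap on adjacent channels
    "mode_I":       {"max_eirp_mw": 100,  "geolocation": False, "database": False,
                     "adjacent_channel_ok": True},   # enabled by a fixed/Mode II device
    "sensing_only": {"max_eirp_mw": 50,   "geolocation": False, "database": False,
                     "adjacent_channel_ok": True},   # must sense down to -114 dBm
}

def may_transmit(device_class, eirp_mw, adjacent_to_tv_channel):
    """Check a requested transmission against the class limits above."""
    rules = DEVICE_CLASSES[device_class]
    if adjacent_to_tv_channel and not rules["adjacent_channel_ok"]:
        return False
    return eirp_mw <= rules["max_eirp_mw"]

print(may_transmit("fixed", 4000, adjacent_to_tv_channel=True))       # False
print(may_transmit("sensing_only", 50, adjacent_to_tv_channel=True))  # True
```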
|
A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Coexistence Challenges for CR Networks in TVWS <s> Today's wireless networks are characterized by a fixed spectrum assignment policy. However, a large portion of the assigned spectrum is used sporadically, and geographical variations in the utilization of assigned spectrum range from 15% to 85% with a high variance in time. The limited available spectrum and the inefficiency in spectrum usage necessitate a new communication paradigm to exploit the existing wireless spectrum opportunistically. This new networking paradigm is referred to as NeXt Generation (xG) Networks as well as Dynamic Spectrum Access (DSA) and cognitive radio networks. The term xG networks is used throughout the paper. The novel functionalities and current research challenges of xG networks are explained in detail. More specifically, a brief overview of cognitive radio technology is provided and the xG network architecture is introduced. Moreover, the xG network functions such as spectrum management, spectrum mobility and spectrum sharing are explained in detail. The influence of these functions on the performance of upper layer protocols such as routing and transport is investigated, and open research issues in these areas are also outlined. Finally, the cross-layer design challenges in xG networks are discussed. <s> BIB001 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Coexistence Challenges for CR Networks in TVWS <s> A Markov chain analysis for spectrum access in licensed bands for cognitive radios is presented, and forced termination probability, blocking probability and traffic throughput are derived. In addition, a channel reservation scheme for cognitive radio spectrum handoff is proposed. This scheme allows the tradeoff between forced termination and blocking according to QoS requirements. Numerical results show that the proposed scheme can greatly reduce the forced termination probability at a slight increase in blocking probability. <s> BIB002 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Coexistence Challenges for CR Networks in TVWS <s> With the explosive growth of wireless multimedia applications over the wireless Internet in recent years, the demand for radio spectral resources has increased significantly. In order to meet the quality of service, delay, and large bandwidth requirements, various techniques such as source and channel coding, distributed streaming, multicast etc. have been considered. In this paper, we propose a technique for distributed multimedia transmission over the secondary user network, which makes use of opportunistic spectrum access with the help of cognitive radios. We use digital fountain codes to distribute the multimedia content over unused spectrum and also to compensate for the loss incurred due to primary user interference. Primary user traffic is modelled as a Poisson process. We develop the techniques to select appropriate channels and study the trade-offs between link reliability, spectral efficiency and coding overhead. Simulation results are presented for the secondary spectrum access model. <s> BIB003 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Coexistence Challenges for CR Networks in TVWS <s> The aim of this thesis is to research the use of emerging TV white space communications by implementing a geo-location database system.
For that, some research and theoretical studies related to cognitive radio and TV white space communications will be done first, focusing on current activities, standardization processes, commercial approaches and related projects. Once the background and the present TV white space communications status are analyzed, a geolocation database system will be designed and developed to prove the potential of this technology. The operation of the database system will be demonstrated through a web interface. In this way, an open and publicly accessible geo-location database system implementation and structure will be created (note that even if several database system creation initiatives are taking place, most of them are private). However, due to the lack of official regulations, established standards, and actual transmission data (data from TV broadcasters, wireless microphones, etc.), only an initial TV white space database system demo will be implemented to model its operation. It will be possible to access and query this database system through a simple web interface for the Oslo area. After analyzing the results of the implementation and looking at other TV white space initiatives, some considerations for future work will be drawn. <s> BIB004 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Coexistence Challenges for CR Networks in TVWS <s> With the rapid deployment of new wireless devices and applications, the last decade has witnessed a growing demand for wireless radio spectrum. However, the fixed spectrum assignment policy becomes a bottleneck for more efficient spectrum utilization, under which a great portion of the licensed spectrum is severely under-utilized. The inefficient usage of the limited spectrum resources urges the spectrum regulatory bodies to review their policy and to seek innovative communication technology that can exploit the wireless spectrum in a more intelligent and flexible way. The concept of cognitive radio is proposed to address the issue of spectrum efficiency and has been receiving increasing attention in recent years, since it equips wireless users with the capability to optimally adapt their operating parameters according to the interactions with the surrounding radio environment. There have been many significant developments in the past few years on cognitive radios. This paper surveys recent advances in research related to cognitive radios. The fundamentals of cognitive radio technology, the architecture of a cognitive radio network and its applications are first introduced. The existing works in spectrum sensing are reviewed, and important issues in dynamic spectrum allocation and sharing are investigated in detail. <s> BIB005 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Coexistence Challenges for CR Networks in TVWS <s> In order to improve utilization of TV spectrum, regulatory bodies around the world have been developing rules to allow operation by unlicensed users in these bands provided that interference to incumbent broadcasters is avoided. Thus, new services may opportunistically use temporarily unoccupied TV channels, known as television white space. This has motivated several standardization efforts such as IEEE 802.22, 802.11af, 802.19 TG1, and ECMA 392 to further cognitive networking.
Specifically, multiple collocated secondary networks are expected to use TVWS, each with distinct requirements (bandwidth, transmission power, different system architectures, and device types) that must all comply with regulatory requirements to protect incumbents. Heterogeneous coexistence in the TVWS is thus expected to be an important research challenge. This article introduces the current regulatory scenario, emerging standards for cognitive wireless networks targeting the TVWS, and discusses possible coexistence scenarios and associated challenges. Furthermore, the article casts an eye on future considerations for these upcoming standards in support of spectrum sharing opportunities as a function of network architecture evolution. <s> BIB006 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Coexistence Challenges for CR Networks in TVWS <s> This paper concerns the analysis of adjacent channel interference of the 3GPP Long Term Evolution (LTE) E-UTRA (Evolved Universal Terrestrial Radio Access) mobile systems into Digital Video Broadcasting - Terrestrial (DVB-T) systems. The simulated performance of both systems allows us to define recommendations for minimizing the interference effects. The subjective quality of the received TV signal has been evaluated experimentally in terms of picture failure (PF). We also investigated the importance of the selection of suitable Spectral Emission Masks (SEMs) of the LTE downlink transmission. Results show that, by using the ECC 148 SEM, both the protection ratio and the minimum distance between LTE towers and DVB-T receivers can be significantly decreased. <s> BIB007 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Coexistence Challenges for CR Networks in TVWS <s> In this paper, we investigate the coexistence problem between the 802.22 and the 802.11af systems in the TV White Spaces (TVWS). We focus on the design of a co-channel coexistence scheme for the 802.22 customer-premises equipments (CPE) and the 802.11af systems. 802.22 and 802.11af are two typical standards envisioned to be widely adopted in the future. However, these two standards are heterogeneous in both power level and PHY/MAC design, making their coexistence challenging. To avoid mutual interference between the two systems, existing solutions have to allocate different channels for the two networks. Due to the city-wide coverage of the 802.22 base station (BS), the spectrum utilization is compromised with existing schemes. In this paper, we first identify the challenges to enable the co-channel coexistence of the 802.22 and the 802.11af systems and then propose a busy-tone based framework. We design a busy-tone for the 802.22 CPEs to exclude the hidden 802.11af terminals. We also show that it is possible for the 802.11af systems to identify the exposed 802.22 CPE transmitters and conduct successful transmissions under interference. We show through extensive simulations that the spectrum utilization can be increased with the proposed co-channel coexistence scheme. <s> BIB008 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Coexistence Challenges for CR Networks in TVWS <s> With the spectrum liberation obtained by the deployment of digital terrestrial television and the analog TV switch-off, new bands are being assigned to IMT LTE. 
In the first cellular deployments in the digital dividend at the 800 MHz band, problems emerged due to the interference cellular networks can cause to DTT signals. Possible solutions imply either an inefficient use of the spectrum (increasing the guard band and reducing the number of DTT channels) or a high cost (using anti-LTE filters for DTT receivers). The new spectrum allocated to mobile communications is the 700 MHz band, also known as the second digital dividend. In this new IMT band, the LTE uplink is placed in the lower part of the band. Hence, the ITU-R invited several studies to be performed and reported the results to WRC-15. In this article, we analyze the coexistence problem in the 700 MHz band and evaluate the interference of LTE signals to DTT services. Several coexistence scenarios have been considered, and laboratory tests have been performed to measure interference protection ratios. <s> BIB009 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Coexistence Challenges for CR Networks in TVWS <s> With the introduction of digital terrestrial television (DTT) and the analogue television switch-off, terrestrial broadcast spectrum in the UHF band is being released for mobile communications, in particular for fourth generation (4G) long term evolution (LTE) mobile services. This spectrum is known as digital dividend. An impending problem when deploying 4G LTE mobile networks in the digital dividend bands is that interferences may appear in the adjacent radio frequency channels used for DTT. In this paper, we analyze the adjacent coexistence of DTT and 4G LTE networks in the digital dividend bands at 700 MHz and 800 MHz. A generic framework is adopted such that results can be easily extrapolated to different scenarios and bands. Results are presented as a function of the guard band between technologies, for both LTE uplink and downlink adjacent to the DTT signals, and for fixed outdoor and portable indoor DTT reception. Also, the effect of using anti-LTE filters is studied. <s> BIB010
|
Because existing wireless networks are generally designed to work with fixed frequency allocation, coexistence challenges between wireless networks arise when switching to a cognitive radio environment. In addition, because the available spectrum changes rapidly and there are many different QoS requirements for different applications, CR networks have to handle many additional challenges: interference avoidance with primary users, optimal spectrum band selection for QoS guarantees, and seamless communications regardless of the appearance of primary users BIB004 , to name a few. To tackle these challenges, a coexistence decision mechanism (CDM) of a CR network must have these four functionalities: spectrum sensing, spectrum decision, spectrum sharing strategy and spectrum mobility, described in detail in BIB005 BIB001 BIB002 BIB003 . The cycle of cognitive radio functionalities is shown in Figure 1. To overcome the time delay introduced by performing this complete cycle, solutions such as spectrum prediction for spectrum sensing have been proposed . Coexistence issues may arise between different services sharing adjacent portions of the spectrum, such as Digital Terrestrial Television (DTT) and cellular networks operating in the TVWS, as highlighted in BIB010 . In particular, the potential interference caused by the LTE network to the DTT signal in the 700 MHz band was studied in BIB009 . Both these papers conclude that the interference caused by the LTE network can be significant, proposing the use of anti-LTE filters to improve the protection of DTT signals and a case-by-case study of coexistence issues for DTT network planning. A similar study BIB007 proposes the application of suitable spectral emission masks on the LTE downlink transmission to mitigate the problem. Coexistence between IEEE 802.22 and 802.11af is particularly challenging due to the differences in operating powers and sensitivity thresholds: the IEEE 802.22 system transmission power is 4 W with a sensitivity threshold of -97 dBm, whereas IEEE 802.11af has a transmission power of 100 mW and a sensitivity threshold of -64 dBm . The main differences between the two IEEE standards are shown in Table 2. The challenges in enabling coexistence arise mainly for two reasons: (i) the reception threshold of 802.11af is higher than that of 802.22 receivers, so an 802.11af device may fail to detect an 802.22 transmitter (the hidden terminal problem), and (ii) the transmission power of 802.22 is higher than that of 802.11af, so 802.11af operation can easily be blocked if it is in proximity to the 802.22 transmitter; in the latter case 802.11af will have very little opportunity to transmit BIB008 . The asymmetry behind the hidden terminal problem is illustrated numerically below. Therefore, to enable fair coexistence between heterogeneous wireless networks in TVWS, a coexistence mechanism must be implemented that addresses these three main challenges: spectrum sharing, interference mitigation and spectrum detection BIB006 , as further detailed in Table 3.
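The hidden terminal asymmetry in point (i) can be illustrated numerically using the 4 W EIRP and the -64 dBm / -97 dBm thresholds quoted above, together with a simple log-distance path-loss model. The path-loss exponent and the 1 m reference loss below are assumed values for illustration, not parameters from the surveyed papers.

```python
import math

def rx_power_dbm(tx_dbm, distance_m, exponent=3.0, pl_ref_db=40.0):
    """Received power under a log-distance model (assumed exponent/reference)."""
    return tx_dbm - pl_ref_db - 10 * exponent * math.log10(distance_m)

TX_80222_DBM = 36.0      # 4 W EIRP
THRESH_80211AF = -64.0   # 802.11af detection threshold (Table 2)
THRESH_80222 = -97.0     # 802.22 sensitivity threshold (Table 2)

for d_m in (50, 500, 1000, 5000):
    p = rx_power_dbm(TX_80222_DBM, d_m)
    print(f"{d_m:>5} m: {p:7.1f} dBm | 802.11af hears it: {p > THRESH_80211AF}"
          f" | above 802.22 sensitivity: {p > THRESH_80222}")

# Under these assumptions the 802.22 signal drops below the 802.11af detection
# threshold at roughly 100 m, yet stays above the 802.22 sensitivity level out
# to a few km: in that whole annulus an 802.11af device cannot hear the 802.22
# transmission it may collide with.
```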
|
A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Spectrum Availability Detection. <s> There are new system implementation challenges involved in the design of cognitive radios, which have both the ability to sense the spectral environment and the flexibility to adapt transmission parameters to maximize system capacity while coexisting with legacy wireless networks. The critical design problem is the need to process multigigahertz wide bandwidth and reliably detect presence of primary users. This places severe requirements on sensitivity, linearity and dynamic range of the circuitry in the RF front-end. To improve radio sensitivity of the sensing function through processing gain we investigated three digital signal processing techniques: matched filtering, energy detection and cyclostationary feature detection. Our analysis shows that cyclostationary feature detection has advantages due to its ability to differentiate modulated signals, interference and noise in low signal to noise ratios. In addition, to further improve the sensing reliability, the advantage of a MAC protocol that exploits cooperation among many cognitive users is investigated. <s> BIB001 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Spectrum Availability Detection. <s> In this paper, we investigate an optimization of threshold level with energy detection to improve the spectrum sensing performance. Determining threshold level to minimize spectrum sensing error both reduces collision probability with primary user and enhances usage level of vacant spectrum, resulting in improving total spectrum efficiency. However, when determining threshold level, spectrum sensing constraint should also be satisfied since it guarantees minimum required protection level of primary user and usage level of vacant spectrum. To minimize spectrum sensing error for given spectrum sensing constraint, we derive an optimal adaptive threshold level by utilizing the spectrum sensing error function and constraint which is given by inequality condition. Simulation results show that the proposed scheme provides better spectrum sensing performance compared to conventional schemes. <s> BIB002 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Spectrum Availability Detection. <s> The cognitive radio literature generally assumes that the functions required for non-cooperative secondary DSA are integrated into a single radio system. It need not be so. In this paper, we model cognitive radio functions as a value chain and explore the implications of different forms of organization of this value chain. We initially explore the consequences of separating the sensing function from other cognitive radio functions. <s> BIB003 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Spectrum Availability Detection. <s> With the increasing spectrum scarcity due to increase in the wireless devices, and limited availability of spectrum for licensed users only, the need for secondary access by unlicensed users is increasing. Cognitive radio turns out to be helping this situation because all that is needed is a technique that could efficiently detect the empty spaces and provide them to the secondary devices without causing any interference to the primary (licensed) users. Spectrum sensing is the foremost function of the cognitive radio which senses the environment for white spaces. 
Various techniques have been introduced in the spectrum sensing literature and these techniques are still under research. In this paper, we study one of the chiefly used techniques called energy detection spectrum sensing. It is known that when the signals travel in the wireless medium via various channels, they undergo several impairments caused by the different channels like additive white Gaussian noise and Rayleigh fading etc. Here, an attempt is made to assess the energy detection technique over these two wireless channels. <s> BIB004 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Spectrum Availability Detection. <s> The Internet of Things concept has revolutionized the way of using sensors and produced data. The interconnection of sensors to computing systems and to both storage infrastructures and processing facilities in a cloud fashion enables the paradigm of Sensing as a Service (S2aaS). In this paper, we propose a system architecture compliant with the S2aaS model, and detail it for a specific use case, the Spectrum Sensing as a Service (S3aaS). We illustrate the system components, including heterogeneous spectrum sensors, a distributed messaging system, a scheduler, a scalable database with a relevant SQL interface tool, and a user interface tool used to interact with the S3aaS system. Finally, we show the implementation of a proof-of-concept prototype used for assessing its effectiveness in operation. <s> BIB005
|
Spectrum availability detection, or spectrum sensing, is the process during which secondary networks, while sensing the spectrum, must identify available TV channels that can be used without causing harmful interference to primary users. Sensing can be performed in three domains: time, frequency and space. Sensing is also used to identify the types of signals that are occupying the spectrum by determining their carrier frequency, modulation type, bandwidth, etc. Spectrum sensing can be performed using several techniques: energy detection, matched filtering, cyclostationary feature-based sensing, radio-identification-based sensing and waveform-based sensing. However, due to its simple implementation, the energy detection technique is the one that is most commonly deployed. Signal detection with this technique is based on comparing the energy of the sensed signal with a defined threshold BIB004 . The detection threshold is an important parameter that needs to be optimized to minimize detection errors, and adaptive techniques for setting this threshold have been investigated in BIB002 ; a minimal energy-detector sketch is given below. The other techniques require a priori knowledge of the primary user's transmitted signal, which is not always easy to obtain, and their implementation at the receiver end is a challenge. For example, the matched filtering technique is only appropriate when the secondary user knows all the information about the primary user's transmitted signal; its computational time is very low, but on the other hand its power consumption is high BIB001 . Moreover, the spectrum detection phase does not need to be performed in isolation. Indeed, with the increasing number of interconnected sensors in the framework of the Internet of Things paradigm, some authors propose to take advantage of the readily available infrastructure to perform sensing BIB003 . A cloud-computing platform, which enables precisely this and allows the Sensing-as-a-Service concept to be used in the context of spectrum availability detection, is proposed in BIB005 . While detection of primary users is crucial to enable self-coexistence, secondary networks must also be able to detect other secondary cognitive networks that operate in the same or neighboring channels. Failing to do so will lead to a decrease in network performance due to increased interference. To overcome this issue, one approach is to enable cooperation among secondary networks so that they are able to coordinate and synchronize spectrum usage.
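Since energy detection is the most commonly deployed technique, a minimal sketch of it is given below. The threshold is set from a target false-alarm probability using the standard Gaussian (central-limit) approximation of the noise-only test statistic for complex Gaussian noise of known variance; the sample count, noise variance and signal amplitude are arbitrary illustrative values.

```python
import numpy as np
from scipy.stats import norm

def energy_detect(samples, noise_var, pfa=0.01):
    """Declare the channel busy when the average sample energy exceeds a
    threshold chosen for false-alarm probability `pfa` (Gaussian approx.)."""
    n = len(samples)
    test_stat = np.mean(np.abs(samples) ** 2)
    # Noise-only statistic: mean = noise_var, std ~= noise_var / sqrt(n).
    threshold = noise_var * (1.0 + norm.isf(pfa) / np.sqrt(n))
    return test_stat > threshold

rng = np.random.default_rng(1)
n, noise_var = 4096, 1.0
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(noise_var / 2)
weak_primary = 0.3 * np.exp(2j * np.pi * 0.01 * np.arange(n))  # about -10.5 dB SNR

print(energy_detect(noise, noise_var))                 # False (false alarms ~ pfa)
print(energy_detect(noise + weak_primary, noise_var))  # True: energy rise detected
```

Note that the detector needs the noise variance: in practice, noise uncertainty limits how weak a primary signal energy detection can reliably find, which is one motivation for the cooperative schemes discussed next.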
|
A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Interference Mitigation and Spectrum Sharing. <s> In the unlicensed spectrum, any device is free to transmit without a government license that implies exclusive access. Such spectrum has significant benefits, but serious challenges must first be overcome. Foremost is the risk of drastic performance degradation and inefficient spectrum utilization, due to a lack of incentive to conserve shared spectrum. Previous work has shown this problem to be a real possibility. This paper demonstrates that the solution lies in proper regulation of access to unlicensed spectrum and its usage. We present a choice of potential solutions that vary in the degree to which they solve the problem, and in their impact on performance. <s> BIB001 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Interference Mitigation and Spectrum Sharing. <s> Under the current system of spectrum allocation, rigid partitioning has resulted in vastly underutilized spectrum bands, even in urban locales. Cognitive radios have been proposed as a way to reuse this underutilized spectrum in an opportunistic manner. To achieve this reuse while guaranteeing non-interference with the primary user, cognitive radios must detect very weak primary signals. However, uncertainties in the noise+interference impose a limit on how low of a primary signal can be robustly detected. ::: In this paper, we show that the presence/absence of possible interference from other opportunistic spectrum users represents a major component of the uncertainty limiting the ability of a cognitive radio network to reclaim a band for its use. Coordination among nearby cognitive radios is required to control this uncertainty. While this coordination can take a form similar to a traditional MAC protocol for data communication, its role is different in that it aims to reduce the uncertainty about interference rather than just reducing the interference itself. ::: We show how the degree of coordination required can vary based on the coherence times and bandwidths involved, as well as the complexity of the detectors themselves. The simplest sensing strategies end up needing the most coordination, while more complex strategies involving adaptive coherent processing and interference prediction can be individually more robust and thereby reduce the need for coordination across different networks. We also show the existence of a coordination radius wall which limits secondary user densities that can be supported irrespective of coordination involved. Furthermore, local cooperation among cognitive radios for collective decision making can reduce the fading margins we need to budget for. This cooperation benefits from increased secondary user densities and hence induces a minima in the power-coordination tradeoff. <s> BIB002 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Interference Mitigation and Spectrum Sharing. <s> One of the reasons for the limitation of bandwidth in current generation wireless networks is the spectrum policy of the Federal Communications Commission (FCC). But, with the spectrum policy reform, open spectrum wireless networks, and spectrum agile radios are set to drive next general wireless networks. In this paper, we investigate continuous-time Markov models for dynamic spectrum access in open spectrum wireless networks. Both queueing and no queueing cases are considered. 
Analytical results are derived based on the Markov models. A random access protocol is proposed that is shown to achieve airtime fairness. A distributed version of this protocol that uses only local information is also proposed based on homo egualis anthropological model. Inequality aversion by the radio systems to achieve fairness is captured by this model. These protocols are then extended to spectrum agile radios. Extensive simulation results are presented to compare the performances of fixed versus agile radios. <s> BIB003 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Interference Mitigation and Spectrum Sharing. <s> The interference temperature model was proposed by the FCC in 2003 as a way to dynamically manage and allocate spectrum resources. It would allow unlicensed radios to sense their current RF environment and transmit in licensed bands, provided their transmission does not raise the interference temperature for that frequency band over the interference temperature limit. It never received much interest because nobody was sure exactly how to use it or how if it would work. This research focuses on a mathematical analysis of the interference temperature model in an effort to examine the relationships between the capacity achieved by the unlicensed network and the interference caused to the licensed network. We develop a model for the RF environment and determine probability distributions governing interference temperature as a function of various elements in the model. We then determine bounds on the amount of interference caused by implementing such a system. We examine model environments for a wireless WAN and a wireless LAN, each coexisting with a licensed carrier. For each, we quantify both the impact on the licensed signal and also the capacity achieved by our underlay network. By substituting numeric values for RF environments in which the interference temperature model might be applied, we show that achievable capacity is very small, while the impact the licensee can be very large. Based on this, we propose alternate usages for interference temperature and ways to boost capacity. <s> BIB004 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Interference Mitigation and Spectrum Sharing. <s> Cognitive radio has been recently proposed as a promising technology to improve the spectrum utilization efficiency by intelligently sensing and accessing some vacant bands of licensed users. In this paper, we consider the coexistence between a cognitive radio and a licensed user in order to enhance the spectrum efficiency. We develop an approach to allow the cognitive radio to operate in the presence of the licensed user. In order to minimize the interference to the licensed user, the transmit power of the cognitive radio is controlled by using the side information of spectrum sensing. Numerical results will show that the quality of service for the licensed user can be guaranteed in the presence of the cognitive radio by the proposed approach. <s> BIB005 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Interference Mitigation and Spectrum Sharing. <s> In cognitive radio networks, cognitive (unlicensed) users need to continuously monitor spectrum for the presence of primary (licensed) users. In this paper, we illustrate the benefits of cooperation in cognitive radio. 
We show that by allowing the cognitive users operating in the same band to cooperate we can reduce the detection time and thus increase the overall agility. We first consider a two-user cognitive radio network and show how the inherent asymmetry in the network can be exploited to increase the agility. We show that our cooperation scheme increases the agility of the cognitive users by as much as 35%. We then extend our cooperation scheme to multicarrier networks with two users per carrier and analyze asymptotic agility gain. In Part II of our paper [1], we investigate multiuser single carrier networks. We develop a decentralized cooperation protocol which ensures agility gain for arbitrarily large cognitive network population. <s> BIB006 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Interference Mitigation and Spectrum Sharing. <s> In cognitive radio networks, the secondary users can use the frequency bands when the primary users are not present. Hence secondary users need to constantly sense the presence of the primary users. When the primary users are detected, the secondary users have to vacate that channel. This makes the probability of detection important to the primary users as it indicates their protection level from secondary users. When the secondary users detect the presence of a primary user which is in fact not there, it is referred to as false alarm. The probability of false alarm is important to the secondary users as it determines their usage of an unoccupied channel. Depending on whose interest is of priority, either a targeted probability of detection or false alarm shall be set. After setting one of the probabilities, the other can be optimized through cooperative sensing. In this paper, we show that cooperating all secondary users in the network does not necessary achieve the optimum performance, but instead, it is achieved by cooperating a certain number of users with the highest primary user's signal to noise ratio. Computer simulations have shown that the Pd can increase from 92.03% to 99.88% and Pf can decrease from 6.02% to 0.06% in a network with 200 users. <s> BIB007 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Interference Mitigation and Spectrum Sharing. <s> Cooperative spectrum sensing has been shown to greatly improve the sensing performance in cognitive radio networks. However, if the cognitive users belong to different service providers, they tend to contribute less in sensing in order to achieve a higher throughput. In this paper, we propose an evolutionary game framework to study the interactions between selfish users in cooperative sensing. We derive the behavior dynamics and the stationary strategy of the secondary users, and further propose a distributed learning algorithm that helps the secondary users approach the Nash equilibrium with only local payoff observation. Simulation results show that the average throughput achieved in the cooperative sensing game with more than two secondary users is higher than that when the secondary users sense the primary user individually without cooperation. <s> BIB008 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Interference Mitigation and Spectrum Sharing. <s> A critical problem in open-spectrum communications is fairness with respect to the coexistence of heterogeneous systems with different resource units and traffic models. 
In addition, the sensing performances of different systems can also lead to unfair resource utilization between systems. To address this problem, we derive a continuous-time Markov chain model to show the effect of sensing performance on system coexistence. The analysis derived from this model is then used as the basis for a sensing threshold control (STC) scheme to achieve fairness. The proposed STC determines the sensing threshold for each system as a way of balancing resource utilization among systems, while guaranteeing target detection probability. Numerical results on the amount of resource utilization by each system demonstrate that the proposed STC achieves a full degree of fairness. <s> BIB009 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Interference Mitigation and Spectrum Sharing. <s> In this paper, a general framework for performance evaluation of cooperative spectrum sensing methods over realistic propagation environments is proposed. In particular, the framework accounts for correlated Log-Normal shadowing in both sensing and reporting channels, and yields simple and easy-to-use formulas for computing the Detection Probability of a distributed network of secondary users using Amplify and Forward (AF) relying for data reporting to the fusion center. Numerical results are also shown to substantiate the accuracy of the proposed framework. <s> BIB010 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Interference Mitigation and Spectrum Sharing. <s> We consider energy detection based spectrum sensing for opportunistic SU (Secondary User) transmissions in cognitive radio networks. Due to the time-varying nature of wireless fading channels and PU (Primary User) activities, the instantaneous SINR (Signal to Interference plus Noise Ratio) at the SU receiver changes from slot to slot in a time-slotted system. Unlike the conventional energy detector which uses a fixed value of energy threshold to detect the PU's occurrence, we let the SU transmitter dynamically adjust the threshold according to the instantaneous SINR. Under the constraint of limiting the average interference to the PU within a target level, the objective is to maximize the SU's average transmission rate and throughput. Our task is to determine a proper policy function for threshold control, which formulates the value of the threshold as a function of the SINR to achieve the above objective. In particular, we consider a linear policy function, which allows a higher threshold and thus more aggressive SU transmissions under a larger SINR. Simulation results show that the SU's average transmission rate can be significantly improved using the optimized policy function. <s> BIB011 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Interference Mitigation and Spectrum Sharing. <s> This work focus on a coexistence study between wireless microphone systems and secondary users of the TV White Spaces, using a Monte-Carlo methodology. Exclusion areas around wireless microphone receivers, for co-channel and adjacent channel interference, are computed, considering indoor and outdoor scenarios. Using this methodology, impact and tendencies of several parameters over the probability of interference are analyzed, like spectral channel spacing, separation distance and propagation scenario. 
As an example, for outdoor scenarios, the spectral spacing between the primary system and secondary users, ranging from 0 MHz (co-channel operation) to 16 MHz (2 DVB-T channels), results in protection distances of 13.9 km and 2.2 km, respectively. <s> BIB012
|
Interference mitigation is a very challenging issue, especially in areas where the availability of channels is limited or where the coverage areas of different networks overlap. This is further accentuated by the good propagation characteristics of TVWS signals. In environments where heterogeneous networks coexist in TVWS, there are two types of interference that need to be addressed: (i) interference to and from primary users, and (ii) interference among secondary devices or networks. To ensure protection of primary users and measure the interference level, the FCC Spectrum Policy Task Force has proposed a new metric named interference temperature . Interference temperature is the level of RF power measured at the receiving antenna per unit bandwidth: T_I(f, B) = P_I(f, B)/(kB), where P_I(f, B) is the interference power (in Watts) for frequency f and bandwidth B (in Hz), while k = 1.38 × 10^-23 Joules per degree Kelvin is the Boltzmann constant. For a specific location and frequency band, the FCC has also established the interference temperature limit, which should not be exceeded by secondary users when they are allowed to operate simultaneously with the primary user BIB004 . The configuration of the interference temperature limit is further discussed in . Interference from primary users to secondary ones, on the other hand, results from the high transmission power of primary users, e.g., TV stations. In addition to causing interference that will invariably degrade the performance of secondary users, it may also hinder secondary users from detecting the location of primary receivers. To tackle primary/secondary interference there are two types of interference mitigation techniques: interference avoidance and interference control. With interference avoidance, primary and secondary users are not allowed to use the same channel at the same time or on the same frequency; in order to coexist they must detect spectrum gaps and then employ time or frequency separation, i.e., TDMA or FDMA. Using interference control, primary and secondary users can coexist in the same time or frequency if they follow specific coexistence requirements, such as limits on the allowed level of interference, which will guarantee QoS (Quality of Service) for both types of users. While the interference to the primary user has been widely investigated, the interference to secondary users from primary ones, as well as the aggregated interference among secondary users themselves, has not gained as much attention. In most cases the secondary users are assumed to be idle or their degradation in performance is not accounted for. Interference among secondary devices becomes a challenging issue as the number of secondary users/networks trying to access the spectrum opportunistically increases. Such interference may also affect primary signal detection, as shown in BIB002 . The problem is worsened in areas with limited spectrum availability, where many devices might choose the same channel or will have to operate on adjacent or co-channels. It was shown in BIB012 that co-channel interference is avoided by increasing the distance between primary and secondary users, whereas adjacent channel interference is mitigated by separating the operating frequencies of devices by at least three adjacent channels.
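To make the interference temperature metric concrete, the sketch below evaluates T_I(f, B) = P_I(f, B)/(kB) for an illustrative measurement; the numeric values are hypothetical and are not regulatory limits.

```python
BOLTZMANN_K = 1.38e-23  # Joules per degree Kelvin

def interference_temperature(p_interference_watts, bandwidth_hz):
    # T_I(f, B) = P_I(f, B) / (k * B), expressed in Kelvin.
    return p_interference_watts / (BOLTZMANN_K * bandwidth_hz)

# Hypothetical reading: -110 dBm of interference power over a 6 MHz TV channel.
p_watts = 10 ** ((-110 - 30) / 10)             # dBm -> Watts
print(interference_temperature(p_watts, 6e6))  # ~120.8 K
# A secondary transmission would be admissible only while this value
# stays below the interference temperature limit set for the band.
```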
To enable self-coexistence among secondary networks/devices, several parameters can be adjusted: transmit power, to account for the different power levels of different devices; SINR (Signal to Interference plus Noise Ratio), to estimate the PER (Packet Error Rate); bandwidth; and an adaptive receiver threshold BIB011 . Self-coexistence is also ensured through spectrum sharing and management among different wireless technologies. Because these technologies are generally expected to have different communication characteristics, this poses an important challenge for cognitive radio networks in TVWS. Based on access priorities, spectrum sharing among heterogeneous wireless systems is classified into two groups: open spectrum sharing and hierarchical spectrum sharing. With open spectrum sharing, every system, both primary and secondary, has the same priority for accessing the spectrum BIB003 BIB009 . Since in this type of spectrum sharing heterogeneous systems coexist without centralized coordination, a spectrum access etiquette has been proposed to mitigate the interference and provide fairness among users BIB001 . By contrast, hierarchical spectrum sharing is when primary users always have priority for spectrum access, while secondary users need to make sure that the interference caused to primary users is not harmful before accessing the spectrum. Based on the impact of interference, hierarchical spectrum sharing is divided into two groups: underlay and overlay spectrum sharing . Underlay spectrum sharing is when the interference caused by a secondary user to a primary receiver is below a predefined threshold (a minimal admission check of this kind is sketched after this paragraph). Since the interference in this case is not harmful, the secondary user will be allowed to operate even if the primary user is active. To make this possible, the secondary user must have the channel gain information between its transmitter and the primary receiver . Different interference measurement schemes have been proposed in BIB002 BIB005 . On the other hand, in overlay spectrum sharing, a secondary user may transmit only if the primary user is not active at that time, which is referred to as the idle period . To detect this idle period, the secondary user needs to sense the spectrum; sensing techniques are discussed later in the paper. Based on their ability and willingness to collaborate, there are two possible ways for different networks to access the spectrum: coexistence schemes are based on either a cooperative or a noncooperative method . The cooperative method means that there has to be cooperation and communication between devices or networks that are sharing the spectrum and are within each other's interference range. Cooperative methods are normally based on the ability to exchange information between networks of similar or different types. This method overcomes the hidden node problem, as all the networks are aware of each other's geographical positions. The idea of using relays to pass information among cognitive users that operate in the same band, using an amplify-and-forward protocol, was introduced in BIB006 . For this method there are different mechanisms that can be used, such as TDMA (Time-Division Multiple Access), FDMA (Frequency-Division Multiple Access) and CDMA (Code-Division Multiple Access).
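The underlay admission rule mentioned above can be sketched in a few lines; the function names, the path-loss figure and the interference cap below are all hypothetical, chosen only to show how the channel gain toward the primary receiver bounds the secondary transmit power.

```python
def underlay_admission(p_tx_watts, gain_to_primary, i_threshold_watts):
    # Underlay rule: the secondary link may operate while the interference
    # it creates at the primary receiver stays below the threshold.
    return p_tx_watts * gain_to_primary <= i_threshold_watts

def max_underlay_power(gain_to_primary, i_threshold_watts):
    # Largest transmit power that still satisfies the constraint.
    return i_threshold_watts / gain_to_primary

# Hypothetical link: 100 dB path loss toward the primary receiver and a
# -110 dBm interference cap permit at most 1e-4 W (-10 dBm) of transmit power.
gain = 10 ** (-100 / 10)
cap = 10 ** ((-110 - 30) / 10)
print(underlay_admission(1e-4, gain, cap))  # True (exactly at the cap)
print(max_underlay_power(gain, cap))        # 1e-4
```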
However, considering that the spectrum might be shared between heterogeneous networks that have different operational characteristics and requirements, such as frame rate, guard bands and power allocation, there are many challenges in implementing these techniques. Because of this, adopting a cooperative method for all secondary users may not be very useful, as shown in BIB007 . The performance of the cooperative spectrum sharing method in a more realistic propagation environment is investigated in BIB010 . Another major drawback of the cooperative sensing method is the large amount of information that needs to be exchanged between secondary users, which induces high overhead. To deal with the problem of overhead, the GUESS protocol was introduced in . In noncooperative methods, different networks make decisions based on their own observations BIB008 . Different strategies are used for these methods, such as DFS (Dynamic Frequency Selection), DCS (Dynamic Channel Selection), power control, listen-before-talk and energy detection thresholds. Even though this strategy is cheaper and easier to implement, it does not always give the best network performance in terms of throughput and fairness among networks and users.
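The noncooperative strategies just listed can be condensed into a short sketch (ours, not from the cited works) of listen-before-talk dynamic channel selection: a network relies only on its own energy measurements and picks the quietest channel below a busy threshold. All values are illustrative.

```python
def select_channel(energy_readings, busy_threshold):
    # Noncooperative dynamic channel selection: rely only on local
    # energy measurements and pick the quietest idle channel.
    idle = [ch for ch, e in enumerate(energy_readings) if e < busy_threshold]
    if not idle:
        return None  # listen-before-talk found no free channel: back off
    return min(idle, key=lambda ch: energy_readings[ch])

readings = [0.9, 1.6, 0.4, 1.1]  # hypothetical per-channel energy levels
print(select_channel(readings, busy_threshold=1.0))  # -> 2
```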
|
A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Self-Coexistence Decision-Making Mechanisms for CR Networks in TVWS <s> We address the problem of coexistence among wireless networks in TV white space. We present a standard independent framework to enable exchange of information relevant for coexistence based on two mechanisms: centralized and distributed. Both mechanisms introduce the use of multiradio cluster-head equipment (CHE) as a physical entity that acquires relevant information, identifies coexistence opportunities, and implements autonomous coexistence decisions. The major conceptual difference between them lies in the fact that the centralized mechanism utilizes coexistence database(s) as a repository for coexistence related information, which CHEs need to access before making coexistence decisions. On the other hand, the distributed mechanism utilizes a broadcast channel to distribute beacons and directly convey coexistence information between CHEs. Furthermore, we give a concise overview of the current activities in international standardization bodies toward the realization of communications in TVWS along with measures taken to provide coexistence between secondary cognitive networks. <s> BIB001 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Self-Coexistence Decision-Making Mechanisms for CR Networks in TVWS <s> With the development of dynamic spectrum access technologies, such as cognitive radio, the secondary use of underutilized TV broadcast spectrum has come a step closer to reality. Recently, a number of wireless standards that incorporate CR technology have been finalized or are being developed to standardize systems that will coexist in the same TV white spaces. In these wireless standards, the widely studied problem of primary-secondary network coexistence has been addressed by the use of incumbent geolocation databases augmented with spectrum sensing techniques. However, the challenging problem of secondary-secondary coexistence, in particular heterogeneous secondary coexistence, has garnered much less attention in the standards and related literature. The coexistence of heterogeneous secondary networks poses challenging problems due to a number of factors, including the disparity of PHY/MAC strategies of the coexisting systems. In this article, we discuss the mechanisms that have been proposed for heterogeneous coexistence, and propose a taxonomy of those mechanisms targeting TVWSs. Through this taxonomy, our aim is to offer a clear picture of the heterogeneous coexistence issues and related technical challenges, and shed light on the possible solution space. <s> BIB002
|
Without the use of coexistence mechanisms, the utilization of the TVWS spectrum will be significantly reduced. It was shown in BIB001 that, in the absence of such mechanisms, 92% of the available spectrum is overlapped by neighboring networks. Based on the proposed architecture BIB002 , coexistence mechanisms are classified into three groups: centralized, distributed and autonomous mechanisms. The difference among these coexistence mechanisms lies in where the coexistence decision is made. (1) Centralized mechanisms - in order to mitigate the interference, these mechanisms use a database in which all coexistence information is collected and stored centrally. Then, to pass the information to users, internetwork coordination channels are used BIB001 . However, this solution is costly and also ineffective when there are many coexisting devices, or networks that do not want to be part of a centralized control system. (2) Distributed mechanisms - an internetwork coordination channel is proposed so there is no need for a central coexistence infrastructure. All the decisions regarding interference mitigation are made individually by each network or device, and then the information is passed to others through control channels. This solution also incurs communication overhead and depends on the willingness of the networks to exchange information. Furthermore, it relies on the existence of a common control channel and assumes that all coexisting networks use the same access technology in order to be able to decode each other's messages. (3) Autonomous mechanisms - there is no internetwork coordination channel or central infrastructure available. All the decisions for channel selection and interference mitigation are made based only on individual observations. Possible techniques used in this case are the dynamic frequency/channel allocation technique and listen-before-talk. Even though these types of mechanisms are easy and cheap to implement, they do not give good network performance: because each system aims at blindly maximizing its own performance, the internetwork interference severely degrades the overall network performance. The different coexistence mechanisms are presented in Figure 2 . Most proposed solutions follow the centralized or distributed approach, with only a handful of autonomous approaches available. In the following subsections, we start by introducing the IEEE 802.19.1 standard, and then we list and compare some of the CDM solutions proposed in the literature.
|
A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Signal-Based Schemes Tackling the Hidden Terminal Problem. <s> In this paper, we investigate the coexistence problem between the 802.22 and the 802.11af systems in the TV White Spaces (TVWS). We focus on the design of a co-channel coexistence scheme for the 802.22 customer-premises equipments (CPE) and the 802.11af systems. 802.22 and 802.11af are two typical standards envisioned to be widely adopted in the future. However, these two standards are heterogeneous in both power level and PHY/MAC design, making their coexistence challenging. To avoid mutual interference between the two systems, existing solutions have to allocate different channels for the two networks. Due to the city-wide coverage of the 802.22 base station (BS), the spectrum utilization is compromised with existing schemes. In this paper, we first identify the challenges to enable the co-channel coexistence of the 802.22 and the 802.11af systems and then propose a busy-tone based framework. We design a busy-tone for the 802.22 CPEs to exclude the hidden 802.11af terminals. We also show that it is possible for the 802.11af systems to identify the exposed 802.22 CPE transmitters and conduct successful transmissions under interference. We show through extensive simulations that the spectrum utilization can be increased with the proposed co-channel coexistence scheme. <s> BIB001 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Signal-Based Schemes Tackling the Hidden Terminal Problem. <s> Cognitive radio (CR) technologies have led to several wireless standards (e.g., IEEE 802.11af and IEEE 802.22) that enable secondary networks to access the TV white-space (TVWS) spectrum. Different unlicensed wireless technologies with different PHY/MAC designs are expected to coexist in the same TVWS spectrum-we refer to such a situation as heterogeneous coexistence. The heterogeneity of the PHY/MAC designs of coexisting CR networks can potentially exacerbate the hidden terminal problem. This problem cannot be addressed by the conventional handshaking/coordination mechanism between two homogeneous networks employing the same radio access technology. In this paper, we present a coexistence protocol, called Spectrum Sharing for Heterogeneous Coexistence (SHARE), that mitigates the hidden terminal problem for the coexistence between two types of networks: one that employs a time-division-multiplexing-based MAC protocol and one that employs a carrier-sense-multiple-access-based MAC protocol. Specifically, SHARE utilizes beacon transmissions and dynamic quiet periods to avoid packet collisions caused by the hidden terminals. Our analytical and simulation results show that SHARE reduces the number of packet collisions and guarantees weighted fairness in partitioning the spectrum among the coexisting secondary networks. <s> BIB002
|
A partially distributed scheme for spectrum sharing in TVWS, using beacon signals, is proposed in BIB002 . The work focuses specifically on the coexistence problem between TDM-based and CSMA-based MAC networks. As the authors underline, the fact that the two networks use different MAC protocols poses serious challenges for spectrum sharing. The proposed solution, titled SHARE, targets the problem of hidden terminals, which is particularly apparent when heterogeneous networks cohabit the same spectrum space. There are two types of collisions that can occur due to the hidden terminal problem: collisions at TDM receivers caused by hidden CSMA transmitters, and vice versa. To mitigate the first group of collisions, the algorithm utilizes beacon signals to prevent CSMA transmitters from accessing the shared channel. To mitigate the other group of collisions, a dynamic quiet period is proposed for the TDM transmitters, to reduce the probability of collisions and ensure long-term fairness in spectrum sharing among coexisting networks. The authors assume the presence of an 802.19.1 controller that manages the coexistence for the TDM-based secondary networks, which are at all times registered with the 802.19.1 system and are completely synchronized with each other. A similar autonomous scheme is proposed in BIB001 for enabling coexistence between IEEE 802.11af and 802.22 networks. The basic idea is to use the sensing antenna available at the 802.22 receiver (which normally remains unused during the reception period) to send out a busy tone in order to protect its communications from hidden 802.11af terminals. The busy tone, a constant signal transmitted at the same power level as an 802.11af signal, is transmitted by the 802.22 receiver while it simultaneously receives data from the 802.22 transmitter. The scheme's goal is to protect the communications within the 802.22 network, but it does not address the reverse problem or the fairness achieved during channel access. Furthermore, the authors assume that all 802.22 devices, both the base stations and the mobile users, are equipped with two antennas, one of which is used exclusively for sensing. The problem of continuous primary user monitoring is largely ignored in both approaches, and the authors in BIB002 explicitly assume that the secondary networks obtain the list of available channels from a TVWS database via the 802.19.1 air interface. Therefore, while it is indeed a partially autonomous algorithm, its performance relies heavily on centralized exchange of information. On the other hand, due to the asymmetric transmit powers, neither scheme is able to ensure fairness for the low-power 802.11 networks.
|
A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Distributed and Decentralized Approaches for SelfCoexistence of CR Networks. <s> The depletion of usable radio frequency spectrum has stimulated increasing interest in dynamic spectrum access technologies, such as cognitive radio (CR). In a scenario where multiple co-located CR networks operate in the same swath of white-space (or unlicensed) spectrum with little or no direct coordination, co-channel self-coexistence is a challenging problem. In this paper, we focus on the problem of spectrum sharing among coexisting CR networks that employ orthogonal frequency division multiple access (OFDMA) in their uplink and do not rely on inter-network coordination. An uplink soft frequency reuse (USFR) technique is proposed to enable globally power-efficient and locally fair spectrum sharing. We frame the self-coexistence problem as a non-cooperative game. In each network cell, uplink resource allocation (URA) problem is decoupled into two subproblems: subchannel allocation (SCA) and transmit power control (TPC). We provide a unique optimal solution to the TPC subproblem, while presenting a low-complexity heuristic for the SCA subproblem. After integrating the SCA and TPC games as the URA game, we design a heuristic algorithm that achieves the Nash equilibrium in a distributed manner. In both multi-operator and single-operator coexistence scenarios, our simulation results show that USFR significantly improves self-coexistence in spectrum utilization, power consumption, and intra-cell fairness. <s> BIB001 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Distributed and Decentralized Approaches for SelfCoexistence of CR Networks. <s> Very recently, regulatory bodies worldwide have approved dynamic access of unlicensed networks to the TV white space (TVWS) spectrum. Hence, in the near future, multiple heterogeneous and independently operated unlicensed networks will coexist within the same geographical area over shared TVWS. Although heterogeneity and coexistence are not unique to TVWS scenarios, their distinctive characteristics pose new and challenging issues. In this paper, the problem of the coexistence interference among multiple heterogeneous and independently operated secondary networks (SNs) in the absence of secondary cooperation is addressed. Specifically, the optimal coexistence strategy, which adaptively and autonomously selects the channel maximizing the expected throughput in the presence of coexistence interference, is designed. More in detail, at first, an analytical framework is developed to model the channel selection process for an arbitrary SN as a decision process. Then, the problem of the optimal channel selection, i.e., the channel maximizing the expected throughput, is proved to be computationally prohibitive (NP-hard). Finally, under the reasonable assumption of identically distributed interference on the available channels, the optimal channel selection problem is proved not to be NP-hard, and a computationally efficient (polynomial-time) algorithm for finding the optimal strategy is designed. Numerical simulations validate the theoretical analysis. <s> BIB002 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Distributed and Decentralized Approaches for SelfCoexistence of CR Networks. 
<s> This paper focuses on coexistence and self-coexistence challenges between secondary heterogeneous wireless networks/users sharing TV Whitespace spectrum. The coexistence problems arise from having several primary and secondary networks of different technologies cohabiting the same licensed spectrum simultaneously. The self-coexistence problems arise from many secondary systems/users coexisting at the same place while using identical or different technologies. In particular, fair distribution of available spectrum becomes a serious issue. In this work we use a game theoretic approach to model the self-coexistence problem as a competitive game between secondary networks. We show that our game belongs to the class of congestion-averse games which are known to possess pure Nash Equilibria. This leads us to a decentralized approach for spectrum sharing among systems with different PHY/MAC characteristics. We show that our proposal outperforms other centralized algorithms in terms of user fairness and per-user theoretical data rates. <s> BIB003
|
A game theoretic approach for solving the coexistence problem between cognitive radio networks sharing the same spectrum in the uplink is proposed by Gao et al. in BIB001 . The problem is formulated as an uplink channel allocation problem, which is further subdivided into two subproblems: the subchannel allocation problem and the transmit power allocation problem. The cognitive radio networks, which in the worst case scenario are controlled by different operators, participate in a noncooperative game in which each CR network independently selects the channels it will use in a way that maximizes its own utility. The game is played in two levels, where players first solve the subchannel allocation game and then, in the second level, solve the transmission power problem. The authors note that the second-level game has a Nash equilibrium, which can be reached using algorithms such as iterative water-filling. The subchannel allocation game, on the other hand, does not possess the properties which guarantee a Nash equilibrium. The authors propose a practical heuristic algorithm, which does not reach the global optimum but is more efficient, as it does not require global knowledge about all the cells operating in the same space. However, the authors address the coexistence problem only in the uplink and on a single-channel basis, assuming all other channels have the same characteristics. Furthermore, they do not consider the presence of heterogeneous networks but rather assume that all secondary networks are of the same type, belonging to different operators. Therefore, issues arising due to differences in MAC/PHY layers are not addressed. An interference avoidance strategy is proposed in BIB002 , which aims to adaptively and autonomously enable the CR networks to select the channels that maximize throughput in the presence of coexistence interference. The coexistence problem is formulated as an optimal sensing sequence and optimal stopping rule optimization problem, with the objective of maximizing the expected reward, i.e., the average throughput achievable by the secondary user in a given time-slot. The algorithm is attractive because it features no cooperation overhead among the various networks, i.e., each network independently selects the TVWS channels to use. However, some of the assumptions are too simplistic, such as the hypothesis of identically and independently distributed coexistence interference levels. Furthermore, the authors compare the performance of the algorithm only to a simple sense-before-talk algorithm in terms of expected average throughput, while the complexity of the algorithm is compared to an exhaustive search solution, which is known to have excessive computational times, especially when considering a higher number of available channels. In BIB003 the authors propose a decentralized algorithm to address the problem of self-coexistence in TVWS between secondary networks of three different types: IEEE 802.22, IEEE 802.11af and IEEE 802.15. They consider the usage of independent mechanisms where there is no central manager for decision-making, no database for information queries and storage, and no common physical communication channel between the networks for information exchange. Self-coexistence and interference mitigation are ensured based only on the individual observations of the secondary users, which means that there is no need to synchronize and coordinate between networks to reach a fair solution.
The SCDM algorithm is based on congestion-averse games (CAG) for self-coexistence decision-making in TVWS and addresses the challenges of self-coexistence in terms of fairness and efficiency of resource allocation. For comparison with centralized solutions, the same game is also solved in a centralized manner, where the authors assume the presence of a controller with global knowledge that applies the CAG algorithm on behalf of the networks.
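To give a flavor of why such games are attractive, the sketch below runs best-response dynamics in a plain congestion game, a simpler cousin of the congestion-averse games used by SCDM: each network's cost on a channel is the number of networks sharing it, and iterated best responses settle into a pure Nash equilibrium (here, a balanced channel split).

```python
def best_response_dynamics(n_players, n_channels, max_rounds=100):
    # Each player repeatedly moves to the least-congested channel;
    # congestion games guarantee convergence to a pure Nash equilibrium.
    choice = [0] * n_players  # all networks start on channel 0
    for _ in range(max_rounds):
        changed = False
        for p in range(n_players):
            load = [choice.count(c) for c in range(n_channels)]
            load[choice[p]] -= 1  # exclude the player's own contribution
            best = min(range(n_channels), key=lambda c: load[c])
            if load[best] < load[choice[p]]:
                choice[p] = best
                changed = True
        if not changed:  # no player can improve unilaterally
            return choice
    return choice

print(best_response_dynamics(n_players=5, n_channels=3))  # e.g. a 2/2/1 split
```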
|
A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Centralized Approaches for Self-Coexistence of CR Networks. <s> This paper focuses on the problem of spectrum sharing between secondary networks that access spectrum opportunistically in TV spectrum. Compared to the coexistence problem in the ISM (Industrial, Scientific and Medical) bands, the coexistence situation in TV whitespace (TVWS) is potentially more complex and challenging due to the signal propagation characteristics in TVWS and the disparity of PHY/MAC strategies employed by the systems coexisting in it. In this paper, we propose a novel decision making algorithm for a system of coexistence mechanisms, such as an IEEE 802.19.1-compliant system, that enables coexistence of dissimilar TVWS networks and devices. Our algorithm outperforms existing coexistence decision making algorithms in terms of fairness, and percentage of demand serviced. <s> BIB001 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Centralized Approaches for Self-Coexistence of CR Networks. <s> Licensed white space channels can now be used opportunistically by unlicensed users, provided the channels are relinquished when needed by the primary users. In order to maximize their potential, these channels need to be assigned to the secondary users in an efficient manner. The protocols to enable such an assignment need to simultaneously aim for fairness, high throughput, low overhead, and low rate of channel reconfigurations. One way of channel assignment is to allow neighboring access points (APs) to operate on the same channel. However, if not done properly, this may increase the number of collisions resulting in lower throughput. In this paper, we present a new channel assignment algorithm that performs controlled channel sharing among neighboring APs that increases not only the fairness but also the total throughput of the APs. Controlled sharing and assignment of channels leads to a new problem that we call as the Shared Coloring Problem. We design a protocol based on a centralized algorithm, called Share, and its localized version, lShare that work together to meet the objectives. The algorithm has tight bounds on fairness and it provides high system throughput. We also show how the 802.22 MAC layer protocol for wireless regional area networks (WRANs) can be modified considering the typical case of low degree of interference resulting from the operations of Share and lShare. Results from extensive ns-3 simulations based on data traces show that our protocol increases the minimum throughput among all APs by at least 58 percent when compared to the baseline algorithms. <s> BIB002 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> Centralized Approaches for Self-Coexistence of CR Networks. <s> Channel sharing in TV whitespace (TVWS) is challenging because of signal propagation characteristics and diversity in network technologies employed by secondary networks coexisting in TVWS. In this paper, the TVWS sharing problem is modeled as a multiobjective optimization problem, where each objective function tackles an important coexisting requirement, such as interference and disparity in network technologies. We propose an evolutionary algorithm that shares the TVWS among coexisting networks taking care of their channel occupancy requirements. 
In this paper, the channel occupancy is defined as the time duration a network desires to radiate on a channel to achieve its desired duty cycle. Simulation results show that the proposed algorithm outperforms existing TVWS sharing algorithms regarding allocation fairness and the fraction of channel occupancy requirements of the coexisting networks. <s> BIB003
|
In BIB001 the authors propose a centralized algorithm that deals with the problem of spectrum sharing among secondary networks and compare the results to other CDM algorithms that are specified in the IEEE 802.19.1 standard. The algorithm is called Fair Algorithm for Coexistence decision-making in TV whitespace (FACT). Constraints considered in the decision-making process are: contiguous channel allocation, interference, fairness, channel allocation invariability and transmission scheduling constraints. The results showed that the FACT algorithm outperforms two other algorithms in terms of overall system performance, measured by fairness and the percentage of demand serviced. However, being a centralized algorithm, there are evident drawbacks due to the amount of communication overhead and complexity. Indeed, the gains in performance compared to the other two algorithms come at the price of higher computational running time. Furthermore, the algorithm cannot guarantee fairness once the available channels are insufficient to accommodate the users' demands. The authors in BIB003 , on the other hand, formulate the coexistence problem as a multiobjective optimization problem and propose a centralized evolutionary algorithm that shares the TVWS among coexisting networks so that the allocation satisfies the channel occupancy requirements of each network. The objectives modeled include fairness, system throughput maximization and users' demand satisfaction. The authors compare the performance of their algorithm to two other centralized solutions, detailed in BIB001 BIB002 , and show that while their algorithm does not significantly improve system throughput and spectral efficiency, it ranks significantly higher in the fairness indicator, measured using the Jain index. While the authors show that the computation time is significantly shorter than that of the FACT algorithm, they do not address the overhead incurred, which is significant in both algorithms.
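Since both papers score fairness with the Jain index, it may help to recall how it is computed; the snippet below is a direct implementation of the standard formula J(x) = (Σ x_i)² / (n · Σ x_i²).

```python
def jain_index(allocations):
    # Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).
    # Ranges from 1/n (one user takes everything) to 1 (perfectly fair).
    n = len(allocations)
    return sum(allocations) ** 2 / (n * sum(x ** 2 for x in allocations))

print(jain_index([10, 10, 10, 10]))  # 1.0  (fully fair allocation)
print(jain_index([40, 0, 0, 0]))     # 0.25 (maximally unfair for n = 4)
```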
|
A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> The Comparative Analysis of Coexistence Algorithms in TV White <s> This paper focuses on the problem of spectrum sharing between secondary networks that access spectrum opportunistically in TV spectrum. Compared to the coexistence problem in the ISM (Industrial, Scientific and Medical) bands, the coexistence situation in TV whitespace (TVWS) is potentially more complex and challenging due to the signal propagation characteristics in TVWS and the disparity of PHY/MAC strategies employed by the systems coexisting in it. In this paper, we propose a novel decision making algorithm for a system of coexistence mechanisms, such as an IEEE 802.19.1-compliant system, that enables coexistence of dissimilar TVWS networks and devices. Our algorithm outperforms existing coexistence decision making algorithms in terms of fairness, and percentage of demand serviced. <s> BIB001 </s> A Survey on Coexistence in Heterogeneous Wireless Networks in TV White Spaces <s> The Comparative Analysis of Coexistence Algorithms in TV White <s> This paper focuses on coexistence and self-coexistence challenges between secondary heterogeneous wireless networks/users sharing TV Whitespace spectrum. The coexistence problems arise from having several primary and secondary networks of different technologies cohabiting the same licensed spectrum simultaneously. The self-coexistence problems arise from many secondary systems/users coexisting at the same place while using identical or different technologies. In particular, fair distribution of available spectrum becomes a serious issue. In this work we use a game theoretic approach to model the self-coexistence problem as a competitive game between secondary networks. We show that our game belongs to the class of congestion-averse games which are known to possess pure Nash Equilibria. This leads us to a decentralized approach for spectrum sharing among systems with different PHY/MAC characteristics. We show that our proposal outperforms other centralized algorithms in terms of user fairness and per-user theoretical data rates. <s> BIB002
|
Spaces. In this section we present a comparative analysis between some of the coexistence decision-making algorithms listed above, namely the FACT algorithm presented in BIB001 and the CAG algorithm presented in BIB002 . The algorithms are compared in terms of demand satisfaction, fairness, and achieved theoretical throughput. Firstly, the algorithms are compared on their ability to satisfy the bandwidth demand of the networks. We observe in Figure 4 (left) that the centralized CAG algorithm significantly outperforms the FACT algorithm, while the decentralized CAG algorithm has poorer performance but still performs better than FACT. Secondly, we evaluated the fairness of each algorithm. As shown in Figure 4 (right), the CAG game solved centrally performs best in terms of fairness, and even though the performance of FACT increases steadily as the number of available channels increases, it does not outperform the decentralized CAG. Lastly, the algorithms are compared based on the theoretical data rates obtained by each individual user. We observe in Figure 5 that, in general, the rates obtained with CAG are higher. These results imply that even when networks independently make decisions, the performance is remarkable, considering that the decentralized implementation does not require overhead or global knowledge. The results show that when the number of available channels is less than the demand of the CR networks, both cooperative and noncooperative methods are able to deliver similar performance in terms of throughput. Things change when the number of channels is increased beyond the number of channels required. The results show that the noncooperative method may deliver better results in terms of network throughput, because the centralized/cooperative methods require more computational time in order to decide which is the best channel allocation. We need to bear in mind that longer computational time limits the time available for users to transmit and receive information. On the other hand, if the number of devices requiring the spectrum is increased, the centralized/cooperative method has better performance compared to the noncooperative method in terms of fairness.
|
A survey on platforms for big data analytics <s> Peer-to-peer networks <s> Parallel computing is now popular and mainstream, but performance and ease of use remain elusive to many end-users. There exists a need for performance improvements that can be easily retrofitted to existing parallel applications. In this paper we present MPI process swapping, a simple performance enhancing add-on to the MPI programming paradigm. MPI process swapping improves performance by dynamically choosing the best available resources throughout application execution, using MPI process over-allocation and real-time performance measurement. Swapping provides fully automated performance monitoring and process management, and a rich set of primitives to control execution behavior manually or through an external tool. Swapping, as defined in this implementation, can be added to iterative MPI applications and requires as few as three lines of source code change. We verify our design for a particle dynamics application on desktop resources within a production commercial environment. <s> BIB001 </s> A survey on platforms for big data analytics <s> Peer-to-peer networks <s> Part I Peer-to-Peer: Notion, Areas, History and Future: What is this Peer-to-Peer about?- Past and Future.- Application Areas.- Part II Unstructured Peer-to-Peer Systems: First and Second Generation of Peer-to-Peer Systems.- Random Graphs, Small-Worlds and Scale-Free Networks.- Part III Structured Peer-to-Peer Systems: Distributed Hash Tables.- Selected DHT Algorithms.- Reliability and Load Balancing in DHTs.- P-Grid: Dynamics of Self-Organizing Processes in Structured P2P Systems.- Part IV Peer-to-Peer-Based Applications: Application-Layer Multicast.- ePost.- Distributed Computing - GRID Computing.- Web Services and Peer-to-Peer.- Part V Self-Organization: Characterization of Self-Organization.- Self-Organization in Peer-to-Peer Systems.- Part VI Search and Retrieval: Peer-to-Peer Search and Scalability.- Algorithmic Aspects of Overlay Networks.- Schema-Based Peer-to-Peer Systems.- Supporting Information Retrieval in Peer-to-Peer Systems.- Hybrid Peer-to-Peer Systems.- Part VII Peer-to-Peer Traffic and Performance Evaluation: ISP Platforms under a Heavy Peer-to-Peer Workload.- Traffic Characteristics and Performance Evaluation of Peer-to-Peer Systems.- Part VIII Peer-to-Peer in Mobile and Ubiquitous Environments: Peer-to-Peer in Mobile Environments.- Spontaneous Collaboration in Mobile P2P Networks.- Epidemic Data Dissemination for Mobile Peer-to-Peer Lookup Services.- Peer-to-Peer and Ubiquitious Computing.- Part IX Business Applications and Markets: Business Applications and Revenue Models.- Peer-to-Peer Market Management.- A Peer-to-Peer Framework for Electronic Markets.- Part X Advanced Issues: Security-Related Issues in Peer-to-Peer Networks.- Accounting in Peer-to-Peer Systems.- The PlanetLab Platform <s> BIB002 </s> A survey on platforms for big data analytics <s> Peer-to-peer networks <s> The term “peer-to-peer” (P2P) refers to a class of systems and applications that employ distributed resources to perform a critical function in a decentralized manner. With the pervasive deployment of computers, P2P is increasingly receiving attention in research, product development, and investment circles. This interest ranges from enthusiasm, through hype, to disbelief in its potential. 
Some of the benefits of a P2P approach include: improving scalability by avoiding dependency on centralized points; eliminating the need for costly infrastructure by enabling direct communication among clients; and enabling resource aggregation. This survey reviews the field of P2P systems and applications by summarizing the key concepts and giving an overview of the most important systems. Design and implementation issues of P2P systems are analyzed in general, and then revisited for each of the case studies described in Section 6. This survey will help people understand the potential benefits of P2P in the research community and industry. For people unfamiliar with the field it provides a general overview, as well as detailed case studies. It is also intended for users, developers, and information technologies maintaining systems, in particular comparison of P2P solutions with alternative architectures and <s> BIB003
|
Peer-to-Peer networks BIB003 BIB002 involve millions of machines connected in a network. It is a decentralized and distributed network architecture where the nodes in the network (known as peers) both serve and consume resources. It is one of the oldest distributed computing platforms in existence. Typically, the Message Passing Interface (MPI) is the communication scheme used in such a setup to communicate and exchange data between peers. Each node can store data instances, and the scale-out is practically unlimited (can be millions of nodes). The major bottleneck in such a setup arises in the communication between different nodes. Broadcasting messages in a peer-to-peer network is cheap, but the aggregation of data/results is much more expensive. In addition, the messages are sent over the network in the form of a spanning tree with an arbitrary node as the root, where the broadcasting is initiated. MPI, which is the standard software communication paradigm used in these networks, has been in use for several years and is well established and thoroughly debugged. One of the main features of MPI is its state preserving processes, i.e., processes can live as long as the system runs and there is no need to read the same data again and again, as in the case of other frameworks such as MapReduce (explained in section "Apache hadoop"). All the parameters can be preserved locally. Hence, unlike MapReduce, MPI is well suited for iterative processing BIB001 . Another feature of MPI is the hierarchical master/slave paradigm. When MPI is deployed in the master-slave model, the slave machine can become the master for other processes. This can be extremely useful for dynamic resource allocation, where the slaves have large amounts of data to process. MPI is available for many programming languages. It includes methods to send and receive messages and data. Other methods available with MPI are 'Broadcast', which is used to broadcast data or messages to all the nodes, and 'Barrier', which allows all the processes to synchronize and reach a certain point before proceeding further. Although MPI appears to be perfect for developing algorithms for big data analytics, it has some major drawbacks. One of the primary drawbacks is its lack of fault tolerance, since MPI has no mechanism to handle faults. When used on top of peer-to-peer networks, which run on completely unreliable hardware, a single node failure can cause the entire system to shut down. Users have to implement some kind of fault tolerance mechanism within the program to avoid such unfortunate situations. With other frameworks such as Hadoop (that are robust to faults) becoming widely popular, MPI is not as widely used anymore.
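The 'Broadcast' and 'Barrier' primitives described above look as follows in mpi4py, a common Python binding for MPI; the computation itself is a placeholder, and the script assumes it is launched under an MPI runtime (e.g., mpiexec -n 4 python script.py).

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Root initializes shared parameters; bcast sends them to every peer.
params = {"theta": [0.0, 0.0]} if rank == 0 else None
params = comm.bcast(params, root=0)

# Placeholder local work: each rank processes its own slice of the data.
local_update = sum(range(rank, 100, size))

# Barrier: all processes synchronize here before aggregation.
comm.Barrier()
total = comm.reduce(local_update, op=MPI.SUM, root=0)
if rank == 0:
    print("aggregated update:", total)
```

Note how state (params, local_update) would persist across iterations of a longer loop, which is exactly the property that makes MPI convenient for iterative algorithms; equally visible is the absence of any recovery path if one rank dies mid-run.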
|
A survey on platforms for big data analytics <s> MapReduce <s> MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper. ::: ::: Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system. ::: ::: Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented and upwards of one thousand MapReduce jobs are executed on Google's clusters every day. <s> BIB001 </s> A survey on platforms for big data analytics <s> MapReduce <s> A prominent parallel data processing tool MapReduce is gaining significant momentum from both industry and academia as the volume of data to analyze grows rapidly. While MapReduce is used in many areas where massive data analysis is required, there are still debates on its performance, efficiency per node, and simple abstraction. This survey intends to assist the database and open source communities in understanding various technical aspects of the MapReduce framework. In this survey, we characterize the MapReduce framework and discuss its inherent pros and cons. We then introduce its optimization strategies reported in the recent literature. We also discuss the open issues and challenges raised on parallel data analysis with MapReduce. <s> BIB002
|
The programming model used in Hadoop is MapReduce BIB001 , which was proposed by Dean and Ghemawat at Google. MapReduce is the basic data processing scheme used in Hadoop and involves breaking the entire task into two phases, known as mappers and reducers. At a high level, mappers read the data from HDFS, process it, and generate intermediate results that are passed to the reducers. Reducers aggregate the intermediate results to generate the final output, which is again written to HDFS. A typical Hadoop job involves running several mappers and reducers across different nodes in the cluster; a minimal sketch of the map/reduce contract is given below. A good survey of MapReduce for parallel data processing is available in BIB002 .
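The map/reduce contract can be sketched in a few lines of plain Python (an illustrative stand-in: Hadoop's native API is Java, although Hadoop Streaming does allow mappers and reducers to be written as scripts of this shape). The function names and toy records are assumptions made for the example.

```python
# Minimal sketch of the MapReduce contract (illustrative; Hadoop's native API is Java).
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Map phase: emit intermediate (key, value) pairs for each input record.
    for word in line.split():
        yield (word, 1)

def reducer(key, values):
    # Reduce phase: aggregate all intermediate values sharing the same key.
    return (key, sum(values))

records = ["big data platforms", "big data analytics"]

# Shuffle/sort: group intermediate pairs by key (done by the framework in Hadoop).
intermediate = sorted(pair for line in records for pair in mapper(line))
results = [reducer(k, (v for _, v in group))
           for k, group in groupby(intermediate, key=itemgetter(0))]
print(results)  # [('analytics', 1), ('big', 2), ('data', 2), ('platforms', 1)]
```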
|
A survey on platforms for big data analytics <s> MapReduce wrappers <s> There is a growing need for ad-hoc analysis of extremely large data sets, especially at internet companies where innovation critically depends on being able to analyze terabytes of data collected every day. Parallel database products, e.g., Teradata, offer a solution, but are usually prohibitively expensive at this scale. Besides, many of the people who analyze this data are entrenched procedural programmers, who find the declarative, SQL style to be unnatural. The success of the more procedural map-reduce programming model, and its associated scalable implementations on commodity hardware, is evidence of the above. However, the map-reduce paradigm is too low-level and rigid, and leads to a great deal of custom user code that is hard to maintain, and reuse. We describe a new language called Pig Latin that we have designed to fit in a sweet spot between the declarative style of SQL, and the low-level, procedural style of map-reduce. The accompanying system, Pig, is fully implemented, and compiles Pig Latin into physical plans that are executed over Hadoop, an open-source, map-reduce implementation. We give a few examples of how engineers at Yahoo! are using Pig to dramatically reduce the time required for the development and execution of their data analysis tasks, compared to using Hadoop directly. We also report on a novel debugging environment that comes integrated with Pig, that can lead to even higher productivity gains. Pig is an open-source, Apache-incubator project, and available for general use. <s> BIB001 </s> A survey on platforms for big data analytics <s> MapReduce wrappers <s> DryadLINQ is a system and a set of language extensions that enable a new programming model for large scale distributed computing. It generalizes previous execution environments such as SQL, MapReduce, and Dryad in two ways: by adopting an expressive data model of strongly typed .NET objects; and by supporting general-purpose imperative and declarative operations on datasets within a traditional high-level programming language. A DryadLINQ program is a sequential program composed of LINQ expressions performing arbitrary side-effect-free transformations on datasets, and can be written and debugged using standard .NET development tools. The DryadLINQ system automatically and transparently translates the data-parallel portions of the program into a distributed execution plan which is passed to the Dryad execution platform. Dryad, which has been in continuous operation for several years on production clusters made up of thousands of computers, ensures efficient, reliable execution of this plan. We describe the implementation of the DryadLINQ compiler and runtime. We evaluate DryadLINQ on a varied set of programs drawn from domains such as web-graph analysis, large-scale log mining, and machine learning. We show that excellent absolute performance can be attained--a general-purpose sort of 10^12 Bytes of data executes in 319 seconds on a 240-computer, 960-disk cluster--as well as demonstrating near-linear scaling of execution time on representative applications as we vary the number of computers used for a job. <s> BIB002 </s> A survey on platforms for big data analytics <s> MapReduce wrappers <s> The size of data sets being collected and analyzed in the industry for business intelligence is growing rapidly, making traditional warehousing solutions prohibitively expensive.
Hadoop [3] is a popular open-source map-reduce implementation which is being used as an alternative to store and process extremely large data sets on commodity hardware. However, the map-reduce programming model is very low level and requires developers to write custom programs which are hard to maintain and reuse. <s> BIB003
|
A certain set of wrappers is currently being developed for MapReduce. These wrappers can provide better control over the MapReduce code and aid in source code development. The following wrappers are widely used in combination with MapReduce. Apache Pig is a SQL-like environment developed at Yahoo BIB001 and is used by many organizations such as Yahoo, Twitter, AOL, and LinkedIn. Hive is another MapReduce wrapper, developed by Facebook BIB003 . These two wrappers provide a better environment and make code development simpler, since programmers do not have to deal with the complexities of MapReduce coding. Programming environments such as DryadLINQ, on the other hand, give end users more flexibility by allowing finer control over the code. DryadLINQ is a C#-like environment developed at Microsoft Research BIB002 ; it uses LINQ (a parallel language) and a cluster execution environment called Dryad. Its advantages include better debugging and development using Visual Studio as the tool, and interoperation with other languages such as standard .NET. In addition to these wrappers, some researchers have also developed scalable machine learning libraries such as Mahout [14] using the MapReduce paradigm.
|
A survey on platforms for big data analytics <s> Limitations of MapReduce <s> Most scientific data analyses comprise analyzing voluminous data collected from various instruments. Efficient parallel/concurrent algorithms and frameworks are the key to meeting the scalability and performance requirements entailed in such scientific data analyses. The recently introduced MapReduce technique has gained a lot of attention from the scientific community for its applicability in large parallel data analyses. Although there are many evaluations of the MapReduce technique using large textual data collections, there have been only a few evaluations for scientific data analyses. The goals of this paper are twofold. First, we present our experience in applying the MapReduce technique for two scientific data analyses: (i) high energy physics data analyses; (ii) K-means clustering. Second, we present CGL-MapReduce, a streaming-based MapReduce implementation and compare its performance with Hadoop. <s> BIB001 </s> A survey on platforms for big data analytics <s> Limitations of MapReduce <s> The growing demand for large-scale data mining and data analysis applications has led both industry and academia to design new types of highly scalable data-intensive computing platforms. MapReduce and Dryad are two popular platforms in which the dataflow takes the form of a directed acyclic graph of operators. These platforms lack built-in support for iterative programs, which arise naturally in many applications including data mining, web ranking, graph analysis, model fitting, and so on. This paper presents HaLoop, a modified version of the Hadoop MapReduce framework that is designed to serve these applications. HaLoop not only extends MapReduce with programming support for iterative applications, it also dramatically improves their efficiency by making the task scheduler loop-aware and by adding various caching mechanisms. We evaluated HaLoop on real queries and real datasets. Compared with Hadoop, on average, HaLoop reduces query runtimes by 1.85, and shuffles only 4% of the data between mappers and reducers. <s> BIB002 </s> A survey on platforms for big data analytics <s> Limitations of MapReduce <s> MapReduce programming model has simplified the implementation of many data parallel applications. The simplicity of the programming model and the quality of services provided by many implementations of MapReduce attract a lot of enthusiasm among distributed computing communities. From the years of experience in applying MapReduce to various scientific applications we identified a set of extensions to the programming model and improvements to its architecture that will expand the applicability of MapReduce to more classes of applications. In this paper, we present the programming model and the architecture of Twister an enhanced MapReduce runtime that supports iterative MapReduce computations efficiently. We also show performance comparisons of Twister with other similar runtimes such as Hadoop and DryadLINQ for large scale data parallel applications. <s> BIB003 </s> A survey on platforms for big data analytics <s> Limitations of MapReduce <s> Relational data are pervasive in many applications such as data mining or social network analysis. These relational data are typically massive containing at least millions or hundreds of millions of relations. This poses demand for the design of distributed computing frameworks for processing these data on a large cluster. MapReduce is an example of such a framework. 
However, many relational data based applications typically require parsing the relational data iteratively and need to operate on these data through many iterations. MapReduce lacks built-in support for the iterative process. This paper presents iMapReduce, a framework that supports iterative processing. iMapReduce allows users to specify the iterative operations with map and reduce functions, while supporting the iterative processing automatically without the need of users' involvement. More importantly, iMapReduce significantly improves the performance of iterative algorithms by (1) reducing the overhead of creating a new task in every iteration, (2) eliminating the shuffling of the static data in the shuffle stage of MapReduce, and (3) allowing asynchronous execution of each iteration, i.e., an iteration can start before all tasks of a previous iteration have finished. We implement iMapReduce based on Apache Hadoop, and show that iMapReduce can achieve a factor of 1.2 to 5 speedup over those implemented on MapReduce for well-known iterative algorithms. <s> BIB004 </s> A survey on platforms for big data analytics <s> Limitations of MapReduce <s> In this era of data abundance, it has become critical to process large volumes of data at much faster rates than ever before. Boosting is a powerful predictive model that has been successfully used in many real-world applications. However, due to the inherent sequential nature, achieving scalability for boosting is nontrivial and demands the development of new parallelized versions which will allow them to efficiently handle large-scale data. In this paper, we propose two parallel boosting algorithms, AdaBoost.PL and LogitBoost.PL, which facilitate simultaneous participation of multiple computing nodes to construct a boosted ensemble classifier. The proposed algorithms are competitive to the corresponding serial versions in terms of the generalization performance. We achieve a significant speedup since our approach does not require individual computing nodes to communicate with each other for sharing their data. In addition, the proposed approach also allows for preserving privacy of computations in distributed environments. We used MapReduce framework to implement our algorithms and demonstrated the performance in terms of classification accuracy, speedup and scaleup using a wide variety of synthetic and real-world data sets. <s> BIB005
|
One of the major drawbacks of MapReduce is its inefficiency in running iterative algorithms, since MapReduce was not designed for iterative processes. Mappers read the same data from disk again and again; hence, after each iteration, the results have to be written to disk to pass them on to the next iteration. This makes disk access a major bottleneck that significantly degrades performance. For each iteration, a new mapper and reducer have to be initialized, and when MapReduce jobs are short-lived, the initialization overhead becomes significant relative to the task itself (the sketch below illustrates this iterative pattern). Some workarounds such as forward scheduling (setting up the next MapReduce job before the previous one finishes) have been proposed, but these approaches introduce additional complexity into the source code. One such work, called HaLoop BIB002 , extends MapReduce with programming support for iterative algorithms and improves efficiency by adding caching mechanisms. CGL-MapReduce BIB001 BIB005 is another work that focuses on improving the performance of iterative MapReduce tasks. Other examples of iterative MapReduce include Twister BIB003 and iMapReduce BIB004 .
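To make the iteration overhead concrete, the sketch below expresses k-means, a canonical iterative algorithm, as repeated map/reduce passes. This is an illustrative in-memory simulation: on Hadoop, each pass of the loop would be a separate job that re-reads the points from HDFS and writes the new centroids back to disk, which is exactly the bottleneck described above.

```python
# k-means expressed as repeated map/reduce passes (illustrative pure-Python sketch).
# On Hadoop, each loop iteration is a separate job: the points are re-read from
# HDFS and the new centroids are written back to disk -- the bottleneck at issue.
from collections import defaultdict

points = [(1.0, 1.0), (1.5, 2.0), (8.0, 8.0), (9.0, 9.5)]
centroids = [(1.0, 1.0), (9.0, 9.0)]  # initial guesses

def nearest(p, cs):
    return min(range(len(cs)), key=lambda i: sum((a - b) ** 2 for a, b in zip(p, cs[i])))

for iteration in range(5):  # each pass = one full MapReduce job on Hadoop
    # Map: assign each point to its nearest centroid -> (centroid_id, point).
    buckets = defaultdict(list)
    for p in points:
        buckets[nearest(p, centroids)].append(p)
    # Reduce: average the points in each bucket to obtain the new centroids.
    centroids = [tuple(sum(c) / len(ps) for c in zip(*ps))
                 for _, ps in sorted(buckets.items())]

print(centroids)  # converges to [(1.25, 1.5), (8.5, 8.75)]
```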
|
A survey on platforms for big data analytics <s> Berkeley data analytics stack (BDAS) <s> In this paper, we present BlinkDB, a massively parallel, approximate query engine for running interactive SQL queries on large volumes of data. BlinkDB allows users to trade-off query accuracy for response time, enabling interactive queries over massive data by running queries on data samples and presenting results annotated with meaningful error bars. To achieve this, BlinkDB uses two key ideas: (1) an adaptive optimization framework that builds and maintains a set of multi-dimensional stratified samples from original data over time, and (2) a dynamic sample selection strategy that selects an appropriately sized sample based on a query's accuracy or response time requirements. We evaluate BlinkDB against the well-known TPC-H benchmarks and a real-world analytic workload derived from Conviva Inc., a company that manages video distribution over the Internet. Our experiments on a 100 node cluster show that BlinkDB can answer queries on up to 17 TBs of data in less than 2 seconds (over 200 x faster than Hive), within an error of 2-10%. <s> BIB001 </s> A survey on platforms for big data analytics <s> Berkeley data analytics stack (BDAS) <s> From social networks to targeted advertising, big graphs capture the structure in data and are central to recent advances in machine learning and data mining. Unfortunately, directly applying existing data-parallel tools to graph computation tasks can be cumbersome and inefficient. The need for intuitive, scalable tools for graph computation has led to the development of new graph-parallel systems (e.g., Pregel, PowerGraph) which are designed to efficiently execute graph algorithms. Unfortunately, these new graph-parallel systems do not address the challenges of graph construction and transformation which are often just as problematic as the subsequent computation. Furthermore, existing graph-parallel systems provide limited fault-tolerance and support for interactive data mining. We introduce GraphX, which combines the advantages of both data-parallel and graph-parallel systems by efficiently expressing graph computation within the Spark data-parallel framework. We leverage new ideas in distributed graph representation to efficiently distribute graphs as tabular data-structures. Similarly, we leverage advances in data-flow systems to exploit in-memory computation and fault-tolerance. We provide powerful new operations to simplify graph construction and transformation. Using these primitives we implement the PowerGraph and Pregel abstractions in less than 20 lines of code. Finally, by exploiting the Scala foundation of Spark, we enable users to interactively load, transform, and compute on massive graphs. <s> BIB002 </s> A survey on platforms for big data analytics <s> Berkeley data analytics stack (BDAS) <s> Machine learning (ML) and statistical techniques are key to transforming big data into actionable knowledge. In spite of the modern primacy of data, the complexity of existing ML algorithms is often overwhelming: many users do not understand the trade-offs and challenges of parameterizing and choosing between different learning techniques. Furthermore, existing scalable systems that support machine learning are typically not accessible to ML researchers without a strong background in distributed systems and low-level primitives.
In this work, we present our vision for MLbase, a novel system harnessing the power of machine learning for both end-users and ML researchers. MLbase provides (1) a simple declarative way to specify ML tasks, (2) a novel optimizer to select and dynamically adapt the choice of learning algorithm, (3) a set of high-level operators to enable ML researchers to scalably implement a wide range of ML methods without deep systems knowledge, and (4) a new run-time optimized for the data-access patterns of these high-level operators. <s> BIB003
|
The Spark developers have also proposed an entire data processing stack called the Berkeley Data Analytics Stack (BDAS) [20] , which is shown in Figure 2 . At the lowest level of this stack is a component called Tachyon [21] , which is based on HDFS. It is a fault-tolerant distributed file system that enables file sharing at memory speed (data I/O speed comparable to system memory) across a cluster, and it works with cluster frameworks such as Spark and MapReduce. The major advantage of Tachyon over Hadoop HDFS is its high performance, achieved by using memory more aggressively: Tachyon can detect frequently read files and cache them in memory, thus minimizing disk access by different jobs/queries and enabling the cached files to be read at memory speed. Another feature of Tachyon is its compatibility with Hadoop MapReduce; MapReduce programs can run over Tachyon without any modifications. A further advantage of Tachyon is its support for raw tables: tables with hundreds of columns can be loaded easily, and the user can specify the frequently used columns to be loaded in memory for faster access. The second component in BDAS, the layer above Tachyon, is Apache Mesos. Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications/frameworks. It supports Hadoop, Spark, Aurora [22] , and other applications on a dynamically shared pool of resources; with Mesos, scalability can be increased to tens of thousands of nodes. APIs are available in Java, Python, and C++ for developing new parallel applications, and multi-resource scheduling capabilities are included. The third component, running on top of Mesos, is Spark, which takes the place of Hadoop MapReduce in the BDAS architecture. On the top of the stack are many Spark wrappers such as Spark Streaming (large-scale real-time stream processing), BlinkDB (queries with bounded errors and bounded response times on very large data) BIB001 , GraphX (a resilient distributed graph system on Spark) BIB002 , and MLbase (a distributed machine learning library based on Spark) BIB003 . Recently, BDAS and Spark have been receiving a lot of attention due to their performance gains over Hadoop; a brief sketch of Spark's in-memory iterative style is given below. It is now even possible to run Spark on Amazon Elastic MapReduce [26] . Although BDAS consists of many useful components in the top layer (for various applications), many of them are still in the early stages of development and hence support is rather limited. Due to the vast number of tools already available for Hadoop MapReduce, it remains the most widely used distributed data processing framework.
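The sketch below illustrates the in-memory style that gives Spark its advantage for iterative workloads: the data set is loaded once, cached, and reused across passes. It assumes a local PySpark installation, and the input path and thresholds are illustrative.

```python
# In-memory iterative processing with Spark (sketch; assumes a PySpark install
# and an illustrative input path).
from pyspark import SparkContext

sc = SparkContext("local[*]", "iterative-demo")

# Load once and cache in cluster memory: later passes avoid re-reading disk,
# in contrast to Hadoop MapReduce where every pass scans HDFS again.
data = sc.textFile("hdfs:///data/values.txt") \
         .map(lambda line: float(line)) \
         .cache()

for threshold in [0.5, 1.0, 2.0]:
    # Each pass reuses the cached RDD; only the first action reads from disk.
    n = data.filter(lambda x: x > threshold).count()
    print(threshold, n)

sc.stop()
```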
|
A survey on platforms for big data analytics <s> Multicore CPU <s> This paper examines simultaneous multithreading, a technique permitting several independent threads to issue instructions to a superscalar's multiple functional units in a single cycle. We present several models of simultaneous multithreading and compare them with alternative organizations: a wide superscalar, a fine-grain multithreaded processor, and single-chip, multiple-issue multiprocessing architectures. Our results show that both (single-threaded) superscalar and fine-grain multithreaded architectures are limited in their ability to utilize the resources of a wide-issue processor. Simultaneous multithreading has the potential to achieve 4 times the throughput of a superscalar, and double that of fine-grain multithreading. We evaluate several cache configurations made possible by this type of organization and evaluate tradeoffs between them. We also show that simultaneous multithreading is an attractive alternative to single-chip multiprocessors; simultaneous multithreaded processors with a variety of organizations outperform corresponding conventional multiprocessors with similar execution resources. While simultaneous multithreading has excellent potential to increase processor utilization, it can add substantial complexity to the design. We examine many of these complexities and evaluate alternative organizations in the design space. <s> BIB001 </s> A survey on platforms for big data analytics <s> Multicore CPU <s> This tutorial gives a broad view of modern approaches for scaling up machine learning and data mining methods on parallel/distributed platforms. Demand for scaling up machine learning is task-specific: for some tasks it is driven by the enormous dataset sizes, for others by model complexity or by the requirement for real-time prediction. Selecting a task-appropriate parallelization platform and algorithm requires understanding their benefits, trade-offs and constraints. This tutorial focuses on providing an integrated overview of state-of-the-art platforms and algorithm choices. These span a range of hardware options (from FPGAs and GPUs to multi-core systems and commodity clusters), programming frameworks (including CUDA, MPI, MapReduce, and DryadLINQ), and learning settings (e.g., semi-supervised and online learning). The tutorial is example-driven, covering a number of popular algorithms (e.g., boosted trees, spectral clustering, belief propagation) and diverse applications (e.g., recommender systems and object recognition in vision). The tutorial is based on (but not limited to) the material from our upcoming Cambridge U. Press edited book which is currently in production. Visit the tutorial website at http://hunch.net/~large_scale_survey/ <s> BIB002
|
Multicore refers to one machine having dozens of processing cores BIB002 . Such machines usually have shared memory and only one disk. Over the past few years, CPUs have gained internal parallelism: the number of cores per chip and the number of operations a core can perform have increased significantly, and newer motherboards allow multiple CPUs within a single machine, further increasing the parallelism. Until the last few years, CPUs were mainly responsible for accelerating algorithms for big data analytics. Figure 3(a) shows a high-level CPU architecture with four cores. Parallelism in CPUs is mainly achieved through multithreading BIB001 : all the cores share the same memory, the task is broken down into threads, and each thread executes in parallel on a different core. Most programming languages provide libraries to create threads and exploit CPU parallelism, the most popular choice being Java. Since multicore CPUs have been around for several years, a large number of software applications and programming environments are well developed for this platform; a minimal sketch of this task decomposition is given below. Developments in CPUs have not kept pace with those in GPUs: the number of cores per CPU is still in the double digits, with processing power close to 10 Gflops, while a single GPU can have more than 2,500 processing cores and an order of magnitude higher floating-point throughput. This massive parallelism makes the GPU a more appealing option for parallel computing applications. The drawbacks of CPUs are their limited number of processing cores and their primary dependence on system memory for data access. System memory is limited to a few hundred gigabytes, which limits the size of the data a CPU can process efficiently; once the data size exceeds system memory, disk access becomes a huge bottleneck. Even when the data fits into system memory, a CPU can process data much faster than the memory access speed, making memory access itself a bottleneck. A GPU mitigates this by using GDDR5 memory, which is faster than the DDR3 memory typically used as system memory, and by providing high-speed cache for each multiprocessor, which speeds up data access.
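The sketch below shows the task decomposition described above in Python rather than Java (an illustrative substitution; worker processes are used instead of threads because CPython threads do not execute bytecode in parallel).

```python
# Data-parallel use of a multicore CPU (illustrative Python stand-in for the
# Java threading approach mentioned in the text; processes are used because
# CPython threads cannot execute bytecode in parallel).
from multiprocessing import Pool

def partial_sum(chunk):
    # Each core works on its own slice of the data independently.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    chunks = [data[i::n_workers] for i in range(n_workers)]  # split the task

    with Pool(processes=n_workers) as pool:
        # The chunks are processed concurrently, one per worker process/core.
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```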
|
A survey on platforms for big data analytics <s> Graphics processing unit (GPU) <s> We introduce GPUMiner, a novel parallel data mining system that utilizes new-generation graphics processing units (GPUs). Our system relies on the massively multi-threaded SIMD (Single Instruction, Multiple-Data) architecture provided by GPUs. As special-purpose co-processors, these processors are highly optimized for graphics rendering and rely on the CPU for data input/output as well as complex program control. Therefore, we design GPUMiner to consist of the following three components: (1) a CPU-based storage and buffer manager to handle I/O and data transfer between the CPU and the GPU, (2) a GPU-CPU co-processing parallel mining module, and (3) a GPU-based mining visualization module. We design the GPU-CPU co-processing scheme in mining depending on the complexity and inherent parallelism of individual mining algorithms. We provide the visualization module to facilitate users to observe and interact with the mining process online. We have implemented the k-means clustering and the Apriori frequent pattern mining algorithms in GPUMiner. Our preliminary results have shown significant speedups over state-of-the-art CPU implementations on a PC with a G80 GPU and a quad-core CPU. We will demonstrate the mining process through our visualization module. Code and documentation of GPUMiner are available at http://code.google.com/p/gpuminer/. <s> BIB001 </s> A survey on platforms for big data analytics <s> Graphics processing unit (GPU) <s> GPU architectures are increasingly important in the multi-core era due to their high number of parallel processors. Programming thousands of massively parallel threads is a big challenge for software engineers, but understanding the performance bottlenecks of those parallel programs on GPU architectures to improve application performance is even more difficult. Current approaches rely on programmers to tune their applications by exploiting the design space exhaustively without fully understanding the performance characteristics of their applications. To provide insights into the performance bottlenecks of parallel applications on GPU architectures, we propose a simple analytical model that estimates the execution time of massively parallel programs. The key component of our model is estimating the number of parallel memory requests (we call this the memory warp parallelism) by considering the number of running threads and memory bandwidth. Based on the degree of memory warp parallelism, the model estimates the cost of memory requests, thereby estimating the overall execution time of a program. Comparisons between the outcome of the model and the actual execution time in several GPUs show that the geometric mean of absolute error of our model on micro-benchmarks is 5.4% and on GPU computing applications is 13.3%. All the applications are written in the CUDA programming language. <s> BIB002
|
Graphics processing units (GPUs) are specialized hardware designed to accelerate the creation of images in a frame buffer intended for display output [30] . Until the past few years, GPUs were primarily used for graphical operations such as video and image editing and accelerating graphics-related processing. However, due to their massively parallel architecture, recent developments in GPU hardware and related programming frameworks have given rise to GPGPU (general-purpose computing on graphics processing units) . A GPU has a large number of processing cores (typically around 2,500+ to date) compared to a multicore CPU, and in addition it has its own high-throughput GDDR5 memory, which is many times faster than typical DDR3 system memory. GPU performance has increased significantly in the past few years compared to that of CPUs. Nvidia has launched the Tesla series of GPUs, which are specifically designed for high performance computing, and has released the CUDA framework, which makes GPU programming accessible to all programmers without delving into the hardware details. These developments suggest that GPGPU is indeed gaining popularity. Figure 3(b) shows a high-level GPU architecture with 14 multiprocessors and 32 streaming processors per multiprocessor. A GPU usually has two levels of parallelism: at the first level there are several multiprocessors (MPs), and within each multiprocessor there are several streaming processors (SPs). To use this setup, a GPU program is broken down into threads, which execute on SPs, and these threads are grouped together into thread blocks, each of which runs on a multiprocessor. Each thread within a block can communicate and synchronize with the other threads in the same block, and each thread has access to a small but extremely fast shared cache memory as well as the larger global main memory. Threads in one block cannot communicate with threads in another block, as the blocks may be scheduled at different times. This architecture implies that any job to be run on a GPU has to be broken into blocks of computation that can run independently without communicating with each other BIB002 ; these blocks are further broken down into smaller tasks that execute on individual threads, which may communicate with other threads in the same block (a minimal kernel illustrating this decomposition is given below). GPUs have been used to develop faster machine learning algorithms; for example, libraries such as GPUMiner BIB001 implement a few machine learning algorithms on the GPU using the CUDA framework, and experiments have shown many-fold speedups over a multicore CPU. The GPU has its own drawbacks, the primary one being its limited memory: with a maximum of 12 GB per GPU (as of the current generation), it is not suitable for handling terabyte-scale data, and once the data size exceeds the GPU memory, performance degrades significantly as disk access becomes the primary bottleneck. Another drawback is the limited amount of software and algorithms available for GPUs: because of the way the task breakdown is required, not many existing analytical algorithms are easily portable to GPUs.
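A minimal kernel showing this thread/block decomposition is given below, written against the Numba CUDA bindings (an assumption: the survey names only the CUDA framework itself, and running the sketch requires Numba and a CUDA-capable GPU).

```python
# Minimal CUDA kernel via Numba (sketch; assumes Numba and a CUDA-capable GPU).
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    # Two-level decomposition: each thread (running on a streaming processor)
    # handles one element; threads are grouped into blocks that are scheduled
    # independently on the multiprocessors, with no inter-block communication.
    i = cuda.grid(1)  # global index = block index * block size + thread index
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)  # arrays are copied to/from GPU memory
print(np.allclose(out, a + b))
```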
|
A survey on platforms for big data analytics <s> Field programmable gate arrays (FPGA) <s> Summary form only given. An overview of how FPGAs are impacting education is given with an emphasis on how laboratory experiences are used to enhance learning. Courses employing these devices include: introductory logic, advanced logic (ASIC Prototyping), system-on-chip and platform design, HW/SW codesign of real-time embedded systems, network routing, and multidisciplinary capstone design. Project examples are presented along with specific ways that instructors can collaborate more to enhance the students' experiences and their own productivity. <s> BIB001 </s> A survey on platforms for big data analytics <s> Field programmable gate arrays (FPGA) <s> High end network security applications demand high speed operation and large rule set support. Packet classification is the core functionality that demands high throughput in such applications. This paper proposes a packet classification architecture to meet such high throughput. We have implemented a Firewall with this architecture in reconfigurable hardware. We propose an extension to Distributed Crossproducting of Field Labels (DCFL) technique to achieve scalable and high performance architecture. The implemented Firewall takes advantage of inherent structure and redundancy of rule set by using our DCFL Extended (DCFLE) algorithm. The use of DCFLE algorithm results in both speed and area improvement when it is implemented in hardware. Although we restrict ourselves to standard 5-tuple matching, the architecture supports additional fields. High throughput classification invariably uses Ternary Content Addressable Memory (TCAM) for prefix matching, though TCAM fares poorly in terms of area and power efficiency. Use of TCAM for port range matching is expensive, as the range to prefix conversion results in large number of prefixes leading to storage inefficiency. Extended TCAM (ETCAM) is fast and the most storage efficient solution for range matching. We present for the first time a reconfigurable hardware implementation of ETCAM. We have implemented our Firewall as an embedded system on Virtex-II Pro FPGA based platform, running Linux with the packet classification in hardware. The Firewall was tested in real time with 1 Gbps Ethernet link and 128 sample rules. The packet classification hardware uses a quarter of logic resources and slightly over one third of memory resources of XC2VP30 FPGA. It achieves a maximum classification throughput of 50 million packet/s corresponding to 16 Gbps link rate for the worst case packet size. The Firewall rule update involves only memory re-initialization in software without any hardware change. <s> BIB002
The use of FPGAs to implement artificial intelligence-based industrial controllers is then briefly reviewed. The final section presents two short case studies of Neural Network control systems designs targeting FPGAs. <s> BIB003 </s> A survey on platforms for big data analytics <s> Field programmable gate arrays (FPGA) <s> Given the rapid evolution of attack methods and toolkits, software-based solutions to secure the network infrastructure have become overburdened. The performance gap between the execution speed of security software and the amount of data to be processed is ever widening. A common solution to close this performance gap is through hardware implementation of security functions. Possessing the flexibility of software and high parallelism of hardware, reconfigurable hardware devices, such as Field Programmable Gate Arrays (FPGAs), have become increasingly popular for this purpose. FPGAs support the performance demands of security operations as well as enable architectural and algorithm innovations in the future. This paper presents a survey of the state-of-art in FPGA-based implementations that have been used in the network infrastructure security area, categorizing currently existing diverse implementations. Combining brief descriptions with intensive case-studies, we hope this survey will inspire more active research in this area. <s> BIB004
|
FPGAs are highly specialized hardware units that are custom-built for specific applications . FPGAs can be highly optimized for speed and can be orders of magnitude faster than other platforms for certain applications. They are programmed using a hardware description language (HDL) [35] . Due to the customized hardware, the development cost is typically much higher than for other platforms; on the software side, coding has to be done in HDL with low-level knowledge of the hardware, which increases the algorithm development cost. Users have to carefully investigate the suitability of a particular application for FPGAs, as they are effective only for a certain class of applications. FPGAs are used in a variety of real-world applications BIB003 BIB001 . One example where FPGAs have been successfully deployed is network security BIB004 : in one such application, an FPGA is used as a hardware firewall and is much faster than software firewalls at scanning large amounts of network data BIB002 . In recent years, however, the speed of multicore processors has been approaching that of FPGAs.
|
A review of methods for spike sorting : the detection and classification of neural action potentials <s> Measuring neural activity <s> A tungsten microelectrode with several small holes burnt in the vinyl insulation enables the action potentials from several adjacent neurons to be observed simultaneously. A digital computer is used to separate the contributions of each neuron by examining and classifying the waveforms of the action potentials. These methods allow studies to be made of interactions between neurons that lie close together. <s> BIB001 </s> A review of methods for spike sorting : the detection and classification of neural action potentials <s> Measuring neural activity <s> A spike separation technique which combines data processing methods with extracellular probing techniques to allow simultaneous observation of multiple neural events is presented. A preparation, the locust ventral cord, allows spike separation. Experimental results and simulation indicate the usefulness of the method for this preparation. A feature of the data processing method allows the experimenter to direct the machine classification by an initial classification. Subsequently, the machine returns an indication of the quality of classification, allowing a reclassification or termination. <s> BIB002
|
The first link between neural communication and electrical signals was made by Luigi Galvani in 1791, when he showed that frog muscles could be stimulated by electricity. It was not until the 1920s, however, that nerve impulses could be measured directly by the amplification of electrical signals recorded by microelectrodes. The basic electrical circuit is shown in figure 1 . The circuit amplifies the potential between the ground (usually measured by placing a wire under the scalp) and the tip of the microelectrode. The potential changes measured at the tip reflect current flow in the extracellular medium. Typically the largest component of this current is that generated by the action potential, but there can be many other, less prominent components. Signals that look much like cellular action potentials can be picked up from axonal fibre bundles, also called fibres of passage. These signals are typically much smaller and more localized than cellular action potentials, which can usually be tracked while the electrode is advanced many tens of microns. Another signal source is the field potential. This is typically seen in layered structures and results from the synchronous flow of current into a parallel set of dendrites. This signal is typically of sufficiently low bandwidth that it can be filtered out from the neural action potentials. The shape of the electrode has some effect on what types of signals are measured. To some extent, the larger the tip of the electrode, the greater the number of signals recorded. If the electrode tip is too large, it will be impossible to isolate any one particular neuron; if it is too small, it might be difficult to detect any signal at all. Additionally, the configuration of the tip can be an important factor in determining which signals can be measured. Current in the extracellular space tends to flow in the interstitial space between cells and does not necessarily flow regularly. A glass electrode with an O-shaped tip may pick up different signals than a glass-coated platinum-iridium electrode with a bullet-shaped tip. As is often the case in neurophysiology, what is best must be determined empirically, and even then is not necessarily reliable. For further discussions of issues related to electrical recording, see . The last step in the measurement is to amplify the electrical signal. A simple method of detecting neural activity can be implemented in hardware with a threshold detector, but with modern computers it is possible to analyse the waveform digitally and use algorithmic approaches to spike sorting BIB001 BIB002 ; a minimal digital threshold detector is sketched below. For a review of earlier efforts in this area, see . Previously, software spike sorting involved considerable effort to set up and implement, but today the process is much more convenient. Several excellent software packages, many of which are publicly available, can do some of the more sophisticated analyses described here with minimal effort on the part of the user. Furthermore, the increased speed of modern computers makes it possible to use methods that in the past required prohibitive computational expense.
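A minimal digital version of the threshold detector mentioned above can be sketched as follows; the synthetic signal, sampling rate, and threshold rule are all illustrative assumptions rather than recommendations from the text.

```python
# Digital threshold detection of spikes (illustrative sketch with synthetic data).
import numpy as np

rng = np.random.default_rng(0)
fs = 20000                            # sampling rate in Hz (assumed)
signal = rng.normal(0.0, 1.0, fs)     # one second of background noise
spike_times = [2500, 9000, 15500]
for t in spike_times:
    signal[t:t + 5] += [6.0, 10.0, 4.0, -5.0, -2.0]  # a crude spike shape

# Threshold set as a multiple of the noise standard deviation; too low gives
# false positives, too high misses small spikes (the trade-off in the text).
threshold = 5.0 * np.std(signal)

# Detect upward threshold crossings (one event per crossing, not per sample).
above = signal > threshold
crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1
print(crossings)  # approximately the injected spike times
```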
|
A review of methods for spike sorting : the detection and classification of neural action potentials <s> Feature analysis <s> Abstract Description of a method by which action potentials recorded simultaneously can be sorted in a moderate size machine in real-time and on-line. <s> BIB001 </s> A review of methods for spike sorting : the detection and classification of neural action potentials <s> Feature analysis <s> Classification of characteristic neural spike shapes in multi-unit recordings is performed in real time using a reduced feature set. A model of uncorrelated signal-related noise is used to reduce the feature set by choosing a subset of aperiodic samples which is effective for discrimination between signals by a nearest-mean algorithm. Initial signal classes are determined by an unsupervised clustering algorithm applied to the reduced features of the learning set events. Classification is carried out in real time using a distance measure derived for the reduced feature set. Examples of separation and correlation of multiunit activity from cat and frog visual systems are described. <s> BIB002 </s> A review of methods for spike sorting : the detection and classification of neural action potentials <s> Feature analysis <s> A number of multiunit neural signal classification techniques are compared in their theoretical separation properties and in their empirical performance in classifying two channel recordings from the ventral nerve cord of the cockroach. The techniques include: the use of amplitude and conduction time measures, template matching, the principal components method, optimal filtering, and maximin discrimination. The noise encountered under different situations is characterized, permitting the comparisons to be made as functions of the experimental conditions. Recommendations are made as to the appropriate use of the techniques. <s> BIB003
|
The traces in figure 3(b) show two clear action potentials that have roughly the same height but differ in shape. If the shape could be characterized, we could use this information to classify each spike. How do we characterize the shape? One approach is to measure features of the shape, such as spike height and width or peak-to-peak amplitude. This is one of the earliest approaches to spike sorting. It was common in these methods to put considerable effort into choosing the minimal set of features that yielded the best discrimination, because computer resources were very limited BIB001 BIB002 . For further discussions of feature analysis techniques, see BIB003 and . In general, the more features we have, the better we will be able to distinguish different spike shapes. Figure 6(a) is a scatter plot of the maximum versus minimum spike amplitudes for each spike in the waveform used in figure 3(b). On this plot, there is a clear clustering of the two different spike shapes. The cluster positions indicate that the spikes have similar maximum amplitudes, but the minimum amplitudes fall primarily into two regions. The large cluster near the origin reflects both noise and the spikes of background neurons. It is also possible to measure different features, and somewhat better clustering is obtained with the spike height and width, as shown in figure 6(b). The vertical banding reflects the sampling frequency. How do we sort the spikes? A common method is a technique called cluster cutting. In this approach, the user defines a boundary for a particular set of features. If a data point falls within the boundary, it is classified as belonging to that cluster; if it falls outside the boundary, it is discarded. Figure 6(b) shows an example of boundaries placed around the primary clusters; a minimal sketch of this procedure is given below. It should be evident that positioning the boundaries for optimal classification can be quite difficult if the clusters are not distinct. There is also the same trade-off between false positives and missed spikes as there was for threshold detection, but now in two dimensions. Methods to position the cluster boundaries automatically will be discussed below. In off-line analysis the cluster boundaries are determined after the data have been collected by looking at all the data (or a sample) over the collection period. This allows the experimenter to verify that the spike shapes were stable for the duration of the collection period. Clustering can also be performed on-line (i.e. while the data are being collected) if the clusters are stable. Methods for addressing unstable clusters will be discussed below.
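The feature extraction and cluster cutting just described reduce to a few array operations. In the sketch below the features are the maximum and minimum spike amplitudes, as in figure 6(a), and the rectangular boundaries are hand-placed, as an experimenter would do interactively; the synthetic waveforms and boundary values are assumptions made for the example.

```python
# Feature extraction and cluster cutting (illustrative sketch; the boundary
# values are hand-placed, as an experimenter would do interactively).
import numpy as np

# 'spikes' is an (n_spikes, n_samples) array of windowed waveforms, e.g. the
# output of a threshold detector (random data here as a stand-in).
rng = np.random.default_rng(1)
spikes = rng.normal(0.0, 1.0, (200, 32))
spikes[:100, 10] += 8.0    # unit A: tall positive peak
spikes[100:, 14] -= 6.0    # unit B: deep negative trough

# Two features per spike: maximum and minimum amplitude.
feat_max = spikes.max(axis=1)
feat_min = spikes.min(axis=1)

# Cluster cutting: a rectangular boundary in feature space per unit; points
# outside every boundary are discarded (noise / background units).
unit_a = (feat_max > 5.0) & (feat_min > -5.0)
unit_b = (feat_min < -5.0) & (feat_max < 5.0)
print(unit_a.sum(), unit_b.sum())
```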
|
A review of methods for spike sorting : the detection and classification of neural action potentials <s> Clustering in higher dimensions and template matching <s> Abstract A heuristic method was developed to identify and to separate automatically unit nerve impulses from a multiunit recording. Up to 20 distinct units can be identified. The method can sequentially decompose superimposed nerve impulses if the rapidly changing region of at least one of them is relatively undistorted. The identification and separation procedure has been successfully applied to the extracellularly recorded neural activity associated with the shadow reflex pathway of the barnacle. The limitations of the procedure are discussed and additional applications of the technique are presented. <s> BIB001 </s> A review of methods for spike sorting : the detection and classification of neural action potentials <s> Clustering in higher dimensions and template matching <s> A system for neural spike detection and classification is presented which does not require a priori assumptions about spike shape or timing. The system is divided into two parts: a learning subsystem and a real-time detection and classification subsystem. The learning subsystem, comprising a feature learning phase and a template learning phase, extracts templates for each separate spike class. The real-time detection and classification subsystem identifies spikes in the noisy neural trace and sorts them into classes, based on the templates and the statistics of the background noise. Comparisons are made among three different schemes for the real-time detection and classification subsystem. Performance of the system is illustrated by using it to classify spikes in segments of neural activity recorded from monkey motor cortex and from guinea pig and ferret auditory cortexes. <s> BIB002
|
Although convenient for display purposes, there is no reason to restrict the cluster space to two dimensions; the algorithms also work in higher dimensions. Figure 11 shows the results of clustering the whole spike waveform. The waveforms shown are the class means and correspond to the average spike waveform for each class. By adding more dimensions to the clustering, more information is available, which can lead to more accurate classification. Using model spike shapes to classify new action potentials is also called template matching (Capowski 1976) BIB001 . Earlier methods of template matching relied on the user to choose a small set of spikes to serve as the templates; in clustering procedures, the spike templates are chosen automatically BIB002 . If a Euclidean metric is used to calculate the distance to the template, then this corresponds to nearest-neighbour clustering and assumes a spherical cluster around the template (D'Hollander and Orban 1979); a minimal sketch is given below. The Bayesian version of classification by spike templates has the advantage that the classification takes into account the variation around the mean spike shape, giving the most accurate decision boundaries.
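Nearest-template classification under a Euclidean metric can be sketched as follows (illustrative synthetic templates; the Bayesian refinement mentioned above would replace the Euclidean distance with one that accounts for the covariance of the variation around each mean shape, e.g. a Mahalanobis distance).

```python
# Template matching by nearest class mean (illustrative sketch). With a
# Euclidean metric this is nearest-neighbour clustering around each template;
# a Bayesian/Gaussian version would use a Mahalanobis distance that accounts
# for the covariance of the variation around each mean spike shape.
import numpy as np

rng = np.random.default_rng(2)
templates = np.stack([np.sin(np.linspace(0, np.pi, 32)),        # class 0 mean
                      -np.sin(np.linspace(0, 2 * np.pi, 32))])  # class 1 mean

spike = templates[1] + rng.normal(0.0, 0.1, 32)  # a noisy class-1 spike

# Euclidean distance from the spike to every template; classify by the nearest.
distances = np.linalg.norm(templates - spike, axis=1)
label = int(np.argmin(distances))
print(label, distances)  # label == 1
```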
|
A review of methods for spike sorting : the detection and classification of neural action potentials <s> Choosing the number of classes <s> Neuronal noise sources and systematic variability in the shape of a spike limit the ability to sort multiple unit waveforms recorded from nervous tissue into their single neuron constituents. Here we present a procedure to efficiently sort spikes in the presence of noise that is anisotropic, i.e., dominated by particular frequencies, and whose amplitude distribution may be non-Gaussian, such as occurs when spike waveforms are a function of interspike interval. Our algorithm uses a hierarchical clustering scheme. First, multiple unit records are sorted into an overly large number of clusters by recursive bisection. Second, these clusters are progressively aggregated into a minimal set of putative single units based on both similarities of spike shape as well as the statistics of spike arrival times, such as imposed by the refractory period. We apply the algorithm to waveforms recorded with chronically implanted micro-wire stereotrodes from neocortex of behaving rat. Natural extensions of the algorithm may be used to cluster spike waveforms from records with many input channels, such as those obtained with tetrodes and multiple site optical techniques. <s> BIB001 </s> A review of methods for spike sorting : the detection and classification of neural action potentials <s> Choosing the number of classes <s> We discuss Bayesian methods for model averaging and model selection among Bayesian-network models with hidden variables. In particular, we examine large-sample approximations for the marginal likelihood of naive-Bayes models in which the root node is hidden. Such models are useful for clustering or unsupervised learning. We consider a Laplace approximation and the less accurate but more computationally efficient approximation known as the Bayesian Information Criterion (BIC), which is equivalent to Rissanen's (1987) Minimum Description Length (MDL). Also, we consider approximations that ignore some off-diagonal elements of the observed information matrix and an approximation proposed by Cheeseman and Stutz (1995). We evaluate the accuracy of these approximations using a Monte-Carlo gold standard. In experiments with artificial and real examples, we find that (1) none of the approximations are accurate when used for model averaging, (2) all of the approximations, with the exception of BIC/MDL, are accurate for model selection, (3) among the accurate approximations, the Cheeseman–Stutz and Diagonal approximations are the most computationally efficient, (4) all of the approximations, with the exception of BIC/MDL, can be sensitive to the prior distribution over model parameters, and (5) the Cheeseman–Stutz approximation can be more accurate than the other approximations, including the Laplace approximation, in situations where the parameters in the maximum a posteriori configuration are near a boundary. <s> BIB002
|
One of the more difficult aspects of clustering approaches is choosing the number of classes. In Bayesian approaches, it is possible to estimate the probability of each model given the observed data (Gull 1988) BIB002 . If the assumptions of the model are accurate, this gives the relative probabilities of different numbers of classes. This approach was used in the case of spherical Gaussian mixture models. The software package AutoClass (Cheeseman and Stutz 1996), used in figures 9 and 11, estimates the relative probabilities for a general (multivariate) Gaussian mixture model. In figure 9, for example, the probability of the nine-class model was e^160 times greater than that of the four-class model, suggesting overwhelming evidence in favour of the nine-class model. This procedure selects the most probable number of classes given the data and does not always favour models with more classes: in the same example, the probability of the nine-class model was e^16 times greater than that of an eleven-class model. These numbers are calculated under the assumptions of the model and should be interpreted accordingly, i.e. they are accurate to the extent that the assumptions are valid. Ideally, the number of classes would correspond to the number of neurons being observed, but several factors prevent such a simple interpretation. The parameters of the classes are adapted to fit the distribution of the data, and individual neurons will not necessarily produce spikes whose distribution is well described by the model. In some special cases, such as with stationary spike shapes and uncorrelated noise, the clusters will be nearly spherical, in which case the conclusions drawn from nearest-neighbour or symmetric Gaussian models can be very accurate. Many less ideal situations, such as correlated noise or non-stationary spike shapes, can still be accurately modelled by general Gaussian mixture models. In more complicated situations, such as neurons that generate complex bursts or when the background noise is non-stationary, the structure of the data can be very difficult to capture with a mixture model, and thus it will be difficult both to predict the number of units and to make accurate classifications. One approach to choosing the number of classes that avoids some of the assumptions of the simple cluster models was suggested by BIB001 . The idea behind this approach is to use the interspike interval histogram to guide decisions about whether a class represents a single unit. When a neuron generates an irregularly shaped spike, such as a bursting neuron, many clustering algorithms fit the resulting data with two or more classes. The method of BIB001 groups multiple classes according to whether the interspike interval histogram of the group shows a significant number of spikes in the refractory period. Classification is done normally using the whole set of classes, but spikes classified as belonging to any of the classes in a group are labelled as the same unit. If overlapping action potentials are ignored, this approach can give biased results, because discarded overlaps could artificially create an interspike interval histogram that does not violate assumptions of the refractory period. It is important, then, with this approach, to calculate the interspike interval statistics in a region where overlaps can be accurately classified. Another potential drawback of this approach is that, because it relies on constructing interspike interval histograms, long collection periods may be required to achieve the desired degree of statistical certainty. A minimal sketch of model selection over different numbers of classes is given below.
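A common computational shorthand for the Bayesian model comparison described above is to score candidate mixture models with an approximation such as the BIC (one of the approximations examined in the cited work BIB002 ). The sketch below does this with scikit-learn on synthetic features; it is a stand-in, not the AutoClass procedure itself, which estimates the marginal likelihoods differently.

```python
# Choosing the number of classes by scoring Gaussian mixtures (sketch; BIC is
# used here as a cheap approximation to the Bayesian model probabilities --
# AutoClass itself estimates the marginal likelihoods differently).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Synthetic 2-D "features" drawn from three clusters (stand-in for spike features).
X = np.vstack([rng.normal(loc, 0.5, (100, 2)) for loc in ([0, 0], [4, 0], [2, 4])])

scores = {}
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(X)
    scores[k] = gmm.bic(X)  # lower BIC = (approximately) more probable model

best_k = min(scores, key=scores.get)
print(best_k, scores)  # best_k should be 3 for this data
```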
|
A review of methods for spike sorting : the detection and classification of neural action potentials <s> Filter-based methods <s> A number of multiunit neural signal classification techniques are compared in their theoretical separation properties and in their empirical performance in classifying two channel recordings from the ventral nerve cord of the cockroach. The techniques include: the use of amplitude and conduction time measures, template matching, the principal components method, optimal filtering, and maximin discrimination. The noise encountered under different situations is characterized, permitting the comparisons to be made as functions of the experimental conditions. Recommendations are made as to the appropriate use of the techniques. <s> BIB001 </s> A review of methods for spike sorting : the detection and classification of neural action potentials <s> Filter-based methods <s> Describes advanced protocols for the discrimination and classification of neuronal spike waveforms within multichannel electrophysiological recordings. The programs are capable of detecting and classifying the spikes from multiple, simultaneously active neurons, even in situations where there is a high degree of spike waveform superposition on the recording channels. The protocols are based on the derivation of an optimal linear filter for each individual neuron. Each filter is tuned to selectively respond to the spike waveform generated by the corresponding neuron, and to attenuate noise and the spike waveforms from all other neurons. The protocol is essentially an extension of earlier work (S. Andreassen et al., 1979; W.M. Roberts and D.K. Hartline, 1975; R.B. Stein et al., 1979). However, the protocols extend the power and utility of the original implementations in two significant respects. First, a general single-pass automatic template estimation algorithm was derived and implemented. Second, the filters were implemented within a software environment providing a greatly enhanced functional organization and user interface. The utility of the analysis approach was demonstrated on samples of multiunit electrophysiological recordings from the cricket abdominal nerve cord. <s> BIB002
|
Another approach to spike sorting uses the methods of optimal filtering BIB002 . The idea behind this approach is to generate a set of filters that optimally discriminate a set of spikes from each other and from the background noise. This method assumes that both the noise power spectrum and the spike shapes can be estimated accurately. For each spike model, a filter is constructed that responds maximally to the spike shape of interest and minimally to the background noise, which may include the spikes of other units. The neural waveform is then convolved with the set of filters and spikes are classified according to which filter generates the largest response. This is analogous to the clustering methods above in which the metric used to calculate distance to the class mean is inversely related to the filter output. If the filters used to model the spike shapes were orthogonal, this would also be able to handle overlapping action potentials, but in practice this is rarely the case. Comparisons of spike classification on a common data set were carried out by BIB001 who found that optimal filtering methods did not classify as accurately as feature clustering using principal components or template matching.
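To make the construction concrete, the sketch below builds one linear filter per spike template under the assumption that the templates and the noise covariance have already been estimated; the normalization and decision threshold are illustrative choices rather than those of the cited methods.

```python
# Minimal sketch of optimal (matched) filtering for spike classification.
# Each template s gets a filter h = R^{-1} s (R = noise covariance),
# normalized to respond with 1.0 to its own template; a detected window
# is assigned to the unit whose filter responds most strongly.
import numpy as np

def make_filters(templates, noise_cov):
    """templates: (n_units, n_samples); noise_cov: (n_samples, n_samples)."""
    R_inv = np.linalg.inv(noise_cov)
    return [R_inv @ s / (s @ R_inv @ s) for s in templates]

def classify(window, filters, threshold=0.5):
    """Score one waveform window; return the best unit index, or None."""
    responses = np.array([h @ window for h in filters])
    best = int(np.argmax(responses))
    return best if responses[best] > threshold else None
```

Classification of a detected spike then reduces to taking the filter with the largest response, mirroring the minimum-distance rule of the clustering methods above.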
|
A review of methods for spike sorting : the detection and classification of neural action potentials <s> Overlapping spikes <s> This paper develops the multidimensional binary search tree (or k -d tree, where k is the dimensionality of the search space) as a data structure for storage of information to be retrieved by associative searches. The k -d tree is defined and examples are given. It is shown to be quite efficient in its storage requirements. A significant advantage of this structure is that a single data structure can handle many types of queries very efficiently. Various utility algorithms are developed; their proven average running times in an n record file are: insertion, O (log n ); deletion of the root, O ( n ( k -1)/ k ); deletion of a random node, O (log n ); and optimization (guarantees logarithmic performance of searches), O ( n log n ). Search algorithms are given for partial match queries with t keys specified [proven maximum running time of O ( n ( k - t )/ k )] and for nearest neighbor queries [empirically observed average running time of O (log n ).] These performances far surpass the best currently known algorithms for these tasks. An algorithm is presented to handle any general intersection query. The main focus of this paper is theoretical. It is felt, however, that k -d trees could be quite useful in many applications, and examples of potential uses are given. <s> BIB001 </s> A review of methods for spike sorting : the detection and classification of neural action potentials <s> Overlapping spikes <s> An essential step in studying nerve cell interaction during information processing is the extracellular microelectrode recording of the electrical activity of groups of adjacent cells. The recording usually contains the superposition of the spike trains produced by a number of neurons in the vicinity of the electrode. It is therefore necessary to correctly classify the signals generated by these different neurons. This problem is considered, and a new classification scheme is developed which does not require human supervision. A learning stage is first applied on the beginning portion of the recording to estimate the typical spike shapes of the different neurons. As for the classification stage, a method is developed which specifically considers the case when spikes overlap temporally. The method minimizes the probability of error, taking into account the statistical properties of the discharges of the neurons. The method is tested on a real recording as well as on synthetic data. > <s> BIB002 </s> A review of methods for spike sorting : the detection and classification of neural action potentials <s> Overlapping spikes <s> Fast search algorithms are proposed and studied for vector quantization encoding using the K-dimensional (K-d) tree structure. Here, the emphasis is on the optimal design of the K-d tree for efficient nearest neighbor search in multidimensional space under a bucket-Voronoi intersection search framework. Efficient optimization criteria and procedures are proposed for designing the K-d tree, for the case when the test data distribution is available (as in vector quantization application in the form of training data) as well as for the case when the test data distribution is not available and only the Voronoi intersection information is to be used. The criteria and bucket-Voronoi intersection search procedure are studied in the context of vector quantization encoding of speech waveform. 
They are empirically observed to achieve constant search complexity for O(log N) tree depths and are found to be more efficient in reducing the search complexity. A geometric interpretation is given for the maximum product criterion, explaining reasons for its inefficiency with respect to the optimization criteria. > <s> BIB003 </s> A review of methods for spike sorting : the detection and classification of neural action potentials <s> Overlapping spikes <s> Neuronal noise sources and systematic variability in the shape of a spike limit the ability to sort multiple unit waveforms recorded from nervous tissue into their single neuron constituents. Here we present a procedure to efficiently sort spikes in the presence of noise that is anisotropic, i.e., dominated by particular frequencies, and whose amplitude distribution may be non-Gaussian, such as occurs when spike waveforms are a function of interspike interval. Our algorithm uses a hierarchical clustering scheme. First, multiple unit records are sorted into an overly large number of clusters by recursive bisection. Second, these clusters are progressively aggregated into a minimal set of putative single units based on both similarities of spike shape as well as the statistics of spike arrival times, such as imposed by the refractory period. We apply the algorithm to waveforms recorded with chronically implanted micro-wire stereotrodes from neocortex of behaving rat. Natural extensions of the algorithm may be used to cluster spike waveforms from records with many input channels, such as those obtained with tetrodes and multiple site optical techniques. <s> BIB004 </s> A review of methods for spike sorting : the detection and classification of neural action potentials <s> Overlapping spikes <s> Determination of single-unit spike trains from multiunit recordings obtained during extracellular recording has been the focus of many studies over the last two decades. In multiunit recordings, superpositions can occur with high frequency if the firing rates of the neurons are high or correlated, making superposition resolution imperative for accurate spike train determination. In this work, a connectionist neural network (NN) was applied to the spike sorting challenge. A novel training scheme was developed which enabled the NN to resolve some superpositions using single-channel recordings. Simulated multiunit spike trains were constructed from templates and noise segments that were extracted from real extracellular recordings. The simulations were used to determine the performances of the NN and a simple matched template filter (MTF), which was used as a basis for comparison. The network performed as well as the MTF in identifying nonoverlapping spikes, and was significantly better in resolving superpositions and rejecting noise. An on-line, real-time implementation of the NN discriminator, using a high-speed digital signal processor mounted inside an IBM-PC, is now in use in six laboratories. <s> BIB005
|
None of the methods described above explicitly deals with overlapping spikes. If two spikes are sufficiently separated in time, it is possible that the aforementioned methods will make the correct classification, but all of the methods reviewed thus far degrade severely when two spikes fire simultaneously. With the cluster cutting and Bayesian approaches to classification, it is possible to detect 'bad' overlaps as outliers. This gives the experimenter some gauge of the relative frequency of these events and whether they would compromise the results. There are many situations, however, where it would be desirable to both detect and accurately classify overlapping action potentials, e.g. investigations of local circuits or studies of spike-timing codes. One simple approach to overlaps is to subtract a spike from the waveform once it is classified, in the hope that this will improve the classification of subsequent spikes. This approach requires a model of the spike shape (or template). It yields reasonable results when two spikes are separated well enough so that the first can be accurately classified, but fails when the spikes are closer together, like those shown in figure 13 (which illustrates how a high degree of overlap makes it difficult to identify the component spikes). Another problem with this approach is that the subtraction can introduce more noise in the waveform if the spike model is not accurate. Another approach to the problem of overlaps is to use neural networks to learn more general decision boundaries (Jansen 1990, Chandra and BIB005 ). BIB005 reported that a trained neural network performed as well as a matched filter for classifying non-overlapping action potentials and showed improved performance for overlapping action potentials. A serious drawback of these approaches, however, is that the network must be trained using labelled spikes; thus the decision boundaries that are learned can only be as accurate as the initial labelling. Like the subtraction methods, these methods can only identify overlaps that have identifiable peaks. One potential problem with subtraction-based approaches is that it is possible to introduce spurious spike-like shapes if the spike occurrence time is not accurately estimated. Typically, spike occurrence time is estimated to a resolution of one sample period, but often this is not sufficient to prevent artifacts in the residual waveform due to misalignment of the spike model with the measured waveform. The minimal precision of the time alignment can be surprisingly small, often a fraction of a sample period. The error introduced by spike-time misalignment can be quantified as follows: for a given spike model, s(t), the maximum error resulting from a misalignment of δ is ε(δ) = max_t |s(t + δ) − s(t)|. For values of ε equal to the RMS noise level, typical δ's range from 0.1 to 0.5 sample periods. This equation gives a discrete number of time-alignment positions that must be checked in order to ensure an error less than ε. A more robust approach for decomposing overlaps was proposed by BIB002 . When an overlap is detected, this approach compares all possible combinations of two spike models over a short range of spike occurrence times to find the combination with the highest likelihood. This approach can identify exact overlaps, but has the drawback of being computationally expensive, particularly for higher numbers of overlaps. A computationally efficient overlap decomposition algorithm that addresses many of these problems has also been introduced.
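Before turning to that algorithm, the time-alignment bound just discussed can be made concrete. The sketch below measures the misalignment error ε(δ) for a template by interpolation and searches for the coarsest alignment step that keeps the error below the noise floor; the template representation and grid resolution are assumptions.

```python
# Minimal sketch: error introduced by misaligning a spike template by a
# sub-sample shift delta, and the coarsest alignment grid that keeps the
# error below the RMS noise level. `s` is assumed to be a template that
# can be interpolated sensibly (here, by linear interpolation).
import numpy as np

def misalignment_error(s, delta):
    """Max deviation between s(t) and s(t + delta), delta in samples."""
    t = np.arange(len(s))
    shifted = np.interp(t + delta, t, s)   # approximate s(t + delta)
    return np.max(np.abs(shifted - s))

def alignment_step(s, noise_rms):
    """Largest delta whose misalignment error stays below the noise floor."""
    for delta in np.arange(0.5, 0.0, -0.01):
        if misalignment_error(s, delta) <= noise_rms:
            return delta
    return 0.01
```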
The idea behind this decomposition algorithm is to construct a special data structure that can be used for classification, time alignment, and overlap decomposition, all at the same time and with minimal computational expense. The data structure is constructed from the spike shapes, once they are determined, and then used repeatedly for classification and decomposition when new spikes are detected. In clustering and template-matching approaches, the largest computational expense is in comparing the observed spike to each of the clusters or templates. For single action potentials, this expense is insignificant, but if overlaps are considered then comparisons must be made for each spike combination (pair, triple, etc) and for all the alignments of the spikes with respect to each other. This quickly results in far too many possibilities to consider in a reasonable amount of time. It is possible, however, to avoid the expense of comparing the data from the observed spike to all possible cases. Using data structures and search techniques from computer science, it is possible to organize the cases in the data structure so that the spike combinations that most likely account for the data can be identified very quickly. The algorithm uses k-dimensional search trees BIB001 to quickly search the large space of possible combinations of spike shapes that could account for a given spike waveform. k-dimensional trees are a multidimensional extension of binary trees, where k refers to the number of dimensions in the cluster means. A simple method of constructing this type of search tree is to identify the dimension in the data with the largest variation and divide the data at the median of this dimension. This divides the data in two and determines the first division in the tree. This procedure is applied recursively to each subdivision until the tree extends down to the level of single data points. Each node in the tree stores which dimension is cut and the position of that cut. There are many algorithms for constructing these trees that use different strategies to ensure that the nodes in the tree are properly balanced BIB003 . To find the closest match in the tree to a new data point, the tree is traversed from top to bottom, comparing the appropriate dimensions at the cutting points of each node. If each node divides its subregion in half, then the closest match will be found in an average of O(log2 N) comparisons, where N is the number of data points stored in the tree. In the case of spike sorting, these would correspond to the means of the models. Constructing the tree requires O(N log2 N) time, but once it is set up, each nearest-neighbour search is very fast. For purposes of calculating the relative probabilities of different clusters, it is not sufficient just to identify the closest cluster. For this calculation, it is necessary to find all the means within a certain radius (which is proportional to the background noise level) of the data point, and there are also algorithms that can perform this search using k-dimensional trees in O(log2 N) time (Friedman et al 1977, Ramasubramanian and BIB003 ). The overlap decomposition algorithm is illustrated in figure 14. The first step is to find a peak in the extracellular waveform. A region around the peak (indicated by the dashed lines) is selected. These data are classified with the k-dimensional tree, which returns a list of spike sequence models and their relative probabilities.
Each sequence model is a list of spikes (possibly only a single spike) and temporal positions relative to the waveform peak. The residual waveform (the raw waveform minus the model) of each remaining model is expanded until another peak is found. The relative probabilities of each sequence are recalculated using the entire waveform so far and the improbable sequences (e.g. probability < 0.001) are discarded. The cycle is repeated recursively, again using the k-dimensional tree to classify the waveform peak in each residual waveform. The algorithm terminates when no more peaks are found over the duration of the component spike models. The algorithm returns a list of spike model sequences and their relative probabilities. It is fast enough that it can be performed in real time with only modest computing requirements. (Figure 14. (a) Step 1 of the algorithm identifies all plausible spike models in a peak region (indicated by the dashed lines). Step 2 expands each spike model sequence in the current list until another peak is found in the residual error. Step 3 calculates the likelihood of each spike model sequence and prunes from the sequence list all those that are improbable. The steps are repeated on the next peak region in the residual waveform for each sequence in the list. The algorithm returns a list of spike sequences and their relative probabilities. (b, c, d) Three different decompositions of the same waveform. The decomposition in (b) is twice as probable as that in (c) even though the latter fits the data better. The decomposition in (d) has small probability, because it does not fit the data sufficiently well.) One unforeseen consequence of a good decomposition algorithm is that there are actually many different ways to account for the same waveform. This is essentially the same overfitting problem that was mentioned in section 4.7: models with more degrees of freedom achieve better fits to the data, but can also result in less accurate predictions. This problem was addressed in Lewicki's algorithm by using Bayesian methods to determine which sequence is the most probable. For example, the four-spike sequence in figure 14(b) has twice the probability of the six-spike sequence in figure 14(c), even though the latter fits the data better (i.e. has less residual error). This approach does not make use of spike firing times, but in principle this information could also be incorporated into the calculation. One limitation of this approach is that it assumes the clusters are spherical, which is equivalent to assuming a fixed spike shape with uncorrelated Gaussian noise. One approach for using this decomposition algorithm with non-spherical clusters is to use the approach of BIB004 , which uses several spherical clusters to approximate more general types of cluster shapes.
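The sketch below combines the two ingredients described above, a k-dimensional tree over the template means and a recursive peak-expand-and-prune loop, into a highly simplified decomposition routine. It is only a sketch: the search radius, the residual-power scoring, and the fixed recursion depth stand in for the sub-sample alignment and Bayesian sequence probabilities of the published algorithm.

```python
# Simplified sketch of k-d-tree-based classification with recursive
# overlap decomposition. Template means are indexed once in a k-d tree;
# each detected peak region is matched against all templates within a
# noise-dependent radius, and each candidate is subtracted to expose
# further peaks in the residual.
import numpy as np
from scipy.spatial import cKDTree

def decompose(waveform, templates, noise_sd, max_depth=3):
    """Return decompositions as (score, [(unit_index, start_sample), ...]),
    best-scoring first. `templates` is an (n_units, n_samples) array."""
    n = templates.shape[1]
    tree = cKDTree(templates)               # built once, queried repeatedly
    radius = 3.0 * noise_sd * np.sqrt(n)    # noise-dependent search radius

    def recurse(residual, spikes, depth):
        peak = int(np.argmax(np.abs(residual)))
        if depth == 0 or abs(residual[peak]) < 3.0 * noise_sd:
            # No further plausible peak: score by (negative) residual power.
            return [(-float(np.sum(residual ** 2)), spikes)]
        start = max(0, peak - n // 2)
        window = residual[start:start + n]
        if len(window) < n:                  # pad at the end of the record
            window = np.pad(window, (0, n - len(window)))
        results = []
        for idx in tree.query_ball_point(window, r=radius):
            new = residual.copy()
            seg = new[start:start + n]       # a view into `new`
            seg -= templates[idx][:len(seg)] # subtract the candidate spike
            results += recurse(new, spikes + [(idx, start)], depth - 1)
        # If no template is close enough, stop and score what we have.
        return results or [(-float(np.sum(residual ** 2)), spikes)]

    return sorted(recurse(waveform.astype(float), [], max_depth),
                  key=lambda r: r[0], reverse=True)
```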
|
A review of methods for spike sorting : the detection and classification of neural action potentials <s> Multiple electrodes <s> A new method is described for the recording and discrimination of extracellular action potentials in CNS regions with high cellular packing density or where there is intrinsic variation in action potential amplitude during burst discharge. The method is based on the principle that cells with different ratios of distances from two electrode tips will have different spike-amplitude ratios when recorded on two channels. The two channel amplitude ratio will remain constant regardless of intrinsic variation in the absolute amplitude of the signals. The method has been applied to the rat hippocampal formation, from which up to 5 units have been simultaneously isolated. The construction of the electrodes is simple, relatively fast, and reliable, and their low tip impedances result in excellent signal to noise characteristics. <s> BIB001 </s> A review of methods for spike sorting : the detection and classification of neural action potentials <s> Multiple electrodes <s> The majority of techniques for separating multiple single-unit spike trains from a multi-unit recording rely on the assumption that different cells exhibit action potentials having unique amplitudes and waveforms. When this assumption fails, due to the similarity of spike shape among different cells or to the presence of complex spikes with declining intra-burst amplitude, these methods lead to errors in classification. In an effort to avoid these errors, the stereotrode (McNaughton et al., 1983) and later the tetrode (O'Keefe and Reece, 1993; Wilson and McNaughton, 1993) recording techniques were developed. Because the latter technique has been applied primarily to the hippocampus, we sought to evaluate its performance in the neocortex. Multi-unit recordings, using single tetrodes, were made at 28 sites in area 17 of 3 anesthetized cats. Neurons were activated with moving bars and square wave gratings. Single units were separated by identification of clusters in 2-D projections of either peak-to-peak amplitude, spike width, spike area, or the 1st versus 2nd principal components of the waveforms recorded on each channel. Using tetrodes, we recorded a total of 154 single cells (mean = 5.4, max = 9). By cross-checking the performance of the tetrode with the stereotrode and electrode, we found that the best of the 6 possible stereotrode pairs and the best of 4 possible electrodes from each tetrode yielded 102 (mean = 3.6, max = 7) and 95 (mean = 3.4, max = 6) cells, respectively. Moreover, we found that the number of cells isolated at each site by the tetrode was greater than the stereotrode or electrode in 16/28 and 28/28 cases, respectively. Thus, both stereotrodes, and particularly electrodes, often lumped 2 or more cells in a single cluster that could be easily separated by the tetrode. We conclude that tetrode recording currently provides the best and most reliable method for the isolation of multiple single units in the neocortex using a single probe. <s> BIB002
|
There are many situations when two different neurons generate action potentials having very similar shapes in the recorded waveform. This happens when the neurons are similar in morphology and about equally distant from the recording electrode. One approach to circumventing this problem is to record from multiple electrodes in the same local area BIB001 . The idea is that if two recording electrodes (stereotrodes) are used, pairs of cells will be less likely to be equidistant from both electrodes. This idea can be extended further to four electrodes (tetrodes) that provide four separate measurements of neural activity in the local area (BIB001 , Wilson and McNaughton 1993). Under the assumption that the extracellular space is electrically homogeneous, four electrodes provide the minimal number necessary to identify the spatial position of a source based only on the relative spike amplitudes on different electrodes. Having multiple recordings of the same unit from different physical locations allows additional information to be used for more accurate spike sorting. This can also reduce the problem of overlapping spikes: what appears as an overlap on one channel might be an isolated unit on another. BIB002 used recordings made with tetrodes in cat visual cortex to compare the performance of tetrodes with the best electrode pair and best single electrode. Using two-dimensional feature clustering, the tetrode recordings yielded an average of 5.4 isolated cells per site compared to 3.6 cells per site and 3.4 cells per site for the best electrode pair and best single electrode, respectively. Publicly available software for hand-clustering of tetrode data has also been described. Many of the spike-sorting techniques developed for single electrodes extend naturally to multiple electrodes. Principal component analysis can also be performed on multiple-electrode channels to obtain useful features for clustering. In this case, the principal components describe the directions of maximum variation on all channels simultaneously. It is also possible to do clustering of all the channels simultaneously using the raw waveforms. Here each cluster mean represents the mean spike shape as it appears on each of the channels. Bayesian clustering and classification methods have also been applied to tetrode data, showing improved classification accuracy compared to two-dimensional clustering methods.
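A minimal sketch of how single-electrode feature extraction extends to tetrodes is given below: the four channels of each spike are concatenated before principal component analysis, so the components capture variation on all channels simultaneously. The array shapes are assumptions.

```python
# Minimal sketch: feature extraction for tetrode data. The four channels
# of each spike snippet are concatenated so that PCA finds the directions
# of maximum variation across all channels simultaneously; the resulting
# features feed into any of the clustering methods above.
import numpy as np
from sklearn.decomposition import PCA

def tetrode_features(snippets, n_components=3):
    """snippets: (n_spikes, 4, n_samples) -- one waveform per channel."""
    stacked = snippets.reshape(snippets.shape[0], -1)  # concatenate channels
    return PCA(n_components=n_components).fit_transform(stacked)
```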
|
A review of methods for spike sorting : the detection and classification of neural action potentials <s> Summary <s> A number ofmultiunit neural signal classification techniques are compared in their theoretical separation properties and in their empirical performance in classifying two channel recordings from the ventral nerve cord of the cockroach. The techniques include: the use of amplitude and conduction time measures, template matching, the principal components method, optimal filtering, and maximin discrimination. The noise encountered under different situations is characterized, permitting the comparisons to be made as functions of the experimental conditions. Recommendations are made as to the appropriate use of the techniques. <s> BIB001 </s> A review of methods for spike sorting : the detection and classification of neural action potentials <s> Summary <s> The majority of techniques for separating multiple single-unit spike trains from a multi-unit recording rely on the assumption that different cells exhibit action potentials having unique amplitudes and waveforms. When this assumption fails, due to the similarity of spike shape among different cells or to the presence of complex spikes with declining intra-burst amplitude, these methods lead to errors in classification. In an effort to avoid these errors, the stereotrode (McNaughton et al., 1983) and later the tetrode (O'Keefe and Reece, 1993; Wilson and McNaughton, 1993) recording techniques were developed. Because the latter technique has been applied primarily to the hippocampus, we sought to evaluate its performance in the neocortex. Multi-unit recordings, using single tetrodes, were made at 28 sites in area 17 of 3 anesthetized cats. Neurons were activated with moving bars and square wave gratings. Single units were separated by identification of clusters in 2-D projections of either peak-to-peak amplitude, spike width, spike area, or the 1st versus 2nd principal components of the waveforms recorded on each channel. Using tetrodes, we recorded a total of 154 single cells (mean = 5.4, max = 9). By cross-checking the performance of the tetrode with the stereotrode and electrode, we found that the best of the 6 possible stereotrode pairs and the best of 4 possible electrodes from each tetrode yielded 102 (mean = 3.6, max = 7) and 95 (mean = 3.4, max = 6) cells, respectively. Moreover, we found that the number of cells isolated at each site by the tetrode was greater than the stereotrode or electrode in 16/28 and 28/28 cases, respectively. Thus, both stereotrodes, and particularly electrodes, often lumped 2 or more cells in a single cluster that could be easily separated by the tetrode. We conclude that tetrode recording currently provides the best and most reliable method for the isolation of multiple single units in the neocortex using a single probe. <s> BIB002 </s> A review of methods for spike sorting : the detection and classification of neural action potentials <s> Summary <s> Neuronal noise sources and systematic variability in the shape of a spike limit the ability to sort multiple unit waveforms recorded from nervous tissue into their single neuron constituents. Here we present a procedure to efficiently sort spikes in the presence of noise that is anisotropic, i.e., dominated by particular frequencies, and whose amplitude distribution may be non-Gaussian, such as occurs when spike waveforms are a function of interspike interval. Our algorithm uses a hierarchical clustering scheme. 
First, multiple unit records are sorted into an overly large number of clusters by recursive bisection. Second, these clusters are progressively aggregated into a minimal set of putative single units based on both similarities of spike shape as well as the statistics of spike arrival times, such as imposed by the refractory period. We apply the algorithm to waveforms recorded with chronically implanted micro-wire stereotrodes from neocortex of behaving rat. Natural extensions of the algorithm may be used to cluster spike waveforms from records with many input channels, such as those obtained with tetrodes and multiple site optical techniques. <s> BIB003
|
Which method is best? An early comparison of feature-based methods was done by BIB001 , who concluded that template-matching methods yielded the best classification accuracy compared to spike-shape features, principal components, and optimal filters. Template-based Bayesian clustering and classification has been compared to the commercial package Brainwaves, which relied on the user to define the two-dimensional cluster boundaries by hand. The methods gave similar results for well separated clusters, but the Bayesian methods were much more accurate for spike shapes that were similar. Template-based methods can fail for neurons that burst and can become increasingly inaccurate if there is electrode drift. The cluster grouping method of BIB003 gives better classification in this situation compared to template-based methods. For overlapping action potentials, the decomposition method described above was shown to be nearly 100% accurate for action potentials that are significantly above the noise level. Perhaps the most promising of recent methods for measuring the activity of neural populations is not an algorithm, but the recording technique of using multiple electrodes. Many of the difficult problems encountered with single-electrode recording vanish with multiple-electrode recordings. BIB002 showed that tetrodes yielded about two more isolated cells per site compared to the best single electrode, when the data were sorted with a simple two-dimensional clustering procedure. Bayesian clustering and classification shows promise to improve this yield even more BIB003 . A more practical question might be: what is the simplest method that satisfies experimental demands? For many researchers this is still a single electrode with threshold detection. Although simple, this technique can be time consuming and biased. Not only can neurophysiologists waste hours searching for well isolated cells, but in the end this search is biased towards cells that produce the largest action potentials, which may not be representative of the entire population. Software spike sorting can reduce these biases, but this approach is still not in widespread use because of the difficulty in implementing even the simplest algorithms and also the added time required for obtaining more data. With modern computers and software this is no longer the case. If the raw waveform data can be transferred to the computer for software analysis, many of the algorithms described here can be implemented with simple programs using software packages such as Matlab, Octave, or Mathematica. In the past several years, there has been much progress in spike sorting. It is now possible to replace much of the decision making and user interaction requirements of older methods with more accurate automated algorithms. There are still many problems that limit the robustness of many of the current methods. These include those discussed in section 7, such as non-stationary background noise, electrode drift and proper spike alignment. Possibly the most restrictive assumption of most methods is the assumption of stationary spike shapes. The method of BIB003 addresses this problem to some extent, as can methods that use multiple electrodes, but currently there are no methods that can accurately classify highly overlapping groups of bursting action potentials. Decomposing overlapping action potentials with non-stationary shapes is largely an unsolved problem.
Techniques that use multiple electrodes and incorporate both spike shape and spike timing information show promise in surmounting this problem.
|
A Survey of Formal Verification for Business Process Modeling <s> Introduction <s> Diese Arbeit befasst sich mit den begrifflichen Grundlagen einer Theorie der Kommunikation. Die Aufgabe dieser Theorie soll es sein, moglichst viele Erscheinungen bei der Informationsubertragung und Informationswandlung in einheitlicher und exakter Weise zu beschreiben. ::: The theory of automata is shown not capable of representing the actual physical flow of information in the solution of a recursive problem. The argument proceeds as follows: ::: 1. We assume the following postulates: ::: a) there exists an upper bound on the speed of signals; ::: b) there exists an upper bound on the density with which information can be stored. ::: ::: 2. Automata of fixed, finite size can recognize, at best, only iteratively defined classes of input sequences. (See Kleene (11) and Copi, Elgot, and Wright (8).) ::: ::: 3. Recursively defined classes of input sequences that cannot be defined iteratively can be recognized only by automata of unbounded size. ::: ::: 4. In order for an automaton to solve a (soluble) recursive problem, the possibility must be granted that it can be extended unboundedly in whatever way might be required. ::: ::: 5. Automata (as actual hardware) formulated in accordance with automata theory will, after a finite number of extensions, conflict with at least one of the postulates named above. ::: Suitable conceptual structures for an exact theory of communication are then discussed, and a theory of communication proposed. ::: All of the really useful results of automata theory may be expressed by means of these new concepts. Moreover, the results retain their usefulness and the new nrocedure has definite advantages over the older ones. ::: The proposed representation differs from each of the presently known theories concerning information on at least one of the following essential points: ::: 1. The existence of a metric is assumed for either space nor time nor for other physical magnitudes. ::: 2. Time is introduced as a strictly local relation between states. ::: 3. The objects of the theory are discrete, and they are combined and produced only by means of strictly finite techniques. ::: ::: The following conclusions drawn from the results of this work may be cited as of some practical interest: ::: 1. The tolerance requirements for the response characteristics of computer components can be substantially weakened if the computer is suitably structured. ::: 2. It is possible to design computers structurally in such a way that they are asynchronous, all parts operating in parallel, and can be extended arbitrarily without interrupting their computation. ::: 3. For complicated organizational processes of any given sort the theory yields a means of representation that with equal rigor and simplicity accomplishes more than the theory of synchronous automata. <s> BIB001 </s> A Survey of Formal Verification for Business Process Modeling <s> Introduction <s> Glossary Part I. Communicating Systems: 1. Introduction 2. Behaviour of automata 3. Sequential processes and bisimulation 4. Concurrent processes and reaction 5. Transitions and strong equivalence 6. Observation equivalence: theory 7. Observation equivalence: examples Part II. The pi-Calculus: 8. What is mobility? 9. The pi-calculus and reaction 10. Applications of the pi-calculus 11. Sorts, objects and functions 12. Commitments and strong bisimulation 13. Observation equivalence and examples 14. Discussion and related work Bibliography Index. 
<s> BIB002
|
Recently, enterprise information systems have been designed based on service-oriented architecture. A key response to the changing business environment is the construction of flexible business processes, which lies at the core of enterprise information systems development. It is common knowledge that business process modeling (BPM) is effective for such development. Developers can generally model business processes with a modeling notation, e.g., BPMN or the activity diagrams of UML . A diagram modeled with such a notation is simple and intuitively understandable at a glance, and the notation is designed so that anyone can model with it easily. Moreover, the notation is closely related to web services; a diagram can be converted into the BPEL XML format . However, modeling in general involves arbitrariness and lacks strictness. A diagram modeled with the notation may admit various interpretations, and several different diagrams may denote the same process. Thus, before utilizing BPM, we must define strict semantics for the models and formally verify them. There have been many efforts to validate the strictness of such diagrams: automated tools that debug grammatical errors in BPMN and convert diagrams into BPEL , formal methods for verifying diagrams based on the π-calculus BIB002 or Petri Nets BIB001 , techniques that prove consistency with model checking , and so on. In this paper we present a survey of existing proposals for formal verification techniques for business process diagrams and compare them with each other with respect to motivations, methods, and logics. We also discuss some conclusive considerations and our direction for future work. The most important purpose of BPM is to yield a profit for the enterprise after business reform. Thus, we should also verify the profit generated by a model. In this paper we therefore discuss the value of business process models and properties for their evaluation. We hope this survey helps designers and developers of enterprise information systems solve the issues they face and satisfy industrial needs.
|
A Survey of Formal Verification for Business Process Modeling <s> Automata <s> This book is a rigorous exposition of formal languages and models of computation, with an introduction to computational complexity. The authors present the theory in a concise and straightforward manner, with an eye out for the practical applications. Exercises at the end of each chapter, including some that have been solved, help readers confirm and enhance their understanding of the material. This book is appropriate for upper-level computer science undergraduates who are comfortable with mathematical arguments. <s> BIB001 </s> A Survey of Formal Verification for Business Process Modeling <s> Automata <s> In this paper we show how we can translate Web Services described by WS-CDL into a timed automata orchestration, and more specifically we are interested in Web services with time restrictions. Our starting point are Web Services descriptions written in WSBPEL- WSCDL (XML-based description languages). These descriptions are then automatically translated into timed automata, and then, we use a well known tool that supports this formalism (UPPAAL) to simulate and analyse the system behaviour. As illustration we take a particular case study, an airline ticket reservation system. <s> BIB002 </s> A Survey of Formal Verification for Business Process Modeling <s> Automata <s> Recently, a promising programming model called Orc has been proposed to support a structured way of orchestrating distributed web services. Orc is intuitive because it offers concise constructors to manage concurrent communication, time-outs, priorities, failure of sites or communication and so forth. The semantics of Orc is also precisely defined. However, there is no verification tool available to verify critical properties against Orc models. Instead of building one from scratch, we believe the existing mature model-checkers can be reused. In this work, we first define a Timed Automata semantics for the Orc language, which we prove is semantically equivalent to the original operational semantics of Orc. Consequently, Timed Automata models are systematically constructed from Orc models. The practical implication of the construction is that tool supports for Timed Automata, e.g., UPPAAL, can be used to model check Orc models. An experimental tool is implemented to automate our approach. <s> BIB003
|
Automata are a basic and widely used model for the formal specification of systems BIB001 . An automaton consists of a set of states, actions, transitions between states, and an initial state. Labels denote the transitions from one state to another. Many specification models for expressing system behavior derive from automata. In the reference , the authors propose a framework to analyze and verify properties of BPMN diagrams converted into the BPEL format that communicate via asynchronous XML messages. The framework first converts the processes to a particular type of automata in which every transition is equipped with a guard in XPath format, after which these guarded automata are translated into Promela (Process or Protocol Meta Language) for the SPIN model checker . Consequently, SPIN can be used to verify whether business process models satisfy properties formalized in LTL (Linear Time Temporal Logic). In the reference BIB002 , the authors show a case study that automatically converts business processes written in BPEL/WS-CDL to timed automata and subsequently verifies them with the UPPAAL model checker [49] . The authors are currently implementing a tool for the automatic translation that utilizes UPPAAL. In the reference BIB003 , the authors propose a framework to automatically verify business processes that are modelled in Orc . The authors define a formal timed-automata semantics for Orc expressions, which they prove conforms to Orc's operational semantics. Accordingly, one can formally verify Orc models with UPPAAL. The paper also shows a simple case study. Thus, to verify business process diagrams, the current automata-based efforts first convert the diagrams to XML formats (e.g., BPEL, XPDL, WS-CDL, Orc). They then map these formats onto automata, which model-checking tools can verify (Figure 1 ). Besides the above automata models, team automata and I/O automata may be helpful for the verification. Team automata allow one to specify the components of a system separately, to describe their interactions, and to reuse the components. Their advantage is a flexible description of communication services among distributed systems, extending I/O automata. This advantage enables team automata to describe formal models of secure web service compositions.
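As a toy illustration of the kind of exhaustive check these tools perform, and emphatically not of SPIN's or UPPAAL's actual machinery, the sketch below explores the state space of a small guarded automaton for a made-up order-handling process and reports states from which the process can never terminate.

```python
# Toy illustration of explicit-state exploration of a guarded automaton
# to check a safety property: no reachable non-final state may lack an
# outgoing transition (i.e. no deadlock). The process is invented.
from collections import deque

transitions = {  # state -> [(guard over message, next_state)]
    'await_order': [(lambda m: m == 'order', 'check_stock')],
    'check_stock': [(lambda m: m == 'in_stock', 'ship'),
                    (lambda m: m == 'no_stock', 'cancel')],
    'ship':        [(lambda m: True, 'done')],
    'cancel':      [],          # deadlock: cancellation is never confirmed
    'done':        [],
}
messages = ['order', 'in_stock', 'no_stock', 'confirm']

def reachable(start):
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for guard, nxt in transitions[state]:
            for msg in messages:
                if guard(msg) and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

stuck = [s for s in reachable('await_order')
         if s != 'done' and not transitions[s]]
print(stuck)  # -> ['cancel']: a deadlock the verification would report
```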
|
A Survey of Formal Verification for Business Process Modeling <s> Petri Net <s> This paper suggests that input and output are basic primitives of programming and that parallel composition of communicating sequential processes is a fundamental program structuring method. When combined with a development of Dijkstra's guarded command, these concepts are surprisingly versatile. Their use is illustrated by sample solutions of a variety of familiar programming exercises. <s> BIB001 </s> A Survey of Formal Verification for Business Process Modeling <s> Petri Net <s> We present an axiom system ACP, for communicating processes with silent actions ('z-steps'). The system is an extension of ACP, Algebra of Communicating Processes, with Milner's z-laws and an explicit abstraction operator. By means of a model of finite acyclic process graphs for ACP, syntactic properties such as consistency and conservativity over ACP are proved. Furthermore, the Expansion Theorem for ACP is shown to carry over to ACP~. Finally, termination of rewriting terms according to the ACP~ axioms is proved using the method of recursive path orderings. <s> BIB002 </s> A Survey of Formal Verification for Business Process Modeling <s> Petri Net <s> Web services -- Web-accessible programs and devices - are a key application area for the Semantic Web. With the proliferation of Web services and the evolution towards the Semantic Web comes the opportunity to automate various Web services tasks. Our objective is to enable markup and automated reasoning technology to describe, simulate, compose, test, and verify compositions of Web services. We take as our starting point the DAML-S DAML+OIL ontology for describing the capabilities of Web services. We define the semantics for a relevant subset of DAML-S in terms of a first-order logical language. With the semantics in hand, we encode our service descriptions in a Petri Net formalism and provide decision procedures for Web service simulation, verification and composition. We also provide an analysis of the complexity of these tasks under different restrictions to the DAML-S composite services we can describe. Finally, we present an implementation of our analysis techniques. This implementation takes as input a DAML-S description of a Web service, automatically generates a Petri Net and performs the desired analysis. Such a tool has broad applicability both as a back end to existing manual Web service composition tools, and as a stand-alone tool for Web service developers. <s> BIB003 </s> A Survey of Formal Verification for Business Process Modeling <s> Petri Net <s> Web services composition is an emerging paradigm for application integration within and across organizational boundaries. A landscape of languages and techniques for web services composition has emerged and is continuously being enriched with new proposals from different vendors and coalitions. However, little effort has been dedicated to systematically evaluate the capabilities and limitations of these languages and techniques. The work reported in this paper is a step in this direction. It presents an in-depth analysis of the Business Process Execution Language for Web Services (BPEL4WS) with respect to a framework composed of workflow and communication patterns. <s> BIB004 </s> A Survey of Formal Verification for Business Process Modeling <s> Petri Net <s> The Internet is going through several major changes. It has become a vehicle of Web services rather than just a repository of information. 
Many organizations are putting their core business competencies on the Internet as a collection of Web services. An important challenge is to integrate them to create new value-added Web services in ways that could never be foreseen forming what is known as Business-to-Business (B2B) services. Therefore, there is a need for modeling techniques and tools for reliable Web service composition. In this paper, we propose a Petri net-based algebra, used to model control flows, as a necessary constituent of reliable Web service composition process. This algebra is expressive enough to capture the semantics of complex Web service combinations. <s> BIB005 </s> A Survey of Formal Verification for Business Process Modeling <s> Petri Net <s> Web services aim to support efficient integration of applications over Web. Most Web services are stateful, such as services for business processes, and they converse with each other via properly ordered interactions, instead of individual unrelated invocations. In order to address efficient integration of conversational Web services, we create a unified specification model for both conversation protocol and composition; we propose methods to integrate a partner service with complex conversation protocol into a composition of Web services; assure the correctness of composition by formal verification. The mapping between our model and BPEL4WS is also discussed. <s> BIB006 </s> A Survey of Formal Verification for Business Process Modeling <s> Petri Net <s> The emerging paradigm of Web services opens a new way of Web application design and development to quickly develop and deploy Web applications by integrating independently published Web services components to conduct new business transactions. As research aiming at facilitating Web services integration and verification, WS-Net is an executable architectural description language incorporating the semantics of colored Petri-net with the style and understandability of object-oriented concepts. WS-Net describes each Web services component in three layers: interface net declares the services that the component provides to other components; interconnection net specifies the services that the component acquires to accomplish its mission; and interoperation net describes the internal operational behaviors of the component. As an architectural model that formalizes the architectural topology and behaviors of each Web services component as well as the entire system, WS-Net facilitates the verification and monitoring of Web services integration. <s> BIB007 </s> A Survey of Formal Verification for Business Process Modeling <s> Petri Net <s> Whether two web services are compatible depends not only on static properties like the correct typing of their message parameters, but also on their dynamic behaviour. Providing a simple description of the service behaviour based on process-algebraic or automata-based formalisms can help detecting many subtle incompatibilities in their interaction. Moreover, this compatibility checking can to a large extent be automated if we define the notion of compatibility in a sufficiently formal way. Based on a simple behavioural representation, we survey, propose and compare a number of formal definitions of the compatibility notion, and we illustrate them on simple examples. 
<s> BIB008 </s> A Survey of Formal Verification for Business Process Modeling <s> Petri Net <s> The Business Process Execution Language for Web Service, known as BPEL4WS, more recently as WS-BPEL (or BPEL for short) [1], is a process definition language geared towards Service-Oriented Computing (SOC) and layered on top of the Web services technology stack. In BPEL, the logic of the interactions between a given service and its environment is described as a composition of communication actions. These communication actions are interrelated by control-flow dependencies expressed through constructs close to those found in workflow definition languages. In particular, BPEL incorporates two sophisticated branching and synchronisation constructs, namely “control links” and “join conditions”, which can be found in a class of workflow models known as synchronising workflows formalised in terms of Petri nets in [3]. <s> BIB009 </s> A Survey of Formal Verification for Business Process Modeling <s> Petri Net <s> We present a Petri net semantics for the Business Process Execution Language for Web Services (BPEL). Our semantics covers the standard behaviour of BPEL as well as the exceptional behaviour (e.g. faults, events, compensation). The semantics is implemented as a parser that translates BPEL specifications into the input language of the Petri net model checking tool LoLA. We demonstrate that the semantics is well suited for computer aided verification purposes. <s> BIB010
|
Petri Nets are a framework for modeling concurrent systems. They capture many basic aspects of concurrent systems simply, mathematically, and conceptually, and many theories of concurrent systems therefore derive from Petri Nets. Moreover, because Petri Nets have an easily understandable graphical notation, they have been widely applied. Petri Nets appear often in BPM and are well suited to capturing process control flows BIB004 . In particular, Petri Nets can detect dead paths in business process models, i.e., paths whose preconditions are never satisfied. The paper shows how to map all BPMN diagram constructs onto labeled Petri Nets. This output can subsequently be used to verify BPEL processes with the open source tools BPEL2PNML and WofBPEL BIB009 . In the reference BIB003 , the authors define the semantics of a relevant subset of BPEL and OWL-S [51] in terms of first-order logic. Based on this semantics they formalize business processes in Petri Nets, complete with an operational semantics. They also develop a tool to describe and automatically verify compositions of business processes. In the reference BIB005 , the authors apply a Petri-net-based algebra to modeling business processes, based on control flows. The paper BIB006 proposes a Petri-net-based design and verification tool for web service composition. The tool can visualize, create, and verify business processes. The authors are now improving the graphical user interface, which can be used both to aid business process modeling and to edit Petri Nets and BPEL together. The paper BIB007 introduces a Petri-net-based architectural description language, named WS-Net, in which web-service-oriented systems can be modeled, and presents a simple example. To handle real applications and to detect errors in business processes, the authors are currently developing an automatic translation tool from WSDL to WS-Net. The paper BIB010 proposes a formal Petri Net semantics for BPEL which covers exception handling and compensation. Moreover, the authors present a parser that automatically converts business processes into Petri Nets; consequently, the semantics enables many Petri Net verification tools to analyze business processes automatically. In the reference , the authors propose a framework that translates Orc into colored Petri Nets; colored Petri Nets have been proposed to model large-scale systems more effectively. The framework and tool handle recursion and data handling. Moreover, because they can simulate and verify the behavior of process models at the design phase of information systems, users can detect and correct errors beforehand; they therefore help raise the reliability of business process diagrams. Petri Nets are a traditional and well-established technique, so many verification methods and tools exist. The essence of the above efforts is how to translate business process diagrams into Petri Nets; once translated, a rich variety of verification tools is available. However, not all components of business process modeling notations can be translated into Petri Nets. For instance, BPMN has various gateways, event triggers, loop activities, control flows, and nested/embedded subprocesses, and it is difficult to define the correspondence of these objects to Petri Nets. There is room for argument on the translation.
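The sketch below illustrates the core Petri Net semantics that these translations target: a marking assigns tokens to places, a transition is enabled when all its input places hold tokens, and firing moves tokens from inputs to outputs. A breadth-first walk over the reachable markings then detects a dead transition. The net is a made-up example, not the output of any cited tool.

```python
# Minimal sketch of Petri Net firing semantics plus a reachability-based
# check for dead transitions (transitions that can never fire from the
# initial marking). The net itself is an invented example.
from collections import deque

# transition -> (input places, output places)
net = {
    'receive':  (('start',),           ('received',)),
    'approve':  (('received',),        ('approved',)),
    'archive':  (('approved',),        ('end',)),
    'escalate': (('received', 'flag'), ('approved',)),  # 'flag' never marked
}

def enabled(marking, t):
    return all(marking.get(p, 0) > 0 for p in net[t][0])

def fire(marking, t):
    m = dict(marking)
    for p in net[t][0]:
        m[p] -= 1
    for p in net[t][1]:
        m[p] = m.get(p, 0) + 1
    return m

def dead_transitions(initial):
    seen, queue, fired = set(), deque([initial]), set()
    while queue:
        m = queue.popleft()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        for t in net:
            if enabled(m, t):
                fired.add(t)
                queue.append(fire(m, t))
    return set(net) - fired

print(dead_transitions({'start': 1}))  # -> {'escalate'}: a dead path
```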
Typical process algebras include Milner's Calculus of Communicating Systems (CCS), Hoare's Communicating Sequential Processes (CSP BIB001 ), the Algebra of Communicating Processes (ACP BIB002 ) by Bergstra and Klop, and the Language of Temporal Ordered Systems (LOTOS) ISO standard. Process algebras are strict and well-established theories that support the automatic verification of properties of system behavior, as Petri Nets do. They also provide a rich theory of bisimulation analysis. This analysis is helpful for verifying whether one service can substitute for another in a composition, or whether a service is redundant BIB008 .
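The sketch below illustrates this bisimulation analysis with a naive partition-refinement procedure over two made-up labelled transition systems; production tools use far more efficient algorithms, so this shows only the idea.

```python
# Naive partition-refinement sketch of strong bisimulation checking.
# States are repeatedly split until states in the same block have
# transitions (per label) into the same blocks; two states are strongly
# bisimilar iff they end up in the same block. The services are invented.
lts = {  # state -> set of (action, next_state)
    'p0': {('order', 'p1')}, 'p1': {('ship', 'p2'), ('cancel', 'p2')},
    'p2': set(),
    'q0': {('order', 'q1')}, 'q1': {('ship', 'q2'), ('cancel', 'q3')},
    'q2': set(), 'q3': set(),
}

def bisimulation_blocks(lts):
    blocks = [set(lts)]                  # start with one block of all states
    while True:
        index = {s: i for i, b in enumerate(blocks) for s in b}
        sig = lambda s: frozenset((a, index[t]) for a, t in lts[s])
        refined = []
        for b in blocks:
            groups = {}
            for s in b:
                groups.setdefault(sig(s), set()).add(s)
            refined += list(groups.values())
        if len(refined) == len(blocks):  # fixpoint: no block was split
            return refined
        blocks = refined

blocks = bisimulation_blocks(lts)
print(any({'p0', 'q0'} <= b for b in blocks))  # True: p0 and q0 bisimilar
```

Here the two services differ in how they represent their final states, yet the analysis shows one can substitute for the other.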
|
A Survey of Formal Verification for Business Process Modeling <s> Figure 5 The Revised Process Model with Usability <s> We describe a representation and set of inference techniques for the dynamic construction of probabilistic and decision-theoretic models expressed as networks. In contrast to probabilistic reasoning schemes that rely on fixed models, we develop a representation that implicitly encodes a large number of possible model structures. Based on a particular query and state of information, the system constructs a customized belief net for that particular situation. We develop an interpretation of the network construction process in terms of the implicit networks encoded in the database. A companion method for constructing belief networks with decisions and values (decision networks) is also developed that uses sensitivity analysis to focus the model building process. Finally, we discuss some issues of control of model construction and describe examples of constructing networks. <s> BIB001 </s> A Survey of Formal Verification for Business Process Modeling <s> Figure 5 The Revised Process Model with Usability <s> Probabilistic graphical models and decision graphs are powerful modeling tools for reasoning and decision making under uncertainty. As modeling languages they allow a natural specification of problem domains with inherent uncertainty, and from a computational perspective they support efficient algorithms for automatic construction and query answering. This includes belief updating, finding the most probable explanation for the observed evidence, detecting conflicts in the evidence entered into the network, determining optimal strategies, analyzing for relevance, and performing sensitivity analysis. The book introduces probabilistic graphical models and decision graphs, including Bayesian networks and influence diagrams. The reader is introduced to the two types of frameworks through examples and exercises, which also instruct the reader on how to build these models. The book is a new edition of Bayesian Networks and Decision Graphs by Finn V. Jensen. The new edition is structured into two parts. The first part focuses on probabilistic graphical models. Compared with the previous book, the new edition also includes a thorough description of recent extensions to the Bayesian network modeling language, advances in exact and approximate belief updating algorithms, and methods for learning both the structure and the parameters of a Bayesian network. The second part deals with decision graphs, and in addition to the frameworks described in the previous edition, it also introduces Markov decision processes and partially ordered decision problems. The authors also provide a well-founded practical introduction to Bayesian networks, object-oriented Bayesian networks, decision trees, influence diagrams (and variants hereof), and Markov decision processes. give practical advice on the construction of Bayesian networks, decision trees, and influence diagrams from domain knowledge. give several examples and exercises exploiting computer systems for dealing with Bayesian networks and decision graphs. present a thorough introduction to state-of-the-art solution and analysis algorithms. The book is intended as a textbook, but it can also be used for self-study and as a reference book. 
<s> BIB002 </s> A Survey of Formal Verification for Business Process Modeling <s> Figure 5 The Revised Process Model with Usability <s> Stochastic logic programs (SLPs) are logic programs with parameterised clauses which define a log-linear distribution over refutations of goals. The log-linear distribution provides, by marginalisation, a distribution over variable bindings, allowing SLPs to compactly represent quite complex distributions.We analyse the fundamental statistical properties of SLPs addressing issues concerning infinite derivations, 'unnormalised’ SLPs and impure SLPs. After detailing existing approaches to parameter estimation for log-linear models and their application to SLPs, we present a new algorithm called failure-adjusted maximisation (FAM). FAM is an instance of the EM algorithm that applies specifically to normalised SLPs and provides a closed-form for computing parameter updates within an iterative maximisation approach. We empirically show that FAM works on some small examples and discuss methods for applying it to bigger problems. <s> BIB003
|
To verify such uncertain decisions, various logics introducing probability into first-order predicate logic have been proposed BIB003 . These studies make it possible to generate Bayesian networks BIB002 from predicate logic expressions based on knowledge-based model construction BIB001 . Since some business processes flow through non-programmable decisions, business process diagrams with uncertainty have to be verified by such logics. If the properties of business process models for Bayesian networks are defined (e.g., Figure 6 ), we may verify the diagrams based on probabilistic inference.
|
A Survey of Formal Verification for Business Process Modeling <s> Concluding Remarks <s> Statechart Diagrams provide a graphical notation for describing dynamic aspects of system behaviour within the Unified Modelling Language (UML). In this paper we present a translation from a subset of UML Statechart Diagrams - covering essential aspects of both concurrent behaviour, like sequentialisation, parallelism, non-determinism and priority, and state refinement - into PROMELA, the specification language of the SPIN model checker. SPIN is one of the most advanced analysis and verification tools available nowadays. Our translation allows for the automatic verification of UML Statechart Diagrams. The translation is simple, proven correct, and promising in terms of state space representation efficiency. <s> BIB001 </s> A Survey of Formal Verification for Business Process Modeling <s> Concluding Remarks <s> The Unified Modelling Language (UML) is a standardised notation for describing object oriented software designs. We present vUML, a tool that automatically verifies UML models where the behaviour of the objects is described using UML Statecharts diagrams. The tool uses the SPIN model checker to perform the verification, but the user does not have to know how to use SPIN or the PROMELA language. If an error is found during the verification, the tool creates a UML sequence diagram showing how to reproduce the error in the model. <s> BIB002 </s> A Survey of Formal Verification for Business Process Modeling <s> Concluding Remarks <s> Abstract The Unified Modeling Language provides two complementary notations, state machines and collaborations, for the specification of dynamic system behavior. We describe a prototype tool, HUGO , that is designed to automatically verify whether the interactions expressed by a collaboration can indeed be realized by a set of state machines. We compile state machines into a PROMELA model and collaborations into sets of Buchi automata (“never claims”). The model checker SPIN is called upon to verify the model against the automata. <s> BIB003 </s> A Survey of Formal Verification for Business Process Modeling <s> Concluding Remarks <s> Chance discovery is to become aware of a chance and to explain its significance, especially if the chance is rare and its significance is unnoticed. This direction matches with various real requirements in human life. This paper presents the significance, viewpoints, theories, methods, and future work of chance discovery. Three keys for the progress are extracted from fundamental discussions on how to realize chance discovery: (1) communication, (2) imagination, and (3) data mining. As an approach to chance discovery, visualized data mining methods are formalized as tools aiding chance discoveries on the basis of these keys. <s> BIB004
|
In this paper, we have presented formal verification techniques which simulate and verify business process models at the design phase of enterprise information systems. These techniques can detect and correct errors in the models as early as possible, and in any case before implementation. We have also outlined future work on the formal verification of BPM. However, the comparison in this paper surveyed only the basic logics and the aims of the verification; we must also define other, quantitative criteria in order to choose which logics or methods best suit the formal verification of BPM. Moreover, there are well-established practices which verify UML state machine diagrams for behavior with model checking BIB001 BIB002 [31] BIB003 . We should compare BPM verification with such studies. One research direction that we would like to deepen in future work is to determine the financial characteristics that each of the languages and models is able to describe, in order to define a profitable business process. We are now discussing whether the chance discovery process BIB004 can be applied to the verification from the financial and business administration viewpoint.
|
Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> C. Structured Data Capture <s> There is a growing need for patient-specific and holistic modelling of the heart to support comprehensive disease assessment and intervention planning as well as prediction of therapeutic outcomes. We propose a patient-specific model of the whole human heart, which integrates morphology, dynamics and haemodynamic parameters at the organ level. The modelled cardiac structures are robustly estimated from four-dimensional cardiac computed tomography (CT), including all four chambers and valves as well as the ascending aorta and pulmonary artery. The patient-specific geometry serves as an input to a three-dimensional Navier–Stokes solver that derives realistic haemodynamics, constrained by the local anatomy, along the entire heart cycle. We evaluated our framework with various heart pathologies and the results correlate with relevant literature reports. <s> BIB001 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> C. Structured Data Capture <s> The American Medical Association asked RAND Health to characterize the factors that affect physician professional satisfaction. RAND researchers sought to identify high-priority determinants of professional satisfaction by gathering data from 30 physician practices in six states, using a combination of surveys and semistructured interviews. This article presents the results of the subsequent analysis. <s> BIB002 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> C. Structured Data Capture <s> BACKGROUND ::: Even though it takes up such a large part of all clinicians' working day the medical literature on documentation and its value is sparse. ::: ::: ::: METHODS ::: Medline searches combining the terms medical records, documentation, time, and value or efficacy or benefit yielded only 147 articles. This review is based on the relevant articles selected from this search and additional studies gathered from the personal experience of the authors and their colleagues. ::: ::: ::: RESULTS ::: Documentation now occupies a quarter to half of doctors' time yet much of the information collected is of dubious or unproven value. Most medical records departments still use the traditional paper chart, and there is considerable debate on the benefits of electronic medical records (EMRs). Although EMRs contains a lot more information than a paper record clinicians do not find it easy to getting useful information out of them. Unlike the paper chart narrative is difficult to enter into most EMRs so that they do not adequately communicate the patient's "story" to clinicians. Recent innovations have the potential to address these issues. ::: ::: ::: CONCLUSION ::: Although documentation is widespread throughout the health care industry there has been almost no formal research into its value, on how to enhance its value, or on whether the time spent on it has negative effects on patient care. <s> BIB003
|
EHRs can only achieve their full potential if the time and cost associated with data capture can be kept under control. While a good deal of clinical data can be obtained from other venues such as laboratory or radiology systems or from devices (e.g., vital signs, ventilators), a significant amount of data must be entered by providers. Because of the time and effort required for providers to capture structured data, they often question whether there is sufficient value to warrant the negative impact on productivity BIB003 , BIB002 . Contemporary EHRs are estimated to require an additional 48 min per day, much of which is devoted to documentation. Healthcare is complex, which is also reflected in the data: there are hundreds of thousands of clinical concepts that have to be represented. In order to accommodate this scale and simplify representations, coding systems have been adopted for clinical concepts. The concept of heart failure, for example, can be represented in the International Classification of Diseases, Version 9, Clinical Modification as "428.0." This approach facilitates key-value approaches to representing data. Unfortunately, there are multiple coding systems for most clinical concepts, so heart failure can also be represented by I50 (ICD-10), 16209 (DiseaseDB), D00633 (MeSH), 42343007 (SNOMED), and others. Even more unfortunately, a good deal of data are coded using idiosyncratic clinical codes that are unique to a specific healthcare delivery system. This variation means that using the data often requires mapping or translation between coding systems, which usually requires substantial human effort and, in some cases, a specific data model. In addition to direct entry by providers or their surrogates, structured data can be derived directly from unstructured data including free text, images, and other signals. Radiology involves the acquisition, analysis, storage, and handling of radiological images and certainly involves huge amounts of data, in particular when the analysis involves time, as in angiography, or all three spatial dimensions, as in whole-body screening. Pathology involves the analysis of tissue, cell, and body fluid samples, typically via microscopic imaging. As pathology is digitized, increasing amounts of digital data are generated and need to be handled and stored. The standard practice is that medical specialists interpret the radiological and pathological images and describe the findings in written free-text or unstructured reports, although there is a trend toward template-based semistructured reporting. The computerized analysis of radiological and pathological images is an established research area involving sophisticated algorithms and is becoming increasingly clinically relevant BIB001 . The analysis typically involves some form of machine learning, and the emerging field of deep learning has an increasing impact. Analysis increasingly generates qualitative and quantitative labels or tags, which can be used in integrated analytics studies. Written text is a major medium: the exact numbers vary, but a significant proportion of the clinically relevant information is only documented in textual form. Besides radiological and pathological reports, medically relevant textual sources are reports from other departments, notes, referral letters, and discharge letters.
Both researchers and commercial developers have devoted considerable effort to improving the efficiency of structured data capture from text, and some hope that natural language processing (NLP) will obviate the need for structured data entry, but advances have been incremental. While there is progress in focused areas, information extraction from clinical texts is notoriously difficult. Some of the reasons are that reports are ungrammatical, contain short phrases and nonstandardized and overloaded abbreviations, and make abundant use of negations and lists. Structured reporting, where the text is generated automatically and the physician simply enters keywords and short pieces of text, would be a great advance, but is currently not the standard, in part because it is typically more time-consuming for the provider. Another issue is that the structured data entered by providers or extracted from text need to be represented such that the data can be "understood" by a computer; in other words, the healthcare system needs to be able to communicate effectively and in the same formalized language. Some languages are essentially simple taxonomies and vocabularies and are the basis for standards used in the billing process, such as ICD for diagnoses, CPT© for procedures, and SNOMED codes for diseases or conditions. For medications, there is the National Library of Medicine's RxNorm, the National Drug Code (NDC), and others. Logical Observation Identifiers Names and Codes (LOINC©) defines universal standards for identifying medical laboratory and clinical observations. For billing purposes, all involved players are highly motivated to employ the codes with great discipline. Implied statements in general take on simple forms, like "Patient X has Disease Y." This changes if one wants to express some detailed medical finding accurately. Consider the phrases "43 yo female with history of GERD woke up w/ SOB and LUE discomfort 1 day PTA. She presented to [**Hospital2 72**] where she was ruled out for MI by enzymes. She underwent stress test the following day at [**Hospital2 72**]. She developed SOB and shoulder pain during the test." In order to utilize the information represented in this text, an application would first need to map and code the entities in the phrases and then formulate statements relating the complex sequential observations, with many subtle phrases that only make sense to a trained expert. This goes far beyond the expressiveness of currently used medical formal languages. Genomic, proteomic, and other molecular data (discussed more fully in Section VII), which are almost by their nature digital, will add an extensive amount and variety of structured data, though in current practice an extremely limited subset derived from the molecular data will be all that is necessary for a particular application.
|
Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> E. Clinical Data Integration Efforts <s> Informatics for Integrating Biology and the Bedside (i2b2) is one of seven projects sponsored by the NIH Roadmap National Centers for Biomedical Computing (http://www.ncbcs.org). Its mission is to provide clinical investigators with the tools necessary to integrate medical record and clinical research data in the genomics age, a software suite to construct and integrate the modern clinical research chart. i2b2 software may be used by an enterprise’s research community to find sets of interesting patients from electronic patient medical record data, while preserving patient privacy through a query tool interface. Project-specific mini-databases (“data marts”) can be created from these sets to make highly detailed data available on these specific patients to the investigators on the i2b2 platform, as reviewed and restricted by the Institutional Review Board. The current version of this software has been released into the public domain and is available at the URL: http://www.i2b2.org/software. <s> BIB001 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> E. Clinical Data Integration Efforts <s> tranSMART is an emerging global open source public private partnership community developing a comprehensive informatics-based analysis and data-sharing cloud platform for clinical and translational research. The tranSMART consortium includes pharmaceutical and other companies, not-for-profits, academic entities, patient advocacy groups, and government stakeholders. The tranSMART value proposition relies on the concept that the global community of users, developers, and stakeholders are the best source of innovation for applications and for useful data. Continued development and use of the tranSMART platform will create a means to enable “pre-competitive” data sharing broadly, saving money and, potentially accelerating research translation to cures. Significant transformative effects of tranSMART includes 1) allowing for all its user community to benefit from experts globally, 2) capturing the best of innovation in analytic tools, 3) a growing ‘big data’ resource, 4) convergent standards, and 5) new informatics-enabled translational science in the pharma, academic, and not-for-profit sectors. <s> BIB002
|
Some providers may have implemented a separate research data system such as i2b2 BIB001 or tranSMART BIB002 . These systems extract clinically relevant information from the EHR and from other clinical resources and databases and integrate it into the research database. A research database can be a great resource for data analytics projects. Unfortunately, installing a research database can be extremely demanding since it needs to access data that sit in the data silos of the different departments. As discussed, these databases might all have different structures and use different terminologies. In contrast to clinical data, billing data (in part because of their simplicity and in part out of necessity) are consistently structured and are often part of a research database. Unfortunately, these data do not contain much of the clinically relevant information and may not accurately and fully reflect clinical reality. Providers may not be as careful in recording administrative data, believing it is not critical for these data to be exactly correct; in some cases, billing data may even be coded to maximize reimbursement rather than to most accurately reflect the patient's clinical status. Another important issue is that the temporal order of events is often not well documented in the data. To analyze the causal effects of a decision and to optimize decisions, it is important to know which information was available to the decision maker at the time of the decision. With the current state of documentation, reconstructing the temporal order of events can be difficult.
|
Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> D. Indiana Network for Patient Care <s> BACKGROUND ::: There is great variation in choices of method and specific analytical details in epidemiological studies, resulting in widely varying results even when studying the same drug and outcome in the same database. Not only does this variation undermine the credibility of the research but it limits our ability to improve the methods. ::: ::: ::: METHODS ::: In order to evaluate the performance of methods and analysis choices we used standard references and a literature review to identify 164 positive controls (drug-outcome pairs believed to represent true adverse drug reactions), and 234 negative controls (drug-outcome pairs for which we have confidence there is no direct causal relationship). We tested 3,748 unique analyses (methods in combination with specific analysis choices) that represent the full range of approaches to adjusting for confounding in five large observational datasets on these controls. We also evaluated the impact of increasingly specific outcome definitions, and performed a replication study in six additional datasets. We characterized the performance of each method using the area under the receiver operator curve (AUC), bias, and coverage probability. In addition, we developed simulated datasets that closely matched the characteristics of the observational datasets into which we inserted data consistent with known drug-outcome relationships in order to measure the accuracy of estimates generated by the analyses. ::: ::: ::: DISCUSSION ::: We expect the results of this systematic, empirical evaluation of the performance of these analyses across a moderate range of outcomes and databases to provide important insights into the methods used in epidemiological studies and to increase the consistency with which methods are applied, thereby increasing the confidence in results and our ability to systematically improve our approaches. <s> BIB001 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> D. Indiana Network for Patient Care <s> The vision of creating accessible, reliable clinical evidence by accessing the clincial experience of hundreds of millions of patients across the globe is a reality. Observational Health Data Sciences and Informatics (OHDSI) has built on learnings from the Observational Medical Outcomes Partnership to turn methods research and insights into a suite of applications and exploration tools that move the field closer to the ultimate goal of generating evidence about all aspects of healthcare to serve the needs of patients, clinicians and all other decision-makers around the world. <s> BIB002
|
The Regenstrief Institute was an early advocate for clinical data interoperability based on information standards and leveraged that work to enable HIE both regionally and nationally. Regenstrief investigators implemented the Indianapolis Network for Patient Care (INPC) in 1995 with the goal of providing clinicians with data necessary for patient diagnosis and treatment at the point of care. In 2016, over 100 hospitals, thousands of physician practices, ambulance services, large local and the state public health departments, regional laboratories and imaging centers, and payors participated in the INPC. The federated data repository stores more than 4.7 billion records, including over 118 million text reports from almost 15 million unique patients. The data are stored in a standard format, with standardized demographic codes; laboratory test results are mapped to a set of common test codes with standard units of measure; medications, diagnoses, imaging studies, and report types are also mapped to standard terminologies. The flows of data that enable the INPC support results delivery, public health surveillance, results retrieval, quality improvement, research, and other services. Building on this experience, Regenstrief investigators have informed the development of the nationwide health information network program now called the eHealth Exchange ("Exchange"). The INPC data have been utilized by Regenstrief for many big data studies and projects including the following.
• The Observational Medical Outcomes Partnership (OMOP) BIB001 and the subsequent Observational Health Data Sciences and Informatics (OHDSI) BIB002 projects to utilize large-scale observational data for drug safety studies.
• The two projects were a basis for ConvergeHEALTH, an effort spearheaded by Deloitte that aims to offer comprehensive data sharing among key organizations. Deloitte has an analytics platform that allows hospital systems to compare results with tools designed to study certain patient outcomes: their OutcomesMiner tool helps users explore real-world outcomes for subpopulations of interest.
• The Merck-Regenstrief Institute "Big Data" Partnership Academic-Industry Collaboration to Support Personalized Medicine was formed in 2012 to leverage the INPC to support a range of research studies that use clinical data to inform personalized healthcare. The partnership has funded 50 projects to date. Industry commentators have observed that such partnerships between industry and academia, and between and among other payers, are essential as neither sector alone can undertake such projects.
• The Indiana Health Information Exchange, a nonprofit organization created to sustain the INPC's operations, entered into a partnership agreement with a commercial predictive analytics company, Predixion, to develop new predictive applications aimed at further supporting the patient and business needs of ACOs and hospitals. The INPC database supports Predixion's current and future solution development.
|
Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> E. Clinical Data Intelligence <s> The amount of data in our world has been exploding, and analyzing large data sets—so-called big data— will become a key basis of competition, underpinning new waves of productivity growth, innovation, and consumer surplus, according to research by MGI and McKinsey's Business Technology Office. Leaders in every sector will have to grapple with the implications of big data, not just a few data-oriented managers. The increasing volume and detail of information captured by enterprises, the rise of multimedia, social media, and the Internet of Things will fuel exponential growth in data for the foreseeable future. <s> BIB001 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> E. Clinical Data Intelligence <s> This article is about a new project that combines clinical data intelligence and smart data. It provides an introduction to the “Klinische Datenintelligenz” (KDI) project which is founded by the Federal Ministry for Economic Affairs and Energy (BMWi); we transfer research and development results (R&D) of the analysis of data which are generated in the clinical routine in specific medical domain. We present the project structure and goals, how patient care should be improved, and the joint efforts of data and knowledge engineering, information extraction (from textual and other unstructured data), statistical machine learning, decision support, and their integration into special use cases moving towards individualised medicine. In particular, we describe some details of our medical use cases and cooperation with two major German university hospitals. <s> BIB002 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> E. Clinical Data Intelligence <s> As a result of the recent trend towards digitization-- which increasingly affects evidence-based medicine, accountable care, personalized medicine, and medical "Big Data" analysis --growing amounts of clinical data are becoming available for analysis. In this paper, we follow the idea that one can model clinical processes based on clinical data, which can then be the basis for many useful applications. We model the whole clinical evolution of each individual patient, which is composed of thousands of events such as ordered tests, lab results and diagnoses. Specifically, we base our work on a dataset provided by the Charite University Hospital of Berlin which is composed of patients that suffered from kidney failure and either obtained an organ transplant or are still waiting for one. These patients face a lifelong treatment and periodic visits to the clinic. Our goal is to develop a system to predict the sequence of events recorded in the electronic medical record of each patient, and thus to develop the basis for a future clinical decision support system. For modelling, we use machine learning approaches which are based on a combination of the embedding of entities and events in a multidimensional latent space, in combination with Neural Network predictive models. Similar approaches have been highly successful in statistical models for recommendation systems, language models, and knowledge graphs. We extend existing embedding models to the clinical domain, in particular with respect to temporal sequences, long-term memories and personalization. 
We compare the performance of our proposed models with standard approaches such as K-nearest neighbors method, Naive Bayes classifier and Logistic Regression, and obtained favorable results with our proposed model. <s> BIB003 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> E. Clinical Data Intelligence <s> In clinical data sets we often find static information (e.g. patient gender, blood type, etc.) combined with sequences of data that are recorded during multiple hospital visits (e.g. medications prescribed, tests performed, etc.). Recurrent Neural Networks (RNNs) have proven to be very successful for modelling sequences of data in many areas of Machine Learning. In this work we present an approach based on RNNs, specifically designed for the clinical domain, that combines static and dynamic information in order to predict future events. We work with a database collected in the Charit\'{e} Hospital in Berlin that contains complete information concerning patients that underwent a kidney transplantation. After the transplantation three main endpoints can occur: rejection of the kidney, loss of the kidney and death of the patient. Our goal is to predict, based on information recorded in the Electronic Health Record of each patient, whether any of those endpoints will occur within the next six or twelve months after each visit to the clinic. We compared different types of RNNs that we developed for this work, with a model based on a Feedforward Neural Network and a Logistic Regression model. We found that the RNN that we developed based on Gated Recurrent Units provides the best performance for this task. We also used the same models for a second task, i.e., next event prediction, and found that here the model based on a Feedforward Neural Network outperformed the other models. Our hypothesis is that long-term dependencies are not as relevant in this task. <s> BIB004
|
Clinical Data Intelligence ("Klinische Datenintelligenz") is a German project funded by the German Ministry for Economic Affairs and Energy (BMWi) and involves two integrated care providers, i.e., the University Hospital Erlangen and the Charité Berlin, two globally acting companies, i.e., Siemens AG and Siemens Healthineers, and application and research centers from the University of Erlangen, the German Research Centre for Artificial Intelligence (DFKI), Fraunhofer, and Averbis BIB002 . The project puts particular emphasis on terminologies and ontologies, on metadata extraction from textual sources and radiological images, and on the integration of medical guidelines as a form of prior knowledge. As part of the project, a central research database has been installed which serves all research and application subprojects. The project also addresses business models and app infrastructures suitable for large-scale data analytics. The core functionalities are realized by an integrated learning and decision system (ILDS). The ILDS accesses all patient-specific data and provides analytics and predictive and prescriptive functionalities. The ILDS models and analyzes clinical decision processes by learning from the EHR's structured data such as diagnoses, procedures, and lab results. The ILDS also analyzes medical history, radiology, and pathology reports and includes guideline information. In addition, the ILDS considers genomic data, and molecular data in general, to explore personalized medicine in the context of other clinical data. The ILDS will immediately be able to make predictions about common practice of the form: "For a patient with properties and problems X, procedure Y is typically done (in your clinic system)." More difficult, since it involves a careful analysis of confounders, is a prescription of the form: "For a patient with properties and problems X, procedure Y is typically done (in your clinic system) but procedure Z will probably result in a better outcome." An important outcome of the project will be a set of requirements for clinical documentation that will enable more powerful data analytics in the future. For example, clinical outcome is not always well documented; readmission within a certain period of time (typically a month) is sometimes taken as a negative outcome. Alternatively, one might define a hospital stay of more than a certain number of days as a negative outcome, where the threshold is specific to the Diagnosis Related Group (DRG). In some cases, for example, after a kidney transplantation or mastectomy, the patient is closely observed, and outcome information is available, possibly over patient lifetime.
The ILDS partially uses deep learning (more specifically, recurrent neural networks) to model the sequential decision processes in clinics BIB003 BIB001 . Deep learning is one of the most exciting developments in machine learning in recent years, a field that attracts exceptional talent and has achieved stunning successes in a number of applications. One of the driving forces in deep learning is DeepMind, a London-based company owned by Google. DeepMind Health is a project in which U.K. NHS medical data are analyzed; the agreement gives DeepMind access to healthcare data on more than a million patients. The first outcome is the mobile app Streams, which presents timely information that helps nurses and doctors detect cases of acute kidney injury. Other notable commercial deep learning efforts with relevance to healthcare are Deep Genomics (http://www.deepgenomics.com/), Enlitic (http://www.enlitic.com/), and Atomwise (http://www.atomwise.com/). The project addresses two use cases in detail. The first concerns nephrology. Kidney diseases cause a significant financial burden for the healthcare system. The aim of this work is to systematically investigate drug-drug interactions (DDIs) and adverse drug reactions (ADRs) in patients after renal transplantation and to realize an integrated decision support system. The use case is particularly interesting since longitudinal data covering several decades are available and since outcome is usually reported. First ILDS results are reported in BIB003 and BIB004 . The second use case concerns breast cancer, the most common malignancy in women. Relevant events are screening, diagnosis, therapy, and follow-up care. Of special interest here are the determination of risk factors, the evaluation of the therapy, and the prediction of side effects.
|
Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> G. Comments on the Value of Big Data Studies <s> There is growing concern in the scientific community that many published scientific findings may represent spurious patterns that are not reproducible in independent data sets. A reason for this is that significance levels or confidence intervals are often applied to secondary variables or sub-samples within the trial, in addition to the primary hypotheses (multiple hypotheses). This problem is likely to be extensive for population-based surveys, in which epidemiological hypotheses are derived after seeing the data set (hypothesis fishing). We recommend a data-splitting procedure to counteract this methodological problem, in which one part of the data set is used for identifying hypotheses, and the other is used for hypothesis testing. The procedure is similar to two-stage analysis of microarray data. We illustrate the process using a real data set related to predictors of low back pain at 14-year follow-up in a population initially free of low back pain. “Widespreadness” of pain (pain reported in several other places than the low back) was a statistically significant predictor, while smoking was not, despite its strong association with low back pain in the first half of the data set. We argue that the application of data splitting, in which an independent party handles the data set, will achieve for epidemiological surveys what pre-registration has done for clinical studies. <s> BIB001 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> G. Comments on the Value of Big Data Studies <s> Big Data bring new opportunities to modern society and challenges to data scientists. On the one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottleneck, noise accumulation, spurious correlation, incidental endogeneity and measurement errors. These challenges are distinguished and require new computational and statistical paradigm. This paper gives overviews on the salient features of Big Data and how these features impact on paradigm change on statistical and computational methods as well as computing architectures. We also provide various new perspectives on the Big Data analysis and computation. In particular, we emphasize on the viability of the sparsest solution in high-confidence set and point out that exogenous assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions. <s> BIB002 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> G. Comments on the Value of Big Data Studies <s> ABSTRACTPrecision medicine aims to combine comprehensive data collected over time about an individual’s genetics, environment, and lifestyle, to advance disease understanding and interception, aid drug discovery, and ensure delivery of appropriate therapies. Considerable public and private resources have been deployed to harness the potential value of big data derived from electronic health records, ‘omics technologies, imaging, and mobile health in advancing these goals. 
While both technical and sociopolitical challenges in implementation remain, we believe that consolidating these data into comprehensive and coherent bodies will aid in transforming healthcare. Overcoming these challenges will see the effective, efficient, and secure use of big data disrupt the practice of medicine. It will have significant implications for drug discovery and development as well as in the provisioning, utilization and economics of health care delivery going forward; ultimately, it will enhance the quality of care for the... <s> BIB003
|
Often the goal of big data studies is to draw causal conclusions, e.g., on the effectiveness of a drug or on a possible disease cause, and one needs to consider the value of an observational big data study versus classical randomized controlled trials (RCTs). Prospective RCTs are often cited as the gold standard for evidence since, by careful study design, the effects of hidden confounders can be minimized. But RCTs also have their shortcomings, in particular due to the way patients are selected for a study and due to small sample sizes. RCTs are often done in relatively healthy, homogeneous groups of patients chosen to be healthy except for the condition of interest, free of common diseases like diabetes or high blood pressure, and neither extremely young nor old. If patients have several problems, treating these problems as if they were mutually independent might be harmful in general, and information on treatment-treatment interactions might not be easily assessable through RCTs. Also, the interplay between diseases like hypertension, high cholesterol, and depression might not become apparent in RCTs. Since patients are difficult to recruit in general, and the management of clinical studies is costly, sample sizes are often small. For the same reasons, findings need to be general and not personalized, and there are long delays until a result is certain and can become clinical practice. It has been suggested that patient-reported outcome measures are often better predictors of long-term prognosis. Nonrandomized, quasi-experimental studies are sometimes employed but provide less evidence than RCTs. Big data analyses, in contrast, consider data from a large variety of patients and can potentially draw conclusions from a much larger sample. They are based on the natural population of patients, and conclusions can be personalized. For instance, with depressed diabetic patients, one would want to compare hospitalization rates between those taking antidepressants and those who are not, to determine whether more patients should receive psychiatric treatment to help them manage their health. Currently, such studies involve great effort. In future big data healthcare, these questions could be answered by a simple database query. Big data analysis mostly concerns observational studies (cohort studies, case-control studies), whose conclusions are considered by some to be statistically less reliable. The main reason is that hidden confounders might produce correlations independent of a causal effect. Confounders are variables that influence both clinical decisions and, at the same time, the outcome. Multivariate models should be considered whose predictors contain all variables that were used in decision making. Unfortunately, some of these variables might not be available for analysis, such as patient symptoms and patient complaints, which are often not well documented. Data collection might introduce various forms of bias. One example is batch effects, which might occur when merging data from different institutions; batch effects can be addressed by careful statistical analysis BIB003 , BIB002 . It is still unclear whether physicians are ready to use evidence from big data. Generally accepted is the generation of novel hypotheses by big data studies, which are then clinically validated, although clinicians are critical of hypothesis fishing BIB001 . Of course, clinical studies are very expensive and would only be initiated with significant evidence from data and with the prospect of large benefits.
A desired and well-accepted outcome is the discovery of novel patient subgroups, based on risk of disease or response to therapy, using diagnostic tests enabling targeted therapy. This is the basis for precision medicine (see Section VII). For example, asthma is largely regarded as a single disease, and current treatment options tend to address its symptoms rather than its underlying cause. It is now accepted that asthma patients can be grouped according to patterns of differential gene expression and clinical phenotype, with group-specific therapies. A predictive or prescriptive analysis might output a prediction (e.g., prediction of some clinical end point), or a ranking or prioritization of treatments. In these cases, the output might have been calculated based on many patient dimensions, and this process might be difficult to interpret. Prioritization is currently still contrary to medical tradition, and it remains to be seen whether the medical profession will accept this aspect of a big data decision support system. It is important to understand why machine learning solutions typically work with many inputs. In a perfect situation, a diagnostic test can reveal the cause of a problem and the subsequent therapies solve the problem. In reality, even with all advances in diagnostics, we are often still very far from being able to completely describe the health status of an individual. Technically, the health status of a patient consists of many dimensions, and only some of these dimensions (e.g., some infections, some cancer types) can be inferred by specific diagnostic tests. In big data analysis, one is partially doing "new medicine," i.e., one might address problems from new disease subgroups or syndromes that cannot be detected unambiguously with existing diagnostic tests. Since the statistical model then implicitly needs to infer the latent causes from observed proxies, the models often become high-dimensional, and their predictions become difficult for humans to interpret, although predictive performance might be excellent. This is an effect observed in a multitude of predictive machine learning applications in and outside healthcare. The big data perspective is: if there are latent diseases, disease subgroups, or syndromes, they might be reflected by a large number of observed dimensions.
|
Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> B. Data Accessible to Payers: Billing Data <s> New reimbursement policies and pay-for-performance programs to reward providers for producing better outcomes are proliferating. Although electronic health record (EHR) systems could provide essential clinical data upon which to base quality measures, most metrics in use were derived from administrative claims data. We compared commonly used quality measures calculated from administrative data to those derived from clinical data in an EHR based on a random sample of 125 charts of Medicare patients with diabetes. Using standard definitions based on administrative data (which require two visits with an encounter diagnosis of diabetes during the measurement period), only 75% of diabetics determined by manually reviewing the EHR (the gold standard) were identified. In contrast, 97% of diabetics were identified using coded information in the EHR. The discrepancies in identified patients resulted in statistically significant differences in the quality measures for frequency of HbA1c testing, control of blood pressure, frequency of testing for urine protein, and frequency of eye exams for diabetic patients. New development of standardized quality measures should shift from claims-based measures to clinically based measures that can be derived from coded information in an EHR. Using data from EHRs will also leverage their clinical content without adding burden to the care process. <s> BIB001 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> B. Data Accessible to Payers: Billing Data <s> I. INTRODUCTION When Lind Weaver starting receiving collections demands for a foot amputation she never had, she assumed it was a clerical error.1 Unfortunately, the operation had been performed on someone pretending to be Weaver, causing Weaver's medical history to become entangled in the thief's.2 Media reports about identity theft show Weaver's experience is far from unique. For example, a Chicago man was arrested after using his friend's identity to obtain $350,000 worth of cardiovascular surgery at a local hospital.3 Hackers broke into the medical records of thousands of University of California students.4 A staff member left a laptop containing records of patients of a local AIDS clinic on Boston public transportation.5 Further opportunities for thieves lurk in every unshredded envelope, online transaction or credit card purchase. Breaches of financial data, often the result of hacking or theft or loss of sensitive computer equipment are routine fixtures of the news cycle.6 Consumers are encouraged to check their credit scores and monitor their accounts for any suspicious activity.7 In sum, we are being bombarded with warnings about the threat of identity theft. This media saturation focuses on the misuse of a data linked to a victim's identity to gain access to consumer credit tools such as credit cards and loans. Yet, medical identity theft, what Lind Weaver experienced, lurks in the background. Medical identity theft consists of the misuse of personal information to gain access to healthcare.8 A 2006 report by the Federal Trade Commission (FTC) estimated that there were at least 250,000 victims of medical identity theft for the period 2001-2006." 
The actual number is likely even higher.10 In a more recent survey of identity theft victims assisted in 2008 by the non-profit Identity Theft Resource Center, two thirds of the 100 victims surveyed reported being billed for medical services they did not receive.11 To some extent the emergence of medical identity theft is not surprising. First, healthcare providers are the largest compilers of personal data12 and are just as vulnerable to attack as the financial industry.13 Second, the high cost of health care creates an incentive to steal the identity of someone with insurance in order to obtain needed health care services, to further drug-seeking behavior, or to defraud third-party payers.14 In addition to financial harms such as being billed for services not rendered, medical identity theft can introduce inaccuracies into a victim's medical records, causing a cascade of clinical, insurance, and even reputational harms. Unlike victims of financial identity theft who can use the credit reporting system to recover from financial identity theft, victims of medical identity theft lack similar statutory resources, and there are few available private remedies. Further, structural and regulatory features of the healthcare system, including those governed by the Health Insurance Portability and Accountability Act of 1996 (HIPAA)15 make it extremely difficult for victims to discover and remedy the damage caused to their medical records by an identity thief. To put it simply, "[t]here is no single place individuals can go to locate and correct inaccurate medical information."16 Current regulatory focus on increasing privacy and security through technological improvement, such as the HITECH Act amendments to HIPAA and the push to develop electronic health records (EHRs) do nothing to address victims' access problems to their own medical records. Further there is no private incentive to develop resources for victims. Finally, new regulations requiring health care providers to prevent fraud and new data breach notification rules do not resolve the basic problem of access. This note will argue that, given the fragmented nature of the healthcare market, a new federal regulatory initiative modeled on what is available to victims of financial identity theft is necessary to give victims an effective means of protecting the integrity of their personal health records. … <s> BIB002 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> B. Data Accessible to Payers: Billing Data <s> The amount of data in our world has been exploding, and analyzing large data sets—so-called big data— will become a key basis of competition, underpinning new waves of productivity growth, innovation, and consumer surplus, according to research by MGI and McKinsey's Business Technology Office. Leaders in every sector will have to grapple with the implications of big data, not just a few data-oriented managers. The increasing volume and detail of information captured by enterprises, the rise of multimedia, social media, and the Internet of Things will fuel exponential growth in data for the foreseeable future. <s> BIB003
|
The most common situation where data leave the clinic is when claims are filed with a payer, e.g., a health insurer or a health plan. Depending on the particular reimbursement rules in place, payers see data of varying levels of detail, quality, and bias. Unfortunately, claims data may not fully reflect a patient's burden of illness BIB001 . While the appropriateness of billing data for clinical research is often debated, a great many studies have used these data to guide clinical care, policy, and reimbursement. Claims data provide a holistic view of the patient across providers for a specific period of time, and they permit a patient-centric view of health. Claims data also provide direct and indirect evidence of outcome, e.g., by analyzing readmissions, and inform on cost efficiencies and treatment quality across providers. Payer organizations are increasingly interested in better understanding their customers, in this case their patients. Surveys, questionnaires, call center data, and increasingly social media, including tweets and blogs, are analyzed to gain insights that improve quality of services and optimize offerings. A major concern is the detection and prevention of abuse and fraud. A 2011 McKinsey report stated that fighting healthcare fraud with big data analysis can be quite effective BIB003 . Healthcare fraud in the United States alone involves tens of billions of dollars of damages each year, and fighting fraud is one of the obvious activities for immediately reducing healthcare costs. Note that some forms of fraud not only harm the payer but also directly harm the patient (e.g., through unnecessary surgery) BIB002 . Naturally, there is a gray zone between charging for justified claims on the one side and abuse and fraud on the other. Certainly, billing for services never provided, e.g., for fictitious or deceased patients, is clearly fraud, but whether an expensive treatment was necessary in a given case might be debatable. Technical solutions focus on the detection of known fraud patterns, the prioritization of suspicious cases, and the identification of new forms of fraud. A more sophisticated approach uses statistical models of clinical pathways and best practices to detect abnormal claims (against the population) and analyzes suspicious temporal changes in charging patterns within the same provider. In addition, one can analyze different kinds of provider networks, where nodes are the providers and the links are common patients, analyzing homophily or "guilt by association" patterns. Another measure is the blacklisting of providers. Most commercial systems use a combination of different strategies. Despite all these efforts, and mostly due to the fragmentation in the system and a huge gray zone, it is estimated that only a small percentage of the fraud actually occurring is currently being detected.
|
Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> F. Incentive Programs <s> The burgeoning precision-medicine agenda focuses on detecting and curing disease at the individual level, but there are multiple contributors to the production of population health, and clinical intervention cannot remedy health inequities. <s> BIB001
|
The wording is dramatic: Some argue that healthcare is undergoing the most significant changes in its history, driven by the spiraling cost of care, shifting reimbursement models, and changing expectations of the consumer. Reforming the healthcare system to reduce the rate at which costs have been increasing while sustaining its current quality might be critical to many industrialized countries. An aging population and the emergence of new, more expensive treatments will accelerate this trend. It has been argued that by far the greatest savings could be achieved by population-wide healthier lifestyles, which would largely prevent cardiovascular diseases and chronic conditions such as diabetes. Chronic conditions account for an astounding 75% of healthcare costs in the United States. There is some hope that the proliferation of fitness and health apps might be greatly beneficial to population health (see Section VI). Population health management tries to improve the situation through measures such as a value-based reimbursement system, which causes providers to change the way they bill for care. The goal is to align incentives with quality and value. Instead of providers being paid by the number of visits and tests they order (fee for service), their payments are increasingly based on the value of care they deliver (value-based care). For those providers and healthcare systems that cannot achieve the required scores, the financial penalties and lower reimbursements will create a significant financial burden. An important instrument in the United States is the HITECH Act. It was enacted under the American Recovery and Reinvestment Act (ARRA) of 2009. Under the HITECH Act, the U.S. Department of Health and Human Services (HHS) is spending several tens of billions of U.S. dollars to promote and expand the adoption of health information technology to enable a nationwide network of EHRs. This can then be the basis for informed population health management and for improving healthcare quality, safety, and efficiency in general. The general goals are to improve care coordination, reduce healthcare disparities, engage patients and their families, and improve population and public health, while at the same time ensuring adequate privacy and security. The implementation is in three stages. An organization must prove to have successfully implemented and used a stage for a minimum period of time before being able to move to a higher stage. If stages are successfully reached, financial incentives in Medicaid and Medicare are paid. If stages are not reached, financial penalties can be implemented by both systems. In Stage 1, the participating institutions not only need to introduce an EHR but must also demonstrate its meaningful use. The core set of requirements includes the use of computerized order entry for medication orders, the implementation of drug-drug and drug-allergy checks, and the implementation of one clinical decision support rule. The protection of electronic health information (privacy and security) must also be demonstrated. Stage 2 introduces new requirements, such as demonstrating the ability to electronically exchange key clinical information between providers of care and patient-authorized entities. HIE (see Section IV-D) has emerged as a core capability for hospitals for Stage 2. Stage 3 of meaningful use is shaping up to be the most challenging and detailed level yet for healthcare providers.
Among the elements are additional quality reporting, clinical decision support, and security risk analysis. The Stage 3 rule lists clinical decision support as one of the eight key objectives. Unlike Stage 1, which required one clinical decision support rule, Stages 2 and 3 specifically require the use of five clinical decision support interventions. Although welcomed by many, there has also been criticism of HITECH related to the increased reporting burden and the focus on reporting requirements rather than on outcomes. The HITECH Act provides many opportunities for data mining and text mining, for example, in the development of certified tools that provide evidence that a provider is fulfilling the various meaningful use criteria. Other incentive programs have been put in place as well. The New York State Department of Health has instituted the Delivery System Reform Incentive Payment Program with the goal of transforming NY Medicaid healthcare delivery to reduce avoidable hospitalizations by 25%. More than $8 billion will be paid out in incentive and infrastructure payments to 25 Preferred Provider Systems (PPSs) provided they meet this ambitious goal in five years. Each of the 25 PPSs is a geographically local network of varying size (from 100+ to nearly 500+ members), including hospitals, physician practices, imaging centers, SNFs, rehabilitation facilities, and hospices, which would normally compete for patients but have voluntarily come together to form trusted health networks (i.e., a PPS). They have agreed to share patient data and coordinate patient care to improve patient care and experience through a more efficient, patient-centered, and coordinated system. The PPSs have "signed up" for different targeted programs (e.g., mental health, fetal-maternal health, diabetes, and pediatric asthma) depending on community health assessments they performed in their area. Although population health management might seem slow moving and bland compared to the more visible precision medicine initiatives, it has recently been argued that, given the current state of the art, the impact of the former might be dramatically greater BIB001 . Bayer and Galea write, "Looking at diabetes, precision medicine may help a few scattered patients in the right clinical trials to tackle their Type 1 diabetes, but it may not prevent the 28 percent of undiagnosed Type 2 diabetics from experiencing adverse effects from a lack of treatment the way a robust risk stratification and predictive analytics program might" BIB001 .
B. Analyzing Traces
Statistics on anonymized search query logs and traces in social media can be analyzed to inform public health, epidemiologists, and policy makers. Such analyses can support the early detection of epidemics, the analysis and modeling of the flow of illness, and other purposes BIB004. Infodemiology is a new term for the large-scale analysis of anonymized traces, which can potentially yield valuable results and insights, address public health challenges, and provide new avenues for scientific discovery BIB004. A widely discussed example is the analysis of search query logs as indicators for disease outbreaks. The idea is that social media and search logs might indicate an outbreak of an infectious disease like the flu immediately, including detailed temporal-spatial information about its spread; previously, such outbreaks might go unnoticed for days or even weeks (a minimal sketch of this idea is given at the end of this subsection). But models have proven difficult: Google Flu Trends, for example, predicted well initially, but the fit was very poor later BIB003. Another application is the detection of adverse drug reactions, which could be improved by jointly analyzing data from the U.S. Food and Drug Administration's Adverse Event Reporting System, anonymized search logs, and social media data BIB004. The analysis of patients' traces has increasing importance in pharmacovigilance, which concerns the collection, detection, assessment, monitoring, and prevention of adverse effects of pharmaceutical products. Still, there is little experience yet with the quality, reliability, and biases of data generated from web query logs and social network sites, and conclusions should be drawn with great caution BIB001, BIB002.

There is also a danger in these developments: the same traces, when reidentified, can be used to make inferences about unique individuals, including their health status. Many problems are associated, e.g., with social scoring in healthcare. BIB004 reports on a Twitter suicide prevention application called Good Samaritan that monitored individuals' tweets for words and phrases indicating a potential mental health crisis. The service was removed after increasing complaints about violations of privacy and imminent dangers of stalking and bullying. As pointed out in BIB004, health issues can also be inferred from seemingly unrelated traces: simply changing communication patterns on social networks and internet search might indicate a new mother at risk for postpartum depression. Another issue is that some companies are working together with analytics experts to track employees' search queries, medical claims, prescriptions, and even voting habits to get insight into their personal lives. Although HIPAA legislation forbids employers to view their employees' health information, this does not apply to third parties. A company which received public attention is Castlight, which gathers data on workers' medical information, such as who is considering pregnancy or who may need back surgery. Castlight's policy is to only inform and advise the individuals directly and to report only statistics to employers. These issues are increasingly addressed by regulators, e.g., in the United States by the Americans with Disabilities Act (ADA) and the Genetic Information Nondiscrimination Act (GINA).
Horvitz and Mulligan BIB004 point out the technical difficulties in protecting citizens against such violations in the face of powerful machine learning algorithms which can "jump categories": machine learning can enable inferences about health conditions from nonmedical data generated far outside the medical context BIB004.
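To make the outbreak-detection idea above concrete, the following minimal sketch flags weeks in which search-query volume for a symptom term spikes far above its recent history. The counts, window size, and z-score threshold are illustrative assumptions, not the method of Google Flu Trends or any deployed surveillance system.

```python
# Minimal sketch: flagging unusual spikes in search-query volume as a
# possible outbreak signal. The query counts below are synthetic; real
# systems (e.g., Google Flu Trends) used far richer models and still
# struggled with drift, as discussed above.
import statistics

weekly_counts = [120, 131, 118, 125, 140, 122, 129, 135, 127, 310]  # hypothetical

def outbreak_alerts(counts, window=8, z_threshold=3.0):
    """Flag weeks whose count deviates strongly from the trailing window."""
    alerts = []
    for i in range(window, len(counts)):
        history = counts[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        z = (counts[i] - mean) / stdev if stdev > 0 else 0.0
        if z > z_threshold:
            alerts.append((i, counts[i], round(z, 1)))
    return alerts

print(outbreak_alerts(weekly_counts))  # -> flags the final spike week
```

Even such a simple detector illustrates the drift problem noted above: if baseline query behavior shifts (media coverage, interface changes), the trailing window no longer reflects "normal" behavior and false alarms follow.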
C. PatientsLikeMe
An openly commercial social network initiative is PatientsLikeMe BIB002, with several hundred thousand patients using the platform and addressing more than a thousand diseases. The majority of users have neurological diseases such as ALS, multiple sclerosis, and Parkinson's, but PatientsLikeMe is also increasingly addressing AIDS and mood disorders BIB001. PatientsLikeMe is not merely a chat board with self-help news but also collects quantitative data. It has designed several detailed questionnaires which are circulated regularly to its members. For example, epileptics can enter their seizure information into a seizure monitor. It has a survey tool to measure how closely patients adhere to their treatment regimen, but it also scans language in the chat boards for alarming words and expressions. PatientsLikeMe offers a number of services; for example, it created a contrast sensitivity test together with the Massachusetts Eye and Ear Hospital for people with Parkinson's and the hallucinations that come with mood disorders.

The business model of PatientsLikeMe is not based on advertising. Instead, the company has based its business model on aligning patient interests with industry interests, i.e., accelerated clinical research, improved treatments, and better patient care. To achieve these goals, PatientsLikeMe sells aggregated, deidentified data to its partners, including pharmaceutical companies and medical device makers. In this way, PatientsLikeMe aims to help partners in the healthcare industry better understand the real-world experiences of patients as well as the real-world course of disease. Some of PatientsLikeMe's past and present partners include UCB, Novartis, Sanofi, Avanir Pharmaceuticals, and Acorda Therapeutics.
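The keyword scanning mentioned above can be illustrated with a small sketch. The phrase list, matching rule, and routing step below are hypothetical placeholders; PatientsLikeMe's actual screening method is not public, and a production system would need far more careful language understanding and human oversight.

```python
# Minimal sketch of keyword-based screening of forum posts for alarming
# language, loosely modeled on the scanning described above. The phrase
# list and the routing decision are illustrative assumptions only.
import re

ALARMING_PHRASES = ["can't go on", "end it all", "no way out"]  # hypothetical

def flag_post(text):
    """Return the alarming phrases found in a post (case-insensitive)."""
    return [p for p in ALARMING_PHRASES
            if re.search(re.escape(p), text, re.IGNORECASE)]

post = "Some days I feel like I can't go on."
hits = flag_post(post)
if hits:
    print("route to moderator for human review:", hits)
```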
VI. CONTINUOUS HEALTHCARE
With the tremendous technological progress and the prevalence of mobile devices, the disruptive potential of mobile health, and of technology-enabled care more generally, is frequently discussed BIB001. A new generation of affordable sensors is able to collect health data outside the clinic in unprecedented quality and quantity. This enables the transition from episodic healthcare, dominated by occasional encounters with healthcare providers, to continuous healthcare, i.e., health monitoring and care, potentially anytime and anywhere! Continuous healthcare certainly has the potential to create a shift in the current care continuum from treatment-based healthcare to a more prevention-based system. At first glance this seems like a distant goal, but many health problems can be prevented by a healthy lifestyle and the early detection of disease onset, in combination with early intervention. However, the full potential remains to be unlocked, as a 2012 Pew Research Center study about mobile health reveals: while about half of smartphone owners use their phone to look up health information, only one in five smartphone users owns a health app. Currently, this exciting field is in flux, and the opportunities, challenges, and crucial factors for widespread adoption are discussed in current research BIB002-BIB003.
A. Technological Basis
The technological basis of mHealth includes smart sensors, smart apps and devices, advanced telemedicine networks such as the optimized care network (http://www.optimizedcare.net/), and supporting software platforms. There is a broad range of new devices that have entered the market: smartphones, smart watches, smart wrist bands, smart headsets, and Google Glass, among others. In the future, patient consumers might use a number of different devices that measure a multitude of different signals: "headsets that measure brain activity, chest bands for cardiac monitoring, motion sensors for seniors living alone, remote glucose monitors for diabetes patients, and smart diapers to detect urinary tract infections" [11]. A Body Area Network (BAN) is another technological enabler, with sensors that measure physiological signals, physical activities, or environmental parameters and that come along with an internet-like infrastructure. BANs are, e.g., used to monitor cardiac patients and help to diagnose cardiac arrhythmias BIB001. Add-ons to mobile devices such as lab-on-a-chip technologies are particularly interesting and might represent a new form of point-of-care devices. Laksanasopin et al. BIB002 present a laboratory-quality immunoassay that can be run on a smartphone accessory, and Knowlton et al. BIB003 present a 3-D printed attachment for a smartphone for the detection of sickle cell disease. Especially for developing countries with a limited infrastructure, the potential of such technologies is tremendous.

From an engineering perspective, continuous healthcare is related to condition monitoring and predictive maintenance, enabled by smart sensors, connectivity, and analytics, a combination often referred to as the Internet of Things (IoT). By measuring and aggregating the signals of many different persons, machine learning algorithms can be trained to detect, e.g., anomalies and unexpected correlations that might generate new insights. Open source initiatives such as the Open mHealth initiative are important enablers that could pave the way to overcoming the data integration challenge.
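As a rough illustration of the "sensors plus analytics" pattern described above, the following sketch trains an off-the-shelf anomaly detector on simple daily features aggregated from wearables. The features, data, and model choice (scikit-learn's IsolationForest) are assumptions for illustration, not a description of any deployed BAN or IoT pipeline.

```python
# Minimal sketch: training an anomaly detector on simple per-day features
# (resting heart rate, step count) aggregated from many wearers. Data and
# features are synthetic placeholders; a real BAN pipeline would use far
# richer physiological signals and careful per-person baselining.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# hypothetical training data: [resting_heart_rate_bpm, steps_per_day]
normal_days = np.column_stack([
    rng.normal(62, 5, 500),       # resting heart rate
    rng.normal(8000, 2000, 500),  # daily steps
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_days)

new_days = np.array([[64, 7500],   # unremarkable day
                     [95, 600]])   # elevated heart rate, little movement
print(detector.predict(new_days))  # 1 = normal, -1 = flagged as anomalous
```

IsolationForest is used here only because it needs no labeled anomalies; any density-based detector would do for this sketch.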
B. Use Case Types
1) Disease Prevention: Smartphones are increasingly being used for measuring, managing, and displaying health and lifestyle related parameters such as weight, physical activity, smoking, and diabetes, among others. Improving the lifestyle and fitness of the general population has the potential to reduce healthcare costs dramatically, and thus this type of health monitoring might have a dramatic positive impact on both population health and healthcare cost. In a recent statement, the American Heart Association (AHA) reviewed the current use of mobile technologies to reduce cardiovascular disease (CVD) risk behavior. CVD continues to be the leading cause of death, disability, and high healthcare costs and is thus a prime example for investigating the potential of mHealth technologies. The work investigates different tools available to consumers to prevent CVD, ranging from text messages (e.g., smoking cessation support) to wearable sensors and other smartphone applications. While more evidence and studies are needed, it appears that mHealth in CVD prevention is promising. The AHA strongly encourages more research.

2) Early Detection: Many diseases can be treated best when discovered early, before they cause serious health consequences. Early detection can happen at the population level or at the individual level. Collins BIB002 highlights an early warning system for disease outbreaks caused by illness-related parameters such as environmental exposure or infectious agents. On the individual level, the previously mentioned BAN is a major enabler for the early detection of abnormalities. So-called smart alarms can be understood as another form of early detection on the individual level. Smart alarms cover a range of applications, are especially relevant to the elderly, and monitor heart activity and breathing as well as potential falls BIB001. The company AliveCor (http://www.alivecor.com/) is offering a mobile ECG that is attached to a mobile device (either a smartphone or a tablet). The attached device creates an ECG that is then recorded via an app. The mobile ECG is cleared by the FDA and can also detect atrial fibrillation, a leading cause of mortality and morbidity (a crude sketch of such an irregularity check is given at the end of this subsection). AliveCor states that the device has been used to record over five million ECGs. These data are then the basis for training an anomaly detection algorithm.

3) Disease Management: Healthcare costs can be reduced when the patient can be monitored at home instead of in the clinic and if physicians can optimize care without the need to call in the patients for a medical visit. Some hospitals and clinics collect continuous data on various health parameters as part of research studies [11]. Especially the management of chronic diseases can benefit from continuous healthcare. In a recent review, Hamine et al. systematically screen for randomized clinical trials that give evidence about better treatment adherence when using mHealth technologies. The types of applications range from simple SMS services to video messaging with smartphones and other wireless devices. They conclude that there is, without doubt, high potential for these technologies but, as the evidence in the trials was mixed, further research is needed to improve usability, feasibility, and acceptability.

4) Support of Translational Research: With hundreds of millions of smartphones in use around the world, the way patients are recruited to participate in clinical studies might change dramatically.
In the future, patients might be able to decide themselves if they want to participate in a medical study, and they might be able to specify how their data are used and shared with others. Major research institutions have already developed apps for studies involving asthma, breast cancer, cardiovascular disease, diabetes, and Parkinson's disease. One interesting use case is the control of disease endpoints in clinical trials with mHealth technologies. As a concrete example, Roche developed an app to control or measure the clinical endpoints of Parkinson's disease (http://www.roche.com/media/store/roche_stories/rochestories-2015-08-10.htm). The app, which complements the traditional physician-led assessment, is currently used in a Phase I trial to measure disease and symptom severity in a continuous way. The app is based on the Unified Parkinson's Disease Rating Scale (UPDRS), which is the traditional measurement for disease and symptom severity. The test, which takes about 30 s, investigates six endpoint-relevant parameters: a voice test, a balance test, a gait test, a dexterity test, a rest tremor test, and a postural tremor test. The Clinical Trials Transformation Initiative (http://www.ctti-clinicaltrials.org/), an association representing diverse stakeholders along the clinical trial space, works on the next generation of clinical trials. Recently, the initiative has launched a mobile clinical trials program to investigate how mobile technologies and other off-site remote technologies can further facilitate clinical trials.
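To illustrate the early-detection use case, the crude sketch below screens a single recording for the irregular beat-to-beat (RR) intervals characteristic of atrial fibrillation, using the coefficient of variation of the RR series. The threshold and sample values are illustrative assumptions; AliveCor's FDA-cleared algorithm is proprietary and considerably more sophisticated.

```python
# Minimal sketch of an RR-interval irregularity screen. Atrial fibrillation
# produces erratic beat-to-beat intervals, so a crude screen can flag
# recordings whose RR coefficient of variation (CV) is high. Threshold and
# data are illustrative only.
import statistics

def rr_irregularity(rr_intervals_ms, cv_threshold=0.15):
    """Flag a recording whose RR coefficient of variation is high."""
    mean = statistics.mean(rr_intervals_ms)
    cv = statistics.stdev(rr_intervals_ms) / mean
    return cv > cv_threshold, round(cv, 3)

regular = [810, 795, 802, 808, 798, 805, 800, 803]      # steady rhythm
irregular = [620, 940, 710, 1050, 580, 880, 760, 990]   # erratic rhythm

print(rr_irregularity(regular))    # (False, low CV)
print(rr_irregularity(irregular))  # (True, high CV)
```

In practice, such a screen might run on-device over a short recording and route flagged traces to a clinician for confirmation rather than issue a diagnosis itself.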
E. Regulatory Implications
The continuous healthcare ecosystem has brought together stakeholders that were previously more or less unconnected and now have to interact. For instance, in the United States, certain app developers suddenly have to deal with premarket notification or so-called 510(k) clearance processes of the FDA. The driving question here is which types of mHealth applications fall under the FDA's jurisdiction over medical devices. Indeed, different classifications of "mobile medical applications" according to FDA guidance now exist, but they do not appear to be finalized yet. While it is the traditional responsibility of the FDA to oversee the safety and effectiveness of medical devices (also including certain types of mobile apps), some politicians and industry representatives are afraid that innovation is hampered by regulatory oversight. However, first warning letters had to be sent out to doctors where mobile medical apps showed unexpected behavior; another case revealed that about 52 adverse event reports were generated for one specific diabetes app within two years BIB001. Clearly, further intensive dialog between stakeholders is needed. Hamel et al. BIB001 describe in detail the challenges that come along with the regulation of mHealth technologies and provide potential alternative regulatory scenarios.
VII. GETTING PERSONAL

A. Precision Medicine Is Changing Healthcare
Maximizing the positive effect of a healthcare intervention while concurrently minimizing adverse side effects has always been the dream of individualized healthcare. Over the last decades it became clear that this goal could not be achieved with insights from conventional studies alone, which have been focusing on empirical intervention efficacy and side effects in large patient study groups. The reason is that, due to the biological diversity of individuals, environment, and pathogenesis, any incident of a complex disease is like no other. Precision medicine, personalized medicine, individualized medicine, and stratified medicine (terms we will use interchangeably) all refer to the grouping of patients based on the risk of disease, or response to therapy, using diagnostic tests. Precision medicine refers to the idea of customizing healthcare, with medical decisions, practices, and procedures being tailored to a patient group. In its most extreme interpretation, this leads to the "n = 1" principle, meaning that therapy should be tailored to the patient's individual characteristics, sometimes referred to as the "unique disease principle" BIB002.

Without question, the most important milestone for the realization of personalized medicine was the publication of the reference sequence of the human genome about 15 years ago BIB001. In the following years, the patient's genomic profile, supplemented with other molecular and cellular data, became the basis for dramatic progress in the understanding of the molecular basis of disease. The impact of this knowledge is not limited to research: as new analytical methods like next-generation sequencing (NGS) and new proteomic platforms bring costs down, molecular data will increasingly become part of clinical practice. The main goal is to link the generated data to clinically actionable information. With growing data, increasingly complex phenomena, even those with weak associations, can be discovered and validated. As a matter of fact, research and clinical applications go along with a huge increase in the volume and variety of data available to characterize physiology and pathophysiology. Genome-wide association studies (GWASs) with more than a million attributes collected from up to several thousand individuals are good examples. The vision of real-time personalized healthcare is the rapid and real-time analysis of biomaterials obtained from patients, based on the newest research results, in a network of research labs and clinics. The insights into the biological causes of disease might lead to a more meaningful categorization of disease, at some point in the future replacing medical codes, which were mostly developed based on clinical phenotyping. By far the greatest efforts in precision medicine have been devoted to cancer (oncology), but precision medicine is becoming increasingly relevant to other medical domains, e.g., the central nervous system (e.g., Alzheimer's and depression), immunology/transplant, prenatal medicine, pediatrics, asthma, infectious diseases, and CVD.
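As a small illustration of the association testing underlying a GWAS, the sketch below applies a chi-square test to case-versus-control allele counts for each SNP and compares the p-value against the conventional genome-wide significance level of 5e-8, which accounts for the roughly one million tests performed. The SNP identifiers and counts are synthetic; real analyses additionally handle quality control, population structure, and covariates.

```python
# Minimal sketch of per-SNP association testing in a GWAS: compare allele
# counts in cases vs. controls with a chi-square test and apply a strict
# genome-wide significance threshold. Counts below are synthetic.
from scipy.stats import chi2_contingency

# hypothetical allele counts per SNP: [[case_A, case_a], [control_A, control_a]]
snps = {
    "rs0000001": [[480, 520], [510, 490]],  # no real signal
    "rs0000002": [[650, 350], [500, 500]],  # allele enriched in cases
}

GENOME_WIDE_ALPHA = 5e-8  # conventional genome-wide significance level

for snp, table in snps.items():
    chi2, p, dof, _ = chi2_contingency(table)
    status = "genome-wide significant" if p < GENOME_WIDE_ALPHA else "not significant"
    print(f"{snp}: chi2={chi2:.1f}, p={p:.2e} ({status})")
```

With roughly a million SNPs tested per study, even this toy loop makes clear why multiple-testing correction dominates GWAS statistics.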
B. Understanding Disease on a Molecular Level
In the last decades, a lot of attention has been focused on understanding the genetic causes of disease. Monogenetic disorders with a high penetrance have been linked to mutations of single inherited genes, and the causative genes of most monogenic genetic disorders have now been identified BIB007. Monogenetic diseases are relatively rare, and attention has shifted largely to complex diseases: most common diseases, including most forms of cancer, are based on an interaction of several factors, including a number of inherited genetic variations, one or several mutations acquired during cell lifetime, as well as environmental factors. Consider, for example, that worldwide approximately 18% of cancers are related to infectious diseases BIB004. Due to the complex interplay of several factors, these diseases show what has been termed "missing heritability."

Insights into inherited genetic cell disorders are obtained from germline DNA, typically obtained from blood cells. GWASs examine the correlation between germline genetic variations and common phenotypic characteristics, such as breast cancer BIB005. With the establishment of next-generation sequencing (NGS), the whole genome might in the future be decoded for costs on the order of a few hundred U.S. dollars, and this will make genome analysis much more common. Eventually, the increasing use of genome sequencing will lead to better insights into which diseases can be explained by genetic variance and could revolutionize molecular medicine for some diseases. The likelihood of a person developing a disease in their lifetime can sometimes be predicted from germline DNA profiles, permitting early intervention and possibly preventing the onset of the disease.

Additional genetic variations of interest are those acquired during the lifetime of somatic cells, which comprise all cells that form an organism's body, excluding the germ cells. As genetic alterations accumulate, a somatic cell can turn into a malignant cell and form a cancerous tumor. Genetic profiles (mutations and amplifications) of somatic cancer cells are obtained from analyses of tumor biopsies. Their distinct mutations and gene amplification patterns are linked to many clinically relevant characteristics, such as prognosis or therapy response BIB006. In some cases, the tumor is easily accessible; however, in other cases, like tumors or metastases of certain organs (e.g., brain, liver, lung), a biopsy is not standard of care. In those cancer patients, access to the material from which the genomic information could be obtained is difficult. Recently, novel methods have been developed that permit the analysis of alternative sources of tumor material, such as circulating tumor cells (CTCs). These are cancer cells that have shed into the blood stream from a primary tumor. CTCs can constitute seeds for the subsequent growth of additional tumors (metastases) in distant organs, triggering a mechanism that is responsible for the vast majority of cancer-related deaths. Thus, CTC analysis could be considered a "liquid biopsy." Circulating tumor DNA (ctDNA) was also found to resemble the tumor's genomic profile, being useful for cancer detection and the prediction of therapy efficacy BIB008.

So far we have been focusing on DNA. The transcription of RNA from DNA is called gene expression. This step plays a crucial functional role, because RNA is translated directly into functional proteins. Furthermore, RNA has regulatory functions, of which many are not yet understood.
In some cancers, such as breast cancer, the expression of some genes has already been proven to be of great clinical relevance. Even genomewide gene expression analyses are becoming available to characterize cancer diseases BIB003 . Transcriptomics is the study of transcriptomes (RNA molecules, including mRNA, miRNA, rRNA, tRNA, and other noncoding RNA), their structures, and functions. DNA microarrays (which, despite their name, really test for RNA) and RNA-seq (RNA sequencing) can reveal a snapshot of RNA presence and quantify cellular activities at a given moment in time. Whereas the genome contains the code, the proteins are the body's functional worker molecules. Several methods like immunohistochemistry and enzyme-linked immunosorbent assays (ELISA; a test that uses antibodies and a color change to identify a substance, see http://www.genome.gov/27541319) are used in clinical practice for protein analysis. In research, and recently also in clinical tests, mass spectrometry is used to determine many proteins in a tissue, opening this field for high-throughput and big data approaches . Increasingly also protein microarrays are used as a high-throughput method to track the interactions and activities of many proteins at a time. While the transformation of genetic information into functional proteins is recognized as being clinically highly relevant, the clinical relevance of other "omics" fields is still under investigation. Epigenomics, metabolomics, and lipidomics are three further levels of systems biology which might be unraveled by big data analyses. Epigenetic changes modify genes on a molecular level, such that expression is altered; the effects of these modifications are still largely unclear. Metabolomics concerns the chemical fingerprints that specific cellular processes leave behind, in particular, the study of their small-molecule metabolites. Lipidomics focuses on cellular lipids, including the modifications made to a particular set of lipids, produced by an organism or a system. The environment further increases the number of possible interactions that play a role in the etiology (i.e., disease cause) and pathogenesis of a disease. The exposome encompasses the totality of human environmental (i.e., nongenetic) exposures from conception onwards, complementing the genome. For example, scientists believe that, for most people, Alzheimer's disease results from a combination of genetic, lifestyle, and environmental factors that affect the brain over time (see http://www.mayoclinic.org/diseases-conditions/alzheimers-disease/basics/causes/con-20023871). Only in less than 5% of cases, Alzheimer's is caused by specific genetic changes that, by themselves, virtually guarantee a person will develop the disease. As a medical field, molecular medicine is concerned with the molecular and genetic problems that lead to diseases and with the development of molecular interventions to correct them. A better understanding of the underlying molecular mechanisms of diseases can lead to great advances in diagnostics and therapy. In particular, cancer subgroups can be determined by omics profiles and the most effective treatment with the smallest adverse effects can be determined for each subgroup. This concept is at the center of precision medicine. To give insight into what is clinically relevant today, consider the concrete example of breast cancer. 
Molecular techniques have changed our understanding of the basic biology of breast cancer and provide the foundation for new methods of "personalized" prognostic and predictive testing. Several molecular markers are already established in clinical practice, such as the high-penetrance breast cancer causing genes (BRCA1 and BRCA2) BIB002 . Also the characterization of the tumor is driven by molecular markers such as the estrogen receptor, the progesterone receptor, and a genetic alteration, the HER2 amplification BIB001 . Since the biological signals of those markers are quite strong, they were already discovered in the 1990s, even before high-throughput molecular analysis became a reality. Now, more than 15 years after the primary publication of the human genome, many levels of biology (DNA, RNA, protein, epigenetics, miRNA, etc.) can be analyzed at relatively low cost, revealing detailed and comprehensive insight into the biology of a cell, including single gene functions and pathways as an interaction of whole groups of proteins and regulatory mechanisms. A particular role in understanding breast cancer on the molecular level is played by the efforts around "The Cancer Genome Atlas" (TCGA). It was one of the first Big Data efforts that compared the genetic information of the tumor with the genetic information of the blood on a large scale for each single one of the three billion base pairs. See also Section VII-E. This project could, for the first time, systematically describe which genes mutate in the course of the pathogenesis from a healthy mammary cell to a breast cancer cell BIB006 .
|
Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> C. Molecular Diagnostics and Drug Therapy <s> Pharmacogenetics encompasses the involvement of genes in an individual's response to drugs. As such, the field covers a vast area including basic drug discovery research, the genetic basis of pharmacokinetics and pharmacodynamics, new drug development, patient genetic testing and clinical patient management. Ultimately, the goal of pharmacogenetics is to predict a patient's genetic response to a specific drug as a means of delivering the best possible medical treatment. By predicting the drug response of an individual, it will be possible to increase the success of therapies and reduce the incidence of adverse side effects. <s> BIB001 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> C. Molecular Diagnostics and Drug Therapy <s> New drug development costs between 500 million and 2 billion dollars and takes 10-15 years, with a success rate of less than 10%. Drug repurposing is the process of discovering new indications for existing drugs and is becoming an important component of drug development as success rates for novel drugs in clinical trials decrease and costs increase. In the period 2007-2009, drug repurposing led to the launching of 30-40% of new drugs. Typically, a new indication for an available drug is identified by accident. However, new technologies and a huge amount of available resources enable us to develop systematic approaches to identify and validate drug repurposing candidates with significantly lower cost. A variety of resources have been utilized to identify novel drug repurposing candidates such as biomedical literature, clinical notes, and genetic data. In this study, we plan to 1) assess the usability and usefulness of new resources, specifically social media and phenome-wide association studies in drug repurposing, and 2) improve some previously proposed approaches by investigating more accurate methods to prioritize and rank the generated drug repurposing candidates by literature-based discovery. <s> BIB002 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> C. Molecular Diagnostics and Drug Therapy <s> BACKGROUND: BRAF V600 mutations occur in various nonmelanoma cancers. We undertook a histology-independent phase 2 "basket" study of vemurafenib in BRAF V600 mutation-positive nonmelanoma cancers. METHODS: We enrolled patients in six prespecified cancer cohorts; patients with all other tumor types were enrolled in a seventh cohort. A total of 122 patients with BRAF V600 mutation-positive cancer were treated, including 27 patients with colorectal cancer who received vemurafenib and cetuximab. The primary end point was the response rate; secondary end points included progression-free and overall survival. RESULTS: In the cohort with non-small-cell lung cancer, the response rate was 42% (95% confidence interval [CI], 20 to 67) and median progression-free survival was 7.3 months (95% CI, 3.5 to 10.8). In the cohort with Erdheim-Chester disease or Langerhans'-cell histiocytosis, the response rate was 43% (95% CI, 18 to 71); the median treatment duration was 5.9 months (range, 0.6 to 18.6), and no patients had disease progression during therapy. 
There were anecdotal responses among patients with pleomorphic xanthoastrocytoma, anaplastic thyroid cancer, cholangiocarcinoma, salivary-duct cancer, ovarian cancer, and clear-cell sarcoma and among patients with colorectal cancer who received vemurafenib and cetuximab. Safety was similar to that in prior studies of vemurafenib for melanoma. CONCLUSIONS: BRAF V600 appears to be a targetable oncogene in some, but not all, nonmelanoma cancers. Preliminary vemurafenib activity was observed in non-small-cell lung cancer and in Erdheim-Chester disease and Langerhans'-cell histiocytosis. The histologic context is an important determinant of response in BRAF V600-mutated cancers. (Funded by F. Hoffmann-La Roche/Genentech; ClinicalTrials.gov number, NCT01524978.). <s> BIB003 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> C. Molecular Diagnostics and Drug Therapy <s> The widespread dissemination of the idea of personalized oncology has spread faster than the underlying science. The authors argue that the principles of clinical investigation need to be applied to address the many unanswered questions. <s> BIB004 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> C. Molecular Diagnostics and Drug Therapy <s> Deep learning is rapidly advancing many areas of science and technology with multiple success stories in image, text, voice and video recognition, robotics, and autonomous driving. In this paper we demonstrate how deep neural networks (DNN) trained on large transcriptional response data sets can classify various drugs to therapeutic categories solely based on their transcriptional profiles. We used the perturbation samples of 678 drugs across A549, MCF-7, and PC-3 cell lines from the LINCS Project and linked those to 12 therapeutic use categories derived from MeSH. To train the DNN, we utilized both gene level transcriptomic data and transcriptomic data processed using a pathway activation scoring algorithm, for a pooled data set of samples perturbed with different concentrations of the drug for 6 and 24 hours. In both pathway and gene level classification, DNN achieved high classification accuracy and convincingly outperformed the support vector machine (SVM) model on every multiclass classification prob... <s> BIB005
|
The alignment of clinical and molecular data in integrative data systems and improvements in using these data for disease understanding and patient treatment will be among the next great challenges. The need for precision medicine is quite apparent when looking at the limited drug response rates from the early 2000s, as published research reveals BIB001 . Thus, alternatives to the traditional "blockbuster" models are needed . Reality is even more complex: there is also heterogeneity within a particular tumor. The hypothesized cancer stem cell model asserts that within a population of tumor cells, there is only a small subset of cells that are tumourigenic (able to form tumours). These cells are termed cancer stem cells (CSCs), and are marked by their ability to both self-renew and differentiate into nontumourigenic progeny. In this model, one assumes a process of natural selection within a given tumor, which also would explain why cancer is so difficult to fight: a treatment might eliminate one strain, giving room for another strain to develop. It has been argued that this could be a major problem for the vision of a personalized medicine BIB004 . An alternative but related explanation is the clonal evolution model . The diagnostic part of precision medicine relies heavily on biomarkers. In molecular diagnostics, the term biomarker refers to any of a patient's molecules that can be measured to assess health and that can be obtained from blood, body fluids, or tissue. Biomarkers are of central importance: biomarker testing is at the center of personalized medicine and is specific, e.g., to DNA, RNA, or protein variations. Biomarkers may also test if certain proteins are overactive, in particular, if they help to promote cancer growth. A companion diagnostic is a diagnostic test (biomarker) used as a companion to a therapeutic drug to determine its applicability, e.g., efficacy and safety, to a specific patient (see http://www.fda.gov/MedicalDevices/ProductsandMedicalProcedures/InVitroDiagnostics/ucm407297.htm). Companion diagnostics are co-developed with drugs to aid in selecting or excluding patient groups for treatment. A therapy may be based on the identification of a molecule (a drug target), often a protein, whose activity needs to be modified by a drug. Pharmaceutical research tries to find drugs, so-called targeted drugs, that bind the target with the goal to influence underlying disease mechanisms. Targeted therapy uses a number of different strategies to fight tumors. For example, antibodies might be generated (e.g., monoclonal antibodies), which are man-made versions of large immune system proteins that bind to very specific target proteins on cancer cell membranes. Some targeted drugs block (inhibit) proteins that are signals for cancer cells to grow. Drugs called angiogenesis inhibitors stop tumors from making new blood vessels, which greatly limits their growth. Immunotherapy is a treatment that uses the body's own immune system to help fight cancer, e.g., by directing the patient's immune cells to attack the tumor cells. For example, the protein HER2 is a member of the human epidermal growth factor receptor family and its overexpression plays an important role in certain forms of breast cancer; HER2 is the target of the monoclonal antibody trastuzumab. While most drugs have been approved for very specific diseases, they might sometimes also be effective in other diseases. 
One reason is that the targets in both diseases might have the same alterations. The application of known drugs and compounds to treat new indications is called drug repurposing. Analytics can play a role in finding good candidates BIB002 , BIB005 . A well-known case is the pain medicine Aspirin, which was found to be effective in treating and preventing heart disease. In cancer, as another example, it could be shown that a drug that works against a mutated gene in melanoma is also active in other cancers if the respective mutation in BRAF is found BIB003 . The main advantage of drug repositioning over traditional drug development is that, since the repositioned drug has already passed a significant number of toxicity and other tests, its safety is known and the risk of failure for reasons of adverse toxicology is reduced. Thus, the introduction of a specific drug for a new disease is greatly simplified.
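To hint at how such analytics might rank repurposing candidates, the following Python sketch implements a strongly simplified variant of the transcriptional signature matching used in work like BIB005 ; all drug names and expression values are invented for illustration.

import numpy as np

# Hypothetical gene expression response signatures, one vector per drug.
signatures = {
    "reference_drug": np.array([1.2, -0.8, 0.5, 2.1, -1.0]),
    "candidate_a":    np.array([1.0, -0.6, 0.4, 1.8, -0.9]),
    "candidate_b":    np.array([-1.1, 0.9, -0.2, -2.0, 1.2]),
}

ref = signatures["reference_drug"]
scores = {name: float(np.corrcoef(ref, sig)[0, 1])
          for name, sig in signatures.items() if name != "reference_drug"}

# Drugs whose transcriptional response correlates strongly with the
# reference drug are candidates for sharing its indications.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(name, round(score, 3))

Real pipelines would of course work on thousands of genes, several cell lines, and dose/time series, and would validate any candidate experimentally.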
|
Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> D. Implementing Precision Medicine <s> Over the last decade there has been vast interest in and focus on the implementation of personalized genomic medicine. Although there is general agreement that personalized genomic medicine involves utilizing genome technology to assess individual risk and ensure the delivery of the “right treatment, for the right patient, at the right time,” different categories of stakeholders focus on different aspects of personalized genomic medicine and operationalize it in diverse ways. In order to move toward a clearer, more holistic understanding of the concept, this article begins by identifying and defining three major elements of personalized genomic medicine commonly discussed by stakeholders: molecular medicine, pharmacogenomics, and health information technology. The integration of these three elements has the potential to improve health and reduce health care costs, but it also raises many challenges. This article endeavors to address these challenges by identifying five strategic areas that will require significant investment for the successful integration of personalized genomics into clinical care: (1) health technology assessment; (2) health outcomes research; (3) education (of both health professionals and the public); (4) communication among stakeholders; and (5) the development of best practices and guidelines. While different countries and global regions display marked heterogeneity in funding of health care in the form of public, private, or blended payor systems, previous analyses of personalized genomic medicine and attendant technological innovations have been performed without due attention to this complexity. Hence, this article focuses on personalized genomic medicine in the United States as a model case study wherein a significant portion of health care payors represent private, nongovernment resources. Lessons learned from the present analysis of personalized genomic medicine could usefully inform health care systems in other global regions where payment for personalized genomic medicine will be enabled through private or hybrid public-private funding systems. <s> BIB001 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> D. Implementing Precision Medicine <s> The completion of the Human Genome Project in 2003 was surrounded by lots of excitement in the scientific and lay communities because it was a milestone, along with other advancements in technology that have revolutionized our understanding of the contributions of genetic variability in shaping health and disease. One mystery the Human Genome Project helped scientists and clinicians unravel from a health perspective was why some patients responded differently to medications from the rest of the general population. Pharmacogenomics is the study of how genes influence an individual’s response to medications. The term pharmacogenomics is often used interchangeably with the term pharmacogenetics, which usually refers to how polymorphisms in a single gene influence response to a single medication. For more than 150 FDA-approved drugs, pharmacogenomic information can be found in the product labeling describing risk for adverse drug events, genotype-specific dosing, and/or variations in pharmacokinetic and pharmacodynamic parameters. 
For a select group of medications, such as codeine and clopidogrel, pharmacogenomic information may even be highlighted in a black box warning further emphasizing the important role of our unique genetic makeup in response to medications. Inherited genome variations influence the function of gene products that determine the pharmacokinetic and pharmacodynamic properties of a particular medication. In cancer, somatically acquired genomic variations and inherited genome variations influence response to anticancer agents. In infectious diseases, genomic variations in the bacteria or virus influence antimicrobial sensitivity. Pharmacogenomic research endeavors have sought to uncover the relationship between treatment response and genomic differences since it was first characterized in the 1950s by Sir Archibald Garrod, and the term was coined in 1959 by Friedrich Vogel. Some early pharmacogenomic examples include NAT2 gene deficiency and isoniazid-induced neuropathy, G6PD gene deficiency and primaquine-induced acute hemolytic crisis, and BChE gene deficiency resulting in succinylcholine-induced prolonged apnea. The translation of these findings and others into clinical practice in a sustainable and scalable model is more of a recent initiative to further optimize patient care. <s> BIB002
|
As a major milestone, a first insurer has begun to cover the cost of sequencing the full germline and tumor genomes of cancer patients. Despite its great prospects, precision medicine still faces many challenges. The implementation will require changes and improvements on many levels, ranging from technology developments (one genome can comprise up to 400 GB of data) through social and ethical challenges to legal implications and the need for large-scale educational programs for patients, physicians, researchers, healthcare providers, insurance companies, and even politicians BIB002 . The abundance of data and the possibilities to join information sources raise the question of whether current rules for intellectual property, reimbursement, and personal privacy have to be adapted to personalized medicine. Regulatory authorities have already acknowledged those challenges and released a report titled "Paving the Way for Personalized Medicine: FDA's Role in a New Era of Medical Product Development" BIB001 . In this report, the FDA describes a framework of how to integrate genomic medicine into clinical practice and drug development. Steps to implement precision medicine include the development of regulatory scientific standards, research methods, reference material, and new tools BIB001 . Implementing and even commercializing precision medicine will demand new standards with regard to the protection of patients' privacy and that of their families. Issues arise especially for healthy individuals who have a genetic predisposition for a disease or patients who have a genetic alteration (either germline or somatic) and who are thought to be nonresponsive to standard treatments. Until a clear benefit for those persons is established, these data will have to be protected. In some cases, the person for whom the molecular data were created might not want to know the complete interpretation of those results. An important milestone regarding privacy issues in the U.S. was the Genetic Information Nondiscrimination Act (GINA) in 2008 that protects American citizens from being discriminated against based on their genetic information with respect to employment and health insurance.
|
Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> E. Big Data in Molecular Research <s> Biobanks correspond to different situations: research and technological development, medical diagnosis or therapeutic activities. Their status is not clearly defined. We aimed to investigate human biobanking in Europe, particularly in relation to organisational, economic and ethical issues in various national contexts. Data from a survey in six EU countries (France, Germany, the Netherlands, Portugal, Spain and the UK) were collected as part of a European Research Project examining human and non-human biobanking (EUROGENBANK, coordinated by Professor JC Galloux). A total of 147 institutions concerned with biobanking of human samples and data were investigated by questionnaires and interviews. Most institutions surveyed belong to the public or private non-profit-making sectors, which have a key role in biobanking. This activity is increasing in all countries because few samples are discarded and genetic research is proliferating. Collections vary in size, many being small and only a few very large. Their purpose is often research, or research and healthcare, mostly in the context of disease studies. A specific budget is very rarely allocated to biobanking and costs are not often evaluated. Samples are usually provided free of charge and gifts and exchanges are the common rule. Good practice guidelines are generally followed and quality controls are performed but quality procedures are not always clearly explained. Associated data are usually computerised (identified or identifiable samples). Biobankers generally favour centralisation of data rather than of samples. Legal and ethical harmonisation within Europe is considered likely to facilitate international collaboration. We propose a series of recommendations and suggestions arising from the EUROGENBANK project. <s> BIB001 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> E. Big Data in Molecular Research <s> Breast cancer is the most common cancer among women. Common variants at 27 loci have been identified as associated with susceptibility to breast cancer, and these account for ∼9% of the familial risk of the disease. We report here a meta-analysis of 9 genome-wide association studies, including 10,052 breast cancer cases and 12,575 controls of European ancestry, from which we selected 29,807 SNPs for further genotyping. These SNPs were genotyped in 45,290 cases and 41,880 controls of European ancestry from 41 studies in the Breast Cancer Association Consortium (BCAC). The SNPs were genotyped as part of a collaborative genotyping experiment involving four consortia (Collaborative Oncological Gene-environment Study, COGS) and used a custom Illumina iSelect genotyping array, iCOGS, comprising more than 200,000 SNPs. We identified SNPs at 41 new breast cancer susceptibility loci at genome-wide significance (P < 5 × 10(-8)). Further analyses suggest that more than 1,000 additional loci are involved in breast cancer susceptibility. <s> BIB002 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> E. Big Data in Molecular Research <s> The American Medical Association asked RAND Health to characterize the factors that affect physician professional satisfaction. 
RAND researchers sought to identify high-priority determinants of professional satisfaction by gathering data from 30 physician practices in six states, using a combination of surveys and semistructured interviews. This article presents the results of the subsequent analysis. <s> BIB003 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> E. Big Data in Molecular Research <s> BACKGROUND: Even though it takes up such a large part of all clinicians' working day, the medical literature on documentation and its value is sparse. METHODS: Medline searches combining the terms medical records, documentation, time, and value or efficacy or benefit yielded only 147 articles. This review is based on the relevant articles selected from this search and additional studies gathered from the personal experience of the authors and their colleagues. RESULTS: Documentation now occupies a quarter to half of doctors' time, yet much of the information collected is of dubious or unproven value. Most medical records departments still use the traditional paper chart, and there is considerable debate on the benefits of electronic medical records (EMRs). Although an EMR contains a lot more information than a paper record, clinicians do not find it easy to get useful information out of them. Unlike the paper chart, narrative is difficult to enter into most EMRs, so that they do not adequately communicate the patient's "story" to clinicians. Recent innovations have the potential to address these issues. CONCLUSION: Although documentation is widespread throughout the health care industry, there has been almost no formal research into its value, on how to enhance its value, or on whether the time spent on it has negative effects on patient care. <s> BIB004 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> E. Big Data in Molecular Research <s> Background: The use of next-generation sequencing has significantly advanced personalized medicine for patients (pts) with breast cancer. Despite this technological advancement, there remains the challenge of understanding how and if tumor heterogeneity can confound molecular analysis and treatment decisions. It has been shown that the expression of ER, PR, and HER2 can vary widely within different areas of the same tumor and between matched primary and metastatic lesions. The "Intensive Trial of OMics in Cancer"-001 (ITOMIC-001; NCT01957514) enrolls pts with metastatic TNBC who are platinum-naive and scheduled to receive cisplatin. Multiple biopsies of up to 7 metastatic sites are performed prior to cisplatin and repeated upon completion of cisplatin and following subsequent therapies. A subset of specimens is chosen for DNA sequencing, RNA sequencing, and quantitative proteomics. We explored the discordance of genomic and proteomic alterations for intrapatient and temporal heterogeneity in pts with TNBC, and the potential benefit of panomic analysis to better inform treatment decisions. Methods: Between 7 and 107 tumor samples/biopsy specimens were obtained from each pt from 1-23 different time points. Blood samples were collected for matched tumor-normal genomic analysis. DNA sequencing data were processed using Contraster; RNASeq data confirmed the presence of gene mutations and was used to identify mutational and transcript abundance. PARADIGM was used to determine associations between gene mutations and signaling pathways. 
Selected reaction monitoring-mass spectrometry (SRM-MS) was used for proteomics analysis. Results: Almost all pts had loss of TP53 (common in TNBC), and 5 pts had germline BRCA1/2 events, some exhibiting a signature of mutations corresponding to a mismatch repair defect in ≥1 pt. FGFR1/2/3 mutations/amplifications occurred in 5 pts. Three of 12 pts (25%) achieved partial responses after receiving treatments (post cisplatin) based on the molecular profile of their tumor: 1 pt with two FGFR2 activating mutations treated with ponatinib, 1 with a germline BRCA2 mutation treated with veliparib, and 1 with highly expressed Gpnmb treated with an antibody drug conjugate against Gpnmb. Tumor samples showed increased mutational and rearrangement burdens over time but shared mutational characteristics that were unique to each pt. Through the shared alterations across time points for 3 pts, it was possible to reconstruct the clonal history and heterogeneity of the tumors as various therapeutic approaches were attempted. Conclusions: Here we show in TNBC, intrapatient and temporal heterogeneity that may lead to a lack of response to identified targeted therapies. Tumor samples taken over time from the same pt become enriched for more complex genomic structures post therapy but share mutational characteristics, indicating the presence of recurrent tumor populations. This study enabled us to reconstruct the clonal history and heterogeneity of tumors across space (metastatic vs primary at t=0) and time, illustrating the need for comprehensive molecular analysis and combination/multi-targeted therapeutics for optimal treatment in TNBC. Citation Format: Soon-Shiong P, Rabizadeh S, Benz S, Cecchi F, Hembrough T, Mahen E, Burton K, Song C, Senecal F, Schmechel S, Pritchard C, Dorschner M, Blau S, Blau A. Integrating whole exome sequencing data with RNAseq and quantitative proteomics to better inform clinical treatment decisions in patients with metastatic triple negative breast cancer. [abstract]. In: Proceedings of the Thirty-Eighth Annual CTRC-AACR San Antonio Breast Cancer Symposium: 2015 Dec 8-12; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2016;76(4 Suppl):Abstract nr P6-05-08. <s> BIB005
|
The aim is to use the newly gained insight into etiology, pathogenesis, and progression of diseases for novel treatments and prevention. Large international consortia were formed over the last years, integrating data from often several hundreds of thousands of individuals to compare genetic and environmental information of healthy individuals with diseased patients. Several of those consortia have built superconsortia, merging data and biomaterials of several large-scale consortia. One example is the OncoArray Network BIB004 , a GWAS study in which more than 400 000 individuals have been genotyped for more than 570 000 genetic variants. Diseases included in this effort are breast cancer, ovarian cancer, colon cancer, lung cancer, and prostate cancer. The GWAS studies examine the correlation between germline gene variations and phenotypic characteristics. Most of those studies explain a certain amount of attributable risk for a disease within a population. For an individual, the statistical effects are rather small, and implementation into healthcare is highly dependent on programs which would utilize this information in an epidemiological way, i.e., by selecting patients for individualized prevention or early detection of a disease. This requires tens if not hundreds of thousands or millions of individual decisions in a population, which will require highly scalable Big Data technology. For the case of breast cancer, GWAS led to the discovery of around 100 risk genes BIB002 . Biobanks are great sources for molecular research; they store biological samples (often cancerous tissue) for use in research like genomics and personalized medicine BIB001 . The 1000 Genomes Project [128], launched in 2008, was an effort to sequence the genomes of at least 1000 anonymous participants. Many rare variations were identified, and eight structural-variation classes were analyzed. It is followed by the 100 000 Genomes Project, which was launched in 2013 and aims to sequence 100 000 genomes from U.K.'s NHS patients by 2017, focusing on patients with rare diseases and more common cancers BIB003 . An interesting and less costly alternative is the distributed collection of genomic data from patients who donate their decentrally analyzed genome to central projects. From a data management perspective, these decentralized approaches require innovative ways of storing and analyzing huge amounts of data employing distributed computing. As stated before, complex diseases involve a number of causes. Unfortunately, studying the interaction of disease causes involving, for example, several gene variations requires even larger sample sizes. Similarly, the study of complex patterns behind spatiotemporal disease progression requires the acquisition and management of huge data samples BIB005 .
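As a sketch of how such risk variants could be used for individualized prevention, the following Python snippet computes a toy polygenic risk score as a weighted sum of risk-allele counts; the effect sizes and the genotype are invented, and a real score over the roughly 100 known breast cancer loci BIB002 would have to be calibrated against a reference population before any clinical use.

import numpy as np

# Per-SNP effect sizes (log odds ratios) and one person's
# risk-allele counts (0, 1, or 2 per SNP); invented numbers.
log_odds_ratios = np.array([0.10, 0.05, 0.08])
genotype = np.array([2, 1, 0])

prs = float(np.dot(log_odds_ratios, genotype))
print(f"polygenic risk score (log-odds scale): {prs:.3f}")

Applying such a score to millions of individuals in prevention programs is precisely where the highly scalable Big Data technology mentioned above becomes necessary.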
|
Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> F. Digitization Challenges in Precision Medicine <s> There is a growing need for patient-specific and holistic modelling of the heart to support comprehensive disease assessment and intervention planning as well as prediction of therapeutic outcomes. We propose a patient-specific model of the whole human heart, which integrates morphology, dynamics and haemodynamic parameters at the organ level. The modelled cardiac structures are robustly estimated from four-dimensional cardiac computed tomography (CT), including all four chambers and valves as well as the ascending aorta and pulmonary artery. The patient-specific geometry serves as an input to a three-dimensional Navier–Stokes solver that derives realistic haemodynamics, constrained by the local anatomy, along the entire heart cycle. We evaluated our framework with various heart pathologies and the results correlate with relevant literature reports. <s> BIB001 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> F. Digitization Challenges in Precision Medicine <s> Objective: Clinicians' ability to use and interpret genetic information depends upon how those data are displayed in electronic health records (EHRs). There is a critical need to develop systems to effectively display genetic information in EHRs and augment clinical decision support (CDS). Materials and Methods: The National Institutes of Health (NIH)-sponsored Clinical Sequencing Exploratory Research and Electronic Medical Records & Genomics EHR Working Groups conducted a multiphase, iterative process involving working group discussions and 2 surveys in order to determine how genetic and genomic information are currently displayed in EHRs, envision optimal uses for different types of genetic or genomic information, and prioritize areas for EHR improvement. Results: There is substantial heterogeneity in how genetic information enters and is documented in EHR systems. Most institutions indicated that genetic information was displayed in multiple locations in their EHRs. Among surveyed institutions, genetic information enters the EHR through multiple laboratory sources and through clinician notes. For laboratory-based data, the source laboratory was the main determinant of the location of genetic information in the EHR. The highest priority recommendation was to address the need to implement CDS mechanisms and content for decision support for medically actionable genetic information. Conclusion: Heterogeneity of genetic information flow and importance of source laboratory, rather than clinical content, as a determinant of information representation are major barriers to using genetic information optimally in patient care. Greater effort to develop interoperable systems to receive and consistently display genetic and/or genomic information and alert clinicians to genomic-dependent improvements to clinical care is recommended. <s> BIB002 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> F. Digitization Challenges in Precision Medicine <s> Precision medicine aims to combine comprehensive data collected over time about an individual's genetics, environment, and lifestyle, to advance disease understanding and interception, aid drug discovery, and ensure delivery of appropriate therapies. 
Considerable public and private resources have been deployed to harness the potential value of big data derived from electronic health records, ‘omics technologies, imaging, and mobile health in advancing these goals. While both technical and sociopolitical challenges in implementation remain, we believe that consolidating these data into comprehensive and coherent bodies will aid in transforming healthcare. Overcoming these challenges will see the effective, efficient, and secure use of big data disrupt the practice of medicine. It will have significant implications for drug discovery and development as well as in the provisioning, utilization and economics of health care delivery going forward; ultimately, it will enhance the quality of care for the... <s> BIB003 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> F. Digitization Challenges in Precision Medicine <s> The ever-increasing volume of scientific discoveries, clinical knowledge, novel diagnostic tools, and treatment options juxtaposed with rising costs in health care challenge physicians to identify, prioritize, and use new information rapidly to deliver efficient and high-quality care to a growing and aging patient population. CancerLinQ, a rapid learning health care system in oncology, is an initiative of the American Society of Clinical Oncology and its Institute for Quality that addresses these challenges by collecting information from the electronic health records of large numbers of patients with cancer. CancerLinQ is first and foremost a quality measurement and reporting system through which oncologists can harness the depth and power of their patients' clinical records and other data to assess, monitor, and improve the care they deliver. However, in light of privacy and security concerns with regard to collection, use, and disclosure of patient information, this article addresses the need to collect protected health information as defined under the Health Insurance Portability and Accountability Act of 1996 to drive rapid learning through CancerLinQ. <s> BIB004
|
Recent publications estimate that storage needs for molecular data will exceed by far those of Twitter or YouTube, which is of great concern to researchers and healthcare professionals alike. This perception is supported by the many large-scale population-based initiatives (e.g., the aforementioned Genomics England 100K project or the NIH precision medicine initiative) that will collect genomic and other biomedical data from individuals for the next five to ten years. A comprehensive and recent overview of these cohort studies from publicly or privately funded entities can be found in BIB003 . The experiences gained from these initiatives will reveal interesting insights and lessons learned about data management of genomic and other "omics" data (e.g., transcriptomics, proteomics, metabolomics, epigenomics), emerging standards, and data privacy topics such as informed consent. To consistently improve patient outcome and medical value, it will become very important to bridge the gap between all the previously mentioned "omics" data and clinical outcome. Indeed, clinical sequencing for advanced patient diagnosis is becoming more and more common, but many questions still remain, e.g., how, where, and what to store from genomic data in the EHR records. Here, important consortia such as Electronic Medical Records and Genomics (eMERGE) and Clinical Sequencing Exploratory Research (CSER) will hopefully pave the way toward a more integrated view of genomics in the clinic BIB002 . Structuring, organizing, and synchronizing different terminologies across clinical data repositories is the prerequisite to make clinical data meaningful. In that context, companies such as Flatiron Health have developed powerful tools and processes to tackle this data integration challenge and offer structured knowledge bases that can yield new insights into the fight against cancer. BIB001 In many current efforts, data are aggregated across many patients with the goal of developing Clinical Decision Support (CDS) systems. The American Society of Clinical Oncology (ASCO) launched a program named CancerLinQ that envisions to learn not only from trial data but also from the mass of EHR records. A goal is that doctors get support in their decision making by matching their patients' data with outcomes of patients across the United States. Patients gain confidence if their treatment decisions are based on their personal profile and on the shared experiences of similar cancer cases across the country. Finally, researchers can access this massive amount of deidentified health information to generate new hypotheses for research. To make CancerLinQ's vision happen, several different data types and technologies have to be orchestrated, ranging from longitudinal patient records, cohort analyses, and quality metrics to interactive reporting and text analytics BIB004 . Interoperability between different EHR systems will be another crucial success factor for the CancerLinQ initiative.
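Returning to the storage estimates mentioned at the start of this section, a simple back-of-envelope calculation in Python illustrates why these concerns are plausible; all figures are assumptions for illustration, using the per-genome upper bound quoted earlier in this survey.

# Rough storage estimate for a population-scale sequencing program.
genomes = 100_000       # assumed cohort size, e.g., a Genomics England-scale project
gb_per_genome = 400     # upper-bound raw data volume per genome (see above)
total_pb = genomes * gb_per_genome / 1e6   # gigabytes -> petabytes
print(f"~{total_pb:.0f} PB of raw data")   # ~40 PB, before replication and backups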
|
Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> G. Traditional IT Players are Entering Precision Medicine <s> IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient. <s> BIB001 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> G. Traditional IT Players are Entering Precision Medicine <s> The ever-increasing volume of scientific discoveries, clinical knowledge, novel diagnostic tools, and treatment options juxtaposed with rising costs in health care challenge physicians to identify, prioritize, and use new information rapidly to deliver efficient and high-quality care to a growing and aging patient population. CancerLinQ, a rapid learning health care system in oncology, is an initiative of the American Society of Clinical Oncology and its Institute for Quality that addresses these challenges by collecting information from the electronic health records of large numbers of patients with cancer. CancerLinQ is first and foremost a quality measurement and reporting system through which oncologists can harness the depth and power of their patients' clinical records and other data to assess, monitor, and improve the care they deliver. However, in light of privacy and security concerns with regard to collection, use, and disclosure of patient information, this article addresses the need to collect protected health information as defined under the Health Insurance Portability and Accountability Act of 1996 to drive rapid learning through CancerLinQ. <s> BIB002
|
The outlined data management and analytics challenges in precision medicine are being addressed by a number of established IT companies. Here are some examples. SAP has teamed up with the American Society of Clinical Oncology (ASCO) to implement CancerLinQ BIB002 . SAP's in-memory technology platform SAP HANA will play a crucial role in providing the infrastructure and algorithms to analyze the vast amounts of diverse data to provide clinical decision support. IBM, with its Watson technology BIB001 , has recently started a collaboration with the New York Genome Center (NYGC) to generate and analyze the exome, complete genome data, and epigenetic data linked to clinical outcomes from participating patients. The partners plan to generate an open knowledge base using the generated data. Dell is partnering with the Translational Genomics Research Institute (TGen) to tackle pediatric cancer in Europe and in the Middle East. In addition, Dell recently announced that its Cloud Clinical Archive (currently storing over 11 billion medical images and around 159 million clinical studies from multiple healthcare providers) will support storage and management of genomics data. The long-term goal will be to combine medical imaging diagnosis with advanced genomics to impact patient care. Intel is also looking into the precision medicine space. Saffron, a cognitive computing company that Intel acquired in 2015, is studying how users can gain additional insights from the above-mentioned Dell Cloud Clinical Archive. The company is also offering NLP capabilities, and the platform can be compared to IBM Watson's offering. In addition, within the context of Barack Obama's Precision Medicine Initiative, Intel launched a Precision Medicine Acceleration Program. Microsoft also supports the U.S. Government's Precision Medicine Initiative by hosting genomic data sets in Microsoft's Azure cloud platform by the end of 2016 free of charge. Amazon Web Services (AWS) is offering HIPAA-compliant cloud storage and data security. Therefore, AWS often functions as a backbone of genomics data management platforms, and several companies such as Seven Bridges or DNAnexus rely on the AWS technology. As a concrete example, the Cancer Genomics Cloud (CGC), which includes the well-known "The Cancer Genome Atlas" (TCGA), is operated by Seven Bridges and runs on the AWS cloud. Alphabet Inc. is investing heavily in precision medicine. This happens mainly either through the many investments made by Google Ventures or through in-house research and development activities of subsidiaries such as Verily and Calico. Investments in companies related to precision medicine from Google Ventures include Flatiron Health, Foundation Medicine, and DNAnexus, among others. Among Google's initiatives are, e.g., Google Genomics and the Google Baseline Study. Google Genomics is Google's HIPAA-compliant cloud platform for storing and managing genomics data. Besides offering access to publicly available data sets such as the TCGA, customers can load their own genomic data sets and run analyses on the data through the offered API. 
The Google Baseline Study aims to collect different types of data, such as molecular, imaging, clinical, and patient-engagement data, to understand patterns that are typical for healthy individuals. All these efforts illustrate that information technology is moving quickly into personalized healthcare and therefore will be a main enabler to realize the goals of precision medicine. The crucial challenge is to turn these vast amounts of data into knowledge and insights.
|
Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> H. A View to the Future: a Truly " n = 1 "-Medicine <s> First-generation, E1-deleted adenovirus subtype 5 (Ad5)-based vectors, although promising platforms for use as cancer vaccines, are impeded in activity by naturally occurring or induced Ad-specific neutralizing antibodies. Ad5-based vectors with deletions of the E1 and the E2b regions (Ad5 [E1-, E2b-]), the latter encoding the DNA polymerase and the pre-terminal protein, by virtue of diminished late phase viral protein expression, were hypothesized to avoid immunological clearance and induce more potent immune responses against the encoded tumor antigen transgene in Ad-immune hosts. Indeed, multiple homologous immunizations with Ad5 [E1-, E2b-]-CEA(6D), encoding the tumor antigen carcinoembryonic antigen (CEA), induced CEA-specific cell-mediated immune (CMI) responses with antitumor activity in mice despite the presence of preexisting or induced Ad5-neutralizing antibody. In the present phase I/II study, cohorts of patients with advanced colorectal cancer were immunized with escalating doses of Ad5 [E1-, E2b-]-CEA(6D). CEA-specific CMI responses were observed despite the presence of preexisting Ad5 immunity in a majority (61.3 %) of patients. Importantly, there was minimal toxicity, and overall patient survival (48 % at 12 months) was similar regardless of preexisting Ad5 neutralizing antibody titers. The results demonstrate that, in cancer patients, the novel Ad5 [E1-, E2b-] gene delivery platform generates significant CMI responses to the tumor antigen CEA in the setting of both naturally acquired and immunization-induced Ad5-specific immunity. <s> BIB001 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> H. A View to the Future: a Truly " n = 1 "-Medicine <s> The search for specificity in cancers has been a holy grail in cancer immunology. Cancer geneticists have long known that cancers harbor transforming and other mutations. Immunologists have long known that inbred mice can be immunized against syngeneic cancers, indicating the existence of cancer-specific antigens. With the technological advances in high-throughput DNA sequencing and bioinformatics, the genetic and immunologic lines of inquiry are now converging to provide definitive evidence that human cancers are vastly different from normal tissues at the genetic level, and that some of these differences are recognized by the immune system. The very vastness of genetic changes in cancers now raises different question. Which of the many cancer-specific genetic (genomic) changes are actually recognized by the immune system, and why? New observations are now beginning to probe these vital issues with unprecedented resolution and are informing a new generation of studies in human cancer immunotherapy. Cancer Immunol Res; 3(9); 969–77. ©2015 AACR . <s> BIB002 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> H. A View to the Future: a Truly " n = 1 "-Medicine <s> Massively parallel sequencing approaches are beginning to be used clinically to characterize individual patient tumors and to select therapies based on the identified mutations. 
A major question in these analyses is the extent to which these methods identify clinically actionable alterations and whether the examination of the tumor tissue alone is sufficient or whether matched normal DNA should also be analyzed to accurately identify tumor-specific (somatic) alterations. To address these issues, we comprehensively evaluated 815 tumor-normal paired samples from patients of 15 tumor types. We identified genomic alterations using next-generation sequencing of whole exomes or 111 targeted genes that were validated with sensitivities >95% and >99%, respectively, and specificities >99.99%. These analyses revealed an average of 140 and 4.3 somatic mutations per exome and targeted analysis, respectively. More than 75% of cases had somatic alterations in genes associated with known therapies or current clinical trials. Analyses of matched normal DNA identified germline alterations in cancer-predisposing genes in 3% of patients with apparently sporadic cancers. In contrast, a tumor-only sequencing approach could not definitively identify germline changes in cancer-predisposing genes and led to additional false-positive findings comprising 31% and 65% of alterations identified in targeted and exome analyses, respectively, including in potentially actionable genes. These data suggest that matched tumor-normal sequencing analyses are essential for precise identification and interpretation of somatic and germline alterations and have important implications for the diagnostic and therapeutic management of cancer patients. <s> BIB003 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> H. A View to the Future: a Truly " n = 1 "-Medicine <s> A phase 1/2 clinical trial evaluating dosing, safety, immunogenicity, and overall survival on metastatic colorectal cancer (mCRC) patients after immunotherapy with an advanced-generation Ad5 [E1-, E2b-]-CEA(6D) vaccine was performed. We report our extended observations on long-term overall survival and further immune analyses on a subset of treated patients including assessment of cytolytic T cell responses, T regulatory (Treg) to T effector (Teff) cell ratios, flow cytometry on peripheral blood mononuclear cells (PBMCs), and determination of HLA-A2 status. An overall survival of 20 % (median survival 11 months) was observed during long-term follow-up, and no long-term adverse effects were reported. Cytolytic T cell responses increased after immunizations, and cell-mediated immune (CMI) responses were induced whether or not patients were HLA-A2 positive or Ad5 immune. PBMC samples from a small subset of patients were available for follow-up immune analyses. It was observed that the levels of carcinoembryonic antigen (CEA)-specific CMI activity decreased from their peak values during follow-up in five patients analyzed. Preliminary results revealed that activated CD4+ and CD8+ T cells were detected in a post-immunization sample exhibiting high CMI activity. Treg to Teff cell ratios were assessed, and samples from three of five patients exhibited a decrease in Treg to Teff cell ratio during the treatment protocol. Based upon the favorable safety and immunogenicity data obtained, we plan to perform an extensive immunologic and survival analysis on mCRC patients to be enrolled in a randomized/controlled clinical trial that investigates Ad5 [E1-, E2b-]-CEA(6D) as a single agent with booster immunizations. 
<s> BIB004 </s> Going Digital: A Survey on Digitalization and Large-Scale Data Analytics in Healthcare <s> H. A View to the Future: a Truly " n = 1 "-Medicine <s> Somatic mutations binding to the patient's MHC and recognized by autologous T cells (neoepitopes) are ideal cancer vaccine targets. They combine a favorable safety profile due to a lack of expression in healthy tissues with a high likelihood of immunogenicity, as T cells recognizing neoepitopes are not shaped by central immune tolerance. Proteins mutated in cancer (neoantigens) shared by patients have been explored as vaccine targets for many years. Shared ("public") mutations, however, are rare, as the vast majority of cancer mutations in a given tumor are unique for the individual patient. Recently, the novel concept of truly individualized cancer vaccination emerged, which exploits the vast source of patient-specific "private" mutations. Concurrence of scientific advances and technological breakthroughs enables the rapid, cost-efficient, and comprehensive mapping of the "mutanome," which is the entirety of somatic mutations in an individual tumor, and the rational selection of neoepitopes. How to transform tumor mutanome data to actionable knowledge for tailoring individualized vaccines "on demand" has become a novel research field with paradigm-shifting potential. This review gives an overview with particular focus on the clinical development of such vaccines. Clin Cancer Res; 22(8); 1885–96. ©2016 AACR . See all articles in this CCR Focus section, "Opportunities and Challenges in Cancer Immunotherapy." <s> BIB005
|
Dramatic improvements in the quality and speed of genomic sequencing and analysis as a clinical diagnostic tool, combined with the innovations propelling immuno-oncology, are paving the way for a new era of truly personalized cancer treatment. At the heart of this new hope is the newfound ability to rapidly identify and target tumor cells carrying DNA mutations unique to each cancer patient. The altered proteins encoded by these mutated genes give rise to so-called "neoepitopes," which serve as a molecular address to direct and redirect immune cells for tumor killing and to establish long-term immunity. Neoepitopes arise from genetic alterations present specifically in a patient's tumor (but not in normal tissue) that result in novel proteins; the immune system can therefore be directed against them with minimal off-target toxicity. Moreover, it is highly unlikely that the same neoepitopes occur in other patients, and if they do, then only in small groups. Treating a patient on the basis of his or her individual neoepitopes, with medicine manufactured in real time, is therefore a vision of real-life " n = 1 " medicine BIB005 , BIB002 .

Identifying neoepitopes for each patient is made possible by high-throughput whole-genome or whole-exome sequencing and by the direct comparison of tumor DNA with the patient's own normal DNA. The former widens the search for druggable targets (neoepitopes) to the >99% of the genome deemed untargetable or unimportant by panel sequencing. The latter reduces the high false-positive error rates associated with tumor-only sequencing techniques BIB003 . Precisely individualizing neoepitope-targeted treatments further requires confirmation that the mutated genes are actually expressed, which avoids another source of false positives, and that the altered protein can induce an immune response. A simplified sketch of such a candidate-selection step is given below.
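To make the selection logic above concrete, the following is a minimal sketch of a neoepitope candidate filter, assuming that somatic variant calls, tumor RNA-seq expression values, and MHC binding predictions have already been produced by upstream analyses. All field names and thresholds are illustrative assumptions; a production pipeline would rely on dedicated tools (e.g., a somatic variant caller run against the matched normal, and a binding predictor such as NetMHCpan) rather than this simplified logic.

```python
# Minimal sketch of neoepitope candidate selection. All field names and
# thresholds are illustrative assumptions, not values from any real pipeline.

from dataclasses import dataclass

@dataclass
class Variant:
    gene: str
    mutant_peptide: str      # peptide harboring the somatic mutation
    tumor_vaf: float         # variant allele fraction in tumor reads
    normal_vaf: float        # variant allele fraction in matched normal reads
    rna_tpm: float           # expression of the gene in tumor RNA-seq (TPM)
    mhc_affinity_nm: float   # predicted MHC binding affinity (nM, lower = stronger)

def select_neoepitope_candidates(variants, min_tumor_vaf=0.05,
                                 max_normal_vaf=0.01, min_tpm=1.0,
                                 max_affinity_nm=500.0):
    """Keep variants that are (1) somatic -- present in tumor reads but absent
    from the matched normal, the comparison that suppresses germline false
    positives -- (2) actually expressed, and (3) predicted to bind the
    patient's MHC."""
    candidates = []
    for v in variants:
        is_somatic = v.tumor_vaf >= min_tumor_vaf and v.normal_vaf <= max_normal_vaf
        is_expressed = v.rna_tpm >= min_tpm
        is_presented = v.mhc_affinity_nm <= max_affinity_nm
        if is_somatic and is_expressed and is_presented:
            candidates.append(v)
    # Rank the strongest predicted binders first for vaccine/CAR target selection.
    return sorted(candidates, key=lambda v: v.mhc_affinity_nm)

if __name__ == "__main__":
    demo = [
        Variant("KRAS", "VVGAGGVGK", 0.32, 0.00, 45.0, 120.0),   # somatic, expressed, binder
        Variant("BRCA2", "ACDEFGHIK", 0.48, 0.47, 12.0, 300.0),  # germline: seen in normal too
        Variant("TP53", "LMNPQRSTV", 0.21, 0.00, 0.1, 80.0),     # somatic but not expressed
    ]
    for v in select_neoepitope_candidates(demo):
        print(v.gene, v.mutant_peptide)   # prints only the KRAS candidate
```

In practice, each of the three filters corresponds to a separate heavyweight analysis (somatic calling against the matched normal, transcriptome quantification, and HLA typing plus binding prediction); the sketch only illustrates how their outputs combine into a ranked candidate list.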
If a tumor is found to express neoepitopes unique to it, these can serve as a "molecular address" for the immune system, and there is a strong rationale for delivering them to the immune system via an immunogenic vehicle such as a vaccine virus. One such vehicle is the adenovirus, which can be engineered to encode many neoepitopes within its DNA; upon injection, it locally infects dendritic cells (as part of the immune system), which then present the identified neoepitopes to immune effector cells and trigger an immune response against the tumor cells. Despite great promise, the use of adenoviruses or other foreign delivery vehicles has been hindered by the preexistence or induction of neutralizing antibodies against them by the patient's immune system. This limitation has been overcome by engineered adenoviruses capable of safely vaccinating and revaccinating against hundreds of neoepitopes and tumor-associated antigens despite preexisting immunity against adenovirus BIB001 . Remarkable results have been published demonstrating the delivery of tumor-associated antigens by this engineered adenovirus in a cohort of late-stage colorectal cancer patients BIB004 . A more recent development is the engineering and application of immune cells (T cells and NK cells) that express antibodies on their surface as part of a chimeric antigen receptor (CAR) for the direct targeting of tumor cells expressing the cognate antigens. One particular approach, an off-the-shelf human NK cell line dubbed NK-92, can be engineered to produce virtually any CAR. These cells are now being engineered to produce CARs (dubbed taNKs) that target neoepitopes found to be expressed by an individual cancer patient's tumor cells, enabling a novel, truly personalized immunotherapeutic approach to fighting cancer. For this and many other reasons, the discovery of neoepitopes has the potential to be a watershed moment in the war against cancer. These examples show that harnessing the immune system requires yet another layer of data, leading to a true " n = 1 " medicine.

One of the challenges with neoepitope discovery and targeting is the management of big data: teraFLOPS of compute in a cloud environment are required to process terabytes of sequencing data, including whole-genome and/or whole-exome sequencing, RNA sequencing, and molecular modeling of the immune presentation of neoepitopes. The heterogeneity, analysis, and long-term storage of data from multiple biopsies per patient pose further challenges. These activities require HIPAA-compliant compute and storage, as well as high-speed, large-bandwidth connectivity for moving sequence data from sequencing labs to supercompute/cloud environments rapidly enough that neoepitope-targeting therapies can be derived and delivered in actionable time for each patient. Meeting these demands requires significant infrastructure and resources, already realized by some private Big Data supercompute clouds interconnected by dedicated fiber capable of transporting terabytes of data at terabits per second. Such infrastructures were originally developed for financial trading markets but are now being retrofitted to the needs of sequencing analysis and neoepitope discovery.
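As a rough illustration of these data volumes, the following back-of-envelope sketch estimates the raw sequencing footprint of a single tumor-normal pair with RNA-seq and the time needed to move it over links of different speeds. The coverage depths, the per-base storage factor, and the RNA-seq file size are illustrative assumptions, not measurements of any particular platform.

```python
# Back-of-envelope estimate of per-patient sequencing data volume and
# transfer times. All sizes below are illustrative assumptions.

GENOME_SIZE_GB = 3.1   # haploid human genome, ~3.1 Gbp
BYTES_PER_BASE = 1.0   # rough factor folding FASTQ/BAM overhead into ~1 byte/base

def wgs_size_gb(coverage):
    """Approximate raw data size for whole-genome sequencing at a given depth."""
    return GENOME_SIZE_GB * coverage * BYTES_PER_BASE

# One tumor-normal pair with RNA-seq: deep tumor WGS (100x), standard
# normal WGS (30x), plus an assumed ~50 GB transcriptome run.
per_patient_gb = wgs_size_gb(100) + wgs_size_gb(30) + 50

for label, gbit_per_s in [("1 Gb/s link", 1), ("10 Gb/s link", 10), ("1 Tb/s fiber", 1000)]:
    seconds = per_patient_gb * 8 / gbit_per_s   # GB -> Gb, then divide by link speed
    print(f"{per_patient_gb:.0f} GB over {label}: {seconds / 3600:.3f} h")
```

Under these assumptions, a single patient's raw data already occupies roughly an hour on a 1 Gb/s link, which makes the dedicated terabit-class fiber mentioned above a practical necessity once many patients are sequenced and analyzed in parallel.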
|