Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> VII. CHALLENGES AND PROSPECTS <s> A 300 GHz transmission system, designed for terahertz communication channel modelling and propagation studies, is introduced. It consists of autarkic transmitter and detector units based on Schottky diode mixer technology. The system performance is characterised with regard to link budget and noise. For demonstration, analogue video signals have been transmitted over distances of up to 22 m. <s> BIB001 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> VII. CHALLENGES AND PROSPECTS <s> The performance of large multi-core chips has been limited by high power dissipation and latency problems associated with conventional interconnect topologies. On-chip wireless interconnect architectures with associated on-chip antennas have recently been investigated as a possible alternative to wired interconnects. In this paper, we propose the design for an on-chip planar log-periodic antenna with wide bandwidth and end-fire directivity in the millimeter and/or microwave range of frequencies to improve the signal transmission characteristics of wireless interconnects. A two-port antenna network is simulated for 60 GHz in HFSS, and the radiation pattern and the scattering matrix parameters are presented. <s> BIB002 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> VII. CHALLENGES AND PROSPECTS <s> SoC (system on chip) technology has rapidly developed in recent years, stimulating emerging research areas such as investigating the efficacy of wireless network interconnection within a single chip or between multiple chips. However, the design of the on-chip antenna faces the challenge of obtaining high radiation efficiency and transmission gain due to the conductive loss of the silicon substrate. A new on-chip propagation mechanism of radio waves, which takes advantage of the un-doped silicon layer, is developed in order to overcome this challenge. It was found that by properly designing the dimensions of the silicon wafer, the un-doped silicon layer is able to act like a waveguide. Most of the energy is directed to the approximately lossless un-doped silicon layer of high resistivity instead of attenuating in the doped silicon substrate or radiating to the air. HFSS modeling and simulation results are provided to show that the efficiency, gain, and directivity of the on-chip antenna are greatly improved. In addition, this type of antenna can be easily reconfigured, which, as a result, makes wireless SoCs with wireless interconnects or even a wireless network on a PCB (printed circuit board) possible. <s> BIB003 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> VII. CHALLENGES AND PROSPECTS <s> This letter demonstrates the feasibility of applying I/O pins as chip Tx/Rx antennas for implementing wireless inter/intra-chip communications (WIICs). An innovative printed circuit board (PCB) medium is presented as a signal propagation channel, which is specially bounded by a metamaterial electromagnetic wave absorber to mitigate electromagnetic environmental pollution. Presented is a 20.4-GHz WIIC communication system, mainly including a transmitter and a receiver.
The bit-error-rate (BER) performance of a coherent binary phase-shift keying interconnect system is evaluated. It is shown that the system performance degrades as the separation distance of the transceivers increases. A data rate of 1 Gb/s with a BER at the level of 10⁻⁵ on the PCB investigated is achieved for a transmitted power of 10 dBm. <s> BIB004 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> VII. CHALLENGES AND PROSPECTS <s> We propose a novel antenna design enabled by 3-D printing technology for future wireless intrachip interconnects aiming at applications of multicore architectures and system-on-chips. In our proposed design we use vertical quarter-wavelength monopoles at 160 GHz on a ground plane to avoid the low antenna radiation efficiency caused by the silicon substrate. The monopoles are surrounded by a specially designed dielectric property distribution. This additional degree of freedom in design, enabled by 3-D printing technology, is used to tailor the electromagnetic wave propagation. As a result, the desired wireless link gain is enhanced and the undesired spatial crosstalk is reduced. Simulation results show that the proposed dielectric loading approach improves the desired link gain by 8–15 dB and reduces the crosstalk by 9–23 dB from 155 to 165 GHz. As a proof-of-concept, a 60 GHz prototype is designed, fabricated, and characterized. Our measurement results match the simulation results and demonstrate 10–18 dB improvement of the desired link gain and 10–30 dB reduction in the crosstalk from 55 to 61 GHz. The demonstrated transmission loss of the desired link at a distance of 17 mm is only 15 dB, which is over 10 dB better than the previously reported work. <s> BIB005 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> VII. CHALLENGES AND PROSPECTS <s> A recently developed computational imaging technique, X-ray ptychographic tomography, is used to study integrated circuits, and a 3D image of a processor chip with a resolution of 14.6 nm is obtained. <s> BIB006 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> VII. CHALLENGES AND PROSPECTS <s> Ubiquitous multicore processors nowadays rely on an integrated packet-switched network for cores to exchange and share data. The performance of these intra-chip networks is a key determinant of the processor speed and, at high core counts, becomes an important bottleneck due to scalability issues. To address this, several works propose the use of mm-wave wireless interconnects for intra-chip communication and demonstrate that, thanks to their low-latency broadcast and system-level flexibility, this new paradigm could break the scalability barriers of current multicore architectures. However, these same works assume 10+ Gb/s speeds and efficiencies close to 1 pJ/bit without a proper understanding of the wireless intra-chip channel. This paper first demonstrates that such assumptions do not hold in the context of commercial chips by evaluating losses and dispersion in them. Then, we leverage the system's monolithic nature to engineer the channel, that is, to optimize its frequency response by carefully choosing the chip package dimensions.
Finally, we exploit the static nature of the channel to adapt to it, pushing efficiency-speed limits with simple tweaks at the physical layer. Our methods reduce the path loss and delay spread of a simulated commercial chip by 47 dB and 7.3x, respectively, enabling intra-chip wireless communications over 10 Gb/s and only 3.1 dB away from the dispersion-free case. <s> BIB007 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> VII. CHALLENGES AND PROSPECTS <s> Here, I present recent device and system results on monolithic electronic-photonic platforms developed in partially-depleted SOI CMOS, in which photonic functions are implemented with 'zero change' to the fabrication process, and solely by way of design. This platform enables the integration of photonic components, analog and digital circuits, all on a single chip, to achieve the performance and scalability needed for optical interconnects with Terabits per second data rates for high performance computing and data center applications. The resonance-based transmitters and receivers enabled by on-chip mixed-signal resonance stabilization circuits, along with very small electrical parasitics, offer high bandwidth densities and sub-pJ/bit on-chip link energy consumption to achieve Tb/s-scale optical interconnects through WDM systems. <s> BIB008 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> VII. CHALLENGES AND PROSPECTS <s> Optical wireless (OW) links have been recently proposed as an interconnection technology for multiple processing cores operating in parallel on the same chip. OW communication is also a mature option for indoor and outdoor applications. Design and analysis of networks with optical wireless links require a careful investigation of cross-link interference, which plays a key role in the performance and efficiency of systems that reuse the same channel for multiple parallel transmissions. In this paper we analyze the bit-error rate performance of OW links for on-chip applications with cross-link cochannel interference. As a novelty with respect to known literature on crosstalk in optical communications, we consider asynchronous data transmission and address the system performance in case of heavy interference. Analytical methods are used to derive error probabilities as a function of signal-to-noise ratio (SNR), crosstalk power ratio, detection threshold, and pulse shaping. Both exact and tight approximation methods are considered. As shown in the results, robustness against interference increases with asynchronous transmission, RZ pulse shaping, and suitable design of the detection threshold. It is also shown how the proposed analysis can be used to evaluate the reuse distance between two parallel links simultaneously transmitting in the same direction. <s> BIB009 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> VII. CHALLENGES AND PROSPECTS <s> Plasmonic nanoantennas integrated with silicon waveguides are a suitable solution for the implementation of on-chip wireless communications at optical frequencies. The use of optical wireless links simplifies on-chip network design, mitigating switching and routing issues, while avoiding electro-optical conversion.
In this work, we investigate the performance of multiple parallel on-chip optical interconnections by taking into account cross-link interference, which arises when the links reuse the same optical frequency. This analysis combines two approaches: FDTD simulation to evaluate both the radiation diagram of the antennas used in the optical links and the near-field coupling between neighboring transmit and receive antennas, and system analysis to evaluate interference effects on link error probability. The results obtained will enable us to design the distances among parallel interconnections in order to preserve an acceptable bit error probability. <s> BIB010 </s> Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics <s> VII. CHALLENGES AND PROSPECTS <s> Wireless Network-on-Chip (WNoC) has emerged as a promising alternative to conventional interconnect fabrics at the chip scale. Since WNoCs may imply the close integration of antennas, one of the salient challenges in this scenario is the management of coupling and interference. This paper, instead of combating coupling, aims to take advantage of close integration to create arrays within a WNoC. The proposed solution is opportunistic as it attempts to exploit the existing infrastructure to build a simple reconfigurable beamforming scheme. Full-wave simulations show that, despite the effects of lossy silicon and nearby antennas, within-package arrays achieve moderate gains and beamwidths below 90°, a figure which is already relevant in the multiprocessor context. <s> BIB011
In this section, we discuss the main challenges and prospects related to the channel characterization of wireless chip-scale networks. The first outstanding challenge refers to the availability of affordable methods for probing chip-scale environments to measure chip-scale channels. Experimental results are limited to the lower part of the spectrum and to planar antennas in open chip configurations, which are easier to manufacture and access, with few notable exceptions BIB005 . Testbeds for the measurement of THz and optical channels, or of channels within flip-chip packages, are not publicly available at the time of this writing, thus rendering the existing simulation campaigns incomplete. At the mmWave bands, one of the main impairments for communication is the presence of a thick layer of lossy silicon. Unacceptable attenuation of over 50 dB has been measured even for distances of a few centimeters. To address this issue, the research community will need to leverage the monolithic nature of the system beyond existing works that optimize the package dimensions as functions of path loss or delay spread objectives BIB007 , introduce additional layers of undoped silicon BIB003 or metasurface-like absorbers BIB004 , or directly add a custom dedicated channel for wireless propagation . Another way to combat losses is to increase the directivity, even if that means losing the broadcast advantage. In this respect, antenna arrays are prohibitive due to the relatively large wavelength, unless opportunistic solutions are found BIB011 . Compact directive antennas such as planar log-periodic antennas have also been proposed BIB002 , but with limited improvement in directivity. Related to this aspect, the modeling of the interference between directional antennas remains relatively unexplored in the mmWave band. As we approach the THz band, channel characterization via measurements becomes challenging due to the scarcity of dedicated testbeds for THz wireless communications. While the field has evolved greatly since the very first THz communication testbed at 300 GHz BIB001 to the state-of-the-art platforms at 1 THz [108], the latest of which can transmit user-defined bits in custom frame structures with a myriad of modulations over a 40 GHz bandwidth, the overall size of the transmitter and receiver makes on-chip channel characterization very challenging. Even though test chips and equipment for THz imaging can be partially re-used for wireless channel sounding, the usable bandwidth in this case is relatively small. More critically, considering the size of a THz spectroscopy platform, the sub-millimeter wavelength, and the propagation distances in the chip environment, near-field effects become rather challenging to measure and to separate from the antenna responses. Furthermore, as the effective area of the antenna diminishes, finding the THz signal above the noise floor becomes difficult, especially in a challenging environment such as the chip. In light of these issues, computational methods are required. However, since the chip scale is relatively large in terms of THz wavelengths, full-wave solving can be very computationally intensive, thereby highlighting the need for robust ray tracing methods that account for the particularities of the scenario. One problem worth exploring, for instance, is the potential leakage of THz signals through the space between the solder balls that connect the chips with the outside world.
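To make the link-budget argument concrete, the following Python sketch estimates intra-chip path loss as free-space spreading plus a bulk absorption term. It is a minimal illustration only: the Friis spreading formula is standard, but the silicon absorption constant is an assumed placeholder, not a value measured in the works surveyed here.

```python
import math

def intra_chip_path_loss_db(distance_m, freq_hz, si_alpha_db_per_cm=8.0):
    """Friis free-space spreading loss plus an assumed bulk-silicon
    absorption term (si_alpha_db_per_cm is an illustrative placeholder,
    not a measured value from the cited works)."""
    wavelength = 3e8 / freq_hz
    fspl_db = 20 * math.log10(4 * math.pi * distance_m / wavelength)
    absorption_db = si_alpha_db_per_cm * distance_m * 100  # metres -> cm
    return fspl_db + absorption_db

# Example: under these assumptions, a 60 GHz link over 2 cm of lossy
# silicon already approaches the ~50 dB attenuation regime noted above.
print(f"{intra_chip_path_loss_db(0.02, 60e9):.1f} dB")
```

Even this crude split shows why package engineering pays off: the absorption term grows linearly with distance and quickly dominates the logarithmic spreading loss.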
This could be seen not only as a transmission efficiency loss, but also as a security threat, as nearby attackers could eavesdrop or inject signals. At optical frequencies, measurements of the transmission channel may be possible thanks to recent advances in the fabrication and integration of optical devices with light sources at the chip scale BIB008 . Although these works target optical NoCs based on integrated waveguides and ring resonators, the coupling of such devices with optical antennas is feasible. Beyond the feasibility of experimental setups, however, the modeling of the environment is an open challenge in the characterization of the optical wireless channel. The antennas are located within the insulator, close to the rest of the metallization layers that make up the processor circuits. Even assuming a clear line of sight between the antennas and directional radiation, part of the antenna energy will be beamed in slightly tilted directions and impinge on the metallization layers. At mmWave/THz frequencies, the radiation wavelength is larger than the pitch of the metallization layers and, thus, they can be modeled as solid blocking elements. At optical frequencies, though, the radiation wavelength is commensurate with the pitch of the metallization layers and, therefore, scattering/diffraction phenomena are bound to occur. The main challenge here is how to model the maze formed by the wires routed through the different layers. Post-layout simulations of the processor chip, if such information is available, or X-ray imaging of actual fabricated chips BIB006 can provide inputs for such a model. Although this simulation method does not scale well due to the computational cost of recreating such an environment at micrometer granularity, the results at very short distances can be useful when incorporated into interference models that currently do not evaluate scattering/diffraction phenomena BIB009 , BIB010 .
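A rough first-pass aid for this modeling decision is to compare the radiation wavelength against the metallization pitch, as in the sketch below. The regime thresholds are illustrative assumptions of our own, not values from the surveyed works, and the wire pitch is a hypothetical example figure.

```python
def metallization_model(freq_hz, pitch_m):
    """Suggest how metallization layers might be treated given the ratio
    of radiation wavelength to wire pitch. The thresholds (10x and 0.1x)
    are illustrative assumptions, not values from the surveyed works."""
    ratio = (3e8 / freq_hz) / pitch_m
    if ratio > 10:
        return "solid blocking element (wavelength >> pitch)"
    if ratio < 0.1:
        return "geometric/ray optics (wavelength << pitch)"
    return "explicit scattering/diffraction modeling required"

# 60 GHz (mmWave) vs. an assumed ~1 um wire pitch: ratio ~ 5000
print(metallization_model(60e9, 1e-6))
# 193 THz (1550 nm optical) vs. the same pitch: ratio ~ 1.55
print(metallization_model(193e12, 1e-6))
```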
A Survey of Agent-Based Modeling of Hospital Environments <s> I. INTRODUCTION <s> Agent-based modeling is a powerful simulation modeling technique that has seen a number of applications in the last few years, including applications to real-world business problems. After the basic principles of agent-based simulation are briefly introduced, its four areas of application are discussed by using real-world applications: flow simulation, organizational simulation, market simulation, and diffusion simulation. For each category, one or several business applications are described and analyzed. <s> BIB001 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> I. INTRODUCTION <s> Computer simulation methods have enjoyed widespread use in healthcare system investigation and improvement. Most reported applications use discrete event simulation, though there are also many reports of the use of system dynamics. There are few reports of the use of agent-based simulations (ABS). This is curious, because healthcare systems are based on human interactions and the ability of ABS to represent human intention and interaction makes it an appealing approach. Tools exist to support both conceptual modelling and model implementation in ABS and these are illustrated with a simple example from an emergency department. <s> BIB002 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> I. INTRODUCTION <s> Complex systems abound in public health. Complex systems are made up of heterogeneous elements that interact with one another, have emergent properties that are not explained by understanding the individual elements of the system, persist over time, and adapt to changing circumstances. Public health is starting to use results from systems science studies to shape practice and policy, for example in preparing for global pandemics. However, systems science study designs and analytic methods remain underutilized and are not widely featured in public health curricula or training. In this review we present an argument for the utility of systems science methods in public health, introduce three important systems science methods (system dynamics, network analysis, and agent-based modeling), and provide three case studies in which these methods have been used to answer important public health science questions in the areas of infectious disease, tobacco control, and obesity. <s> BIB003
This paper surveys the application of agent-based modeling (ABM) and simulation of complex social dynamics within the institutional scale of a hospital. Hospitals are a promising area of continued ABM research with the concomitant potential for substantive outcomes. Healthcare around the world deals with a perennial pressure to find cost efficiencies, and target areas include optimizing healthcare processes and flow, reducing emergency department (ED) wait times and length of stay, and reducing admission times. Within these areas, hospitals rely on the experience of practitioners for improvements in triage procedures, diverting low-acuity patients, reconfiguring the healthcare staffing model, and reorganizing operational units both physically and procedurally. Simulation offers the potential to identify improvements and new understandings of how a facility operates. Simulation can model real-world variability, lessen the testing and implementation costs of planned changes, and help to minimize the risk of errors in implementing changes. Tracking patients through their stay in the hospital using technologies such as radio frequency identification (RFID) and improved electronic reporting and dashboards is one example of an initiative that can be integrated with simulation studies to generate valuable information on social dynamics within the institution. In general, agent-based modeling is 'bottom-up' systems modeling from the perspective of constituent parts. Systems studied are modeled as a collection of agents (in social systems, most often people) imbued with properties: characteristics, behaviours (actions), and interactions that attempt to capture actual properties of individuals with a high degree of diversity and fidelity. In the most general context, agents are both adaptive and autonomous entities who are able to assess their situation, make decisions, compete or cooperate with one another on the basis of a set of rules, and adapt future behaviours on the basis of past interactions. Agent properties are determined by the modeler and are ideally derived from actual data that reasonably describe agents' behaviours, i.e. their movements and their interactions with other agents. The emergence of a data culture, also called 'big data' and the associated 'big data analytics', offers new opportunities to use real-world data, even in near real time, as inputs into ABMs. The modeler's task is to determine which data sources best govern agent profiles in a given ABM institutional simulation. The foundational premise and the conceptual depth of ABM is that simple rules of individual behaviour will aggregate to illuminate complex and/or emergent group-level phenomena that are not specifically encoded by the modeler and that cannot be predicted or explained by the agent-level rules. In essence, ABM has the potential to reveal a whole that is greater than the sum of its parts BIB001 , . ABMs provide a natural description of a system that can be calibrated and validated by representative expert agents, and are flexible enough to be tuned to high degrees of sensitivity in agent behaviours and interactions. As such, they play a vital role as an information translation vehicle. The lexicon used to develop an ABM is the lexicon of area experts and of the institution under consideration (e.g. a hospital), reflecting the world in as real and specific a manner as possible.
In essence, one builds a laboratory where the behaviours of individuals are similar to those in a real-world emergency department, and then one observes what happens when the rules of behaviour and interaction are changed. The underlying ABM engine may be quite complex and utilize the most advanced processing and hardware techniques available, but this level of detail is not required in developing the model or in analyzing its output. Although simulation and modeling in healthcare facilities is not new, agent-based modeling within these settings is a relative newcomer. This survey paper focusses on hospital ABMs, which take an agent-centric approach as opposed to more established areas of simulation, which tend to be process oriented. The key differences between modeling techniques such as discrete event simulation, system dynamics, network analysis, and ABM are well documented, and to date the majority of research in healthcare simulation has utilized Monte Carlo, discrete event simulation (DES), and system dynamics rather than ABMs - BIB003 . Yet ABMs are considered a very promising and complementary technique by which to simulate hospital dynamics, although their more widespread use within healthcare will depend on more widely adopted and more effective conceptualization and implementation tools BIB002 . Some researchers claim that the ''signature'' success of ABMs in public health is in the study of epidemics and infectious disease dynamics BIB003 , , where the successes of ABMs have demonstrated the importance of the role of social networks, human movement patterns, transportation systems, and the disease dynamic itself. This large body of research applying ABMs to the study of large-scale infectious disease spread (e.g. influenza, STIs) is not addressed here. ABMs applied to institution-scale environments (rather than regional scales) are nonetheless emerging as an excellent vehicle for modeling hospitals due to their inherent ability to leverage social network analysis, in a manner similar to the social interactions of large-scale infectious disease spread. The remainder of this paper is organized as follows. Section II surveys the application of ABMs to hospital and similar institutional settings. Section III discusses data sources that may be useful in extending the models more fully. Section IV provides reference examples that encompass many of the phenotypes of a typical hospital-centric ABM. Section V provides a summary.
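To illustrate the premise that simple agent-level rules can produce emergent aggregate behaviour, consider the following toy Python sketch. It is not any of the surveyed models: the grid size, movement rule, and transmission probability are arbitrary assumptions chosen only to show the mechanics of an ABM loop.

```python
import random

def simulate_ward(n_agents=50, ticks=200, p_transmit=0.1, seed=0):
    """Toy ABM: agents take random steps on a 10x10 grid (movement rule)
    and infection passes between co-located agents with probability
    p_transmit (interaction rule). The aggregate attack rate is emergent:
    it is encoded in no single rule. All parameters are illustrative."""
    random.seed(seed)
    pos = [(random.randrange(10), random.randrange(10)) for _ in range(n_agents)]
    infected = [i == 0 for i in range(n_agents)]  # one index case
    for _ in range(ticks):
        # each agent moves one random step, wrapping at the grid edges
        pos = [((x + random.choice((-1, 0, 1))) % 10,
                (y + random.choice((-1, 0, 1))) % 10) for x, y in pos]
        # transmission occurs only on shared cells
        for i in range(n_agents):
            for j in range(n_agents):
                if infected[i] and not infected[j] and pos[i] == pos[j] \
                        and random.random() < p_transmit:
                    infected[j] = True
    return sum(infected) / n_agents

print(f"emergent attack rate: {simulate_ward():.0%}")
```

Changing a "rule of behaviour" here (the step size, the transmission probability) and re-running is exactly the laboratory experiment described above, scaled down to a few lines.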
A Survey of Agent-Based Modeling of Hospital Environments <s> II. ABMS WITHIN HOSPITALS <s> Background and objectives: Infectious diseases and antimicrobial-resistant microorganisms are a growing problem for the dialysis population. The frequency of patient visits and intimate, prolonged physical contact with the inanimate environment during dialysis treatments make these facilities potentially efficient venues for nosocomial pathogen transmission. Isolation measures and infection control practices can be inconvenient and consume limited resources. Quantitative tools for analyzing the effects of different containment strategies can help to identify optimal strategies for further study. However, spatial and temporal considerations germane to the dialysis unit greatly complicate analyses relying on conventional mathematical approaches. Design, setting, participants, & measurements: A stochastic, individual-based, Monte Carlo simulation tool that predicts the effects of various infection control strategies on pathogen dissemination through the dialysis unit in the face of diagnostic uncertainty was developed. The model was configured to emulate a medium-sized dialysis unit. The predicted consequences of various policies for scheduling patients who were suspected of being infectious were then explored, using literature-based estimates of pathogen transmissibility, prevalence, and diagnostic uncertainty. Results: Environmental decontamination was predicted to be of paramount importance in limiting pathogen dissemination. Temporal segregation (scheduling patients who were suspected of being infectious to dialysis shifts that are later in the day) was predicted to have the greatest effectiveness in reducing transmission, given adequate environmental decontamination between successive days. Conclusions: Decontamination of the patient’s environment (chair) can markedly attenuate pathogen dissemination. Temporal segregation could be a simple, low-cost, system-level intervention with significant potential to reduce nosocomial transmission in the dialysis unit. <s> BIB001 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> II. ABMS WITHIN HOSPITALS <s> Emergency department overcrowding is a problem that threatens the public health of communities and compromises the quality of care given to individual patients. The Institute of Medicine recommends that hospitals employ information technology and operations research methods to reduce overcrowding. This paper describes the development of an agent based simulation tool that has been designed to evaluate the impact of various physician staffing configurations on patient waiting times in the emergency department. We evaluate the feasibility of this tool at a single hospital emergency department. <s> BIB002 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> II. ABMS WITHIN HOSPITALS <s> The objective of this paper was to develop an agent based modeling framework in order to simulate the spread of influenza virus infection on a layout based on a representative hospital emergency department in Winnipeg, Canada. In doing so, the study complements mathematical modeling techniques for disease spread, as well as modeling applications focused on the spread of antibiotic-resistant nosocomial infections in hospitals. Twenty different emergency department scenarios were simulated, with further simulation of four infection control strategies. 
The agent based modeling approach represents systems modeling, in which the emergency department was modeled as a collection of agents (patients and healthcare workers) and their individual characteristics, behaviors, and interactions. The framework was coded in C++ using Qt4 libraries running under the Linux operating system. A simple ordinary least squares (OLS) regression was used to analyze the data, in which the percentage of patients that became infected in one day within the simulation was the dependent variable. The results suggest that within the given instance context, patient-oriented infection control policies (alternate treatment streams, masking symptomatic patients) tend to have a larger effect than policies that target healthcare workers. The agent-based modeling framework is a flexible tool that can be made to reflect any given environment; it is also a decision support tool for practitioners and policymakers to assess the relative impact of infection control strategies. The framework illuminates scenarios worthy of further investigation, as well as counterintuitive findings. <s> BIB003
Agent-based modeling has seen tremendous growth in many areas over the past 15 years, and more recently one of these areas has been hospital and healthcare settings. The primary applications of ABMs to hospital environments are examining patient flow (e.g. in emergency departments) BIB002 and other hospital operational issues, and examining the dynamics of infection spread within a hospital (e.g. the hospital's role in an influenza epidemic BIB003 and the dynamics of nosocomial infection spread BIB001 ). ABMs in healthcare have also examined economic models of healthcare, removed from the scale of the patient itself; these models are not surveyed here.
A Survey of Agent-Based Modeling of Hospital Environments <s> A. SYSTEM ATTRIBUTES <s> We developed a model of pathogen dissemination in the outpatient clinic that incorporates key kinetic aspects of the transmission process, as well as uncertainty regarding whether or not each incident patient is contagious. Assigning appointments late in the day to patients suspected of being infectious should decrease pathogen dissemination. <s> BIB001 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> A. SYSTEM ATTRIBUTES <s> Computer simulation methods have enjoyed widespread use in healthcare system investigation and improvement. Most reported applications use discrete event simulation, though there are also many reports of the use of system dynamics. There are few reports of the use of agent-based simulations (ABS). This is curious, because healthcare systems are based on human interactions and the ability of ABS to represent human intention and interaction makes it an appealing approach. Tools exist to support both conceptual modelling and model implementation in ABS and these are illustrated with a simple example from an emergency department. <s> BIB002 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> A. SYSTEM ATTRIBUTES <s> Agent-based modeling can illuminate how complex marketing phenomena emerge from simple decision rules. Marketing phenomena that are too complex for conventional analytical or empirical approaches can often be modeled using this approach. Agent-based modeling investigates aggregate phenomena by simulating the behavior of individual “agents,” such as consumers or organizations. Some useful examples of agent-based modeling have been published in marketing journals, but widespread acceptance of the agent-based modeling method and publication of this method in the highest-level marketing journals have been slowed by the lack of widely accepted standards of how to do agent-based modeling rigorously. We address this need by proposing guidelines for rigorous agent-based modeling. We demonstrate these guidelines, and the value of agent-based modeling for marketing research, through the use of an example. We use an agent-based modeling approach to replicate the Bass model of the diffusion of innovations, illustrating the use of the proposed guidelines to ensure the rigor of the analysis. We also show how extensions of the Bass model that would be difficult to carry out using traditional marketing research techniques are possible to implement using a rigorous agent-based approach. <s> BIB003
When designing an ABM for hospital applications, there are choices in system attributes that become design decisions unique to the context and objectives of the model. An ABM is inherently agent-centric, and the model arises from the consideration and definition of the agent's environment, the agent's characteristics, and the agent's interactions with other agents.
• Commercial / Homegrown: At present, a large number of ABMs are developed as one-offs or custom models dedicated to the objective at hand. These offer advantages associated with data fusion and acceleration through multicore, cluster, and high performance computing (HPC) optimization, as well as general-purpose computation on graphics processing units (GP-GPU). The disadvantage is the considerable overhead of developing one's own code, inclusive of code verification. The benefits of a commercial platform are a proven code base and a user community. Just as with many other areas where simulation plays a crucial role in product development, the benefits of a commercial product eventually outweigh the advantages of a homegrown solution. There are, however, intermediary code bases that are typically open and community supported. These are usually verified to some degree, but usually not to the degree of a commercial offering. All forms of ABM development have associated learning curves. The largest and most popular commercial ABM offering is that within AnyLogic (anylogic.com). Open-source ABM frameworks include Repast (http://repast.sourceforge.net/), NetLogo (http://ccl.northwestern.edu/netlogo/), and Swarm (swarm.org).
• Environment:
  • The topography or layout upon which agents operate is an initial decision in ABM development. Environments can be real-world, synthesized, or abstracted. A real-world environment can be captured from hospital floor plans, while synthesized environments can be generated by the modeler with simplifications or assumptions compared to real floor plans. The environment can also be abstracted entirely as a data point in the overall model, assigning the agent to discrete non-physical locations within the computer code. However, a strong benefit of ABM is to allow for real-world environments, which enhance the validity and credibility of the model, ease the interpretation of simulation results, and assist in knowledge transfer.
  • Most ABM simulation suites include some means of visualizing the agent within the environment, and this benefit of ABM over other modeling techniques has been accentuated by the affordability and accessibility of high performance desktop computing and graphical processing. Visualization of specific instances of the process allows verification of the model setup, the simulation in progress, and the simulation results. Where a simulation requires a very large number of iterations to generate meaningful findings, the visualization methods are halted while data accumulate.
• Agents:
  • The selection of agents is a foundational task of the ABM developer. In most hospital ABMs, the logical selection of agents includes patients and hospital staff members. Basic ABMs for hospital EDs may only include patients, nurses, and physicians BIB002 , while more detailed ABMs include allied healthcare providers who also consult within a hospital, potentially reaching as far as including visitors and facility personnel not directly involved in healthcare delivery (e.g. maintenance staff).
  Furthermore, an explicit decision should be made to include or exclude inanimate objects as agents within a hospital ABM. Where the ABM is developed to model infection spread (vs. patient flow), researchers have considered the role of equipment and hospital fixtures as vectors for infection BIB001 , including medical instruments, bed capacity, and allied areas relevant to the main ABM focus (e.g. diagnostic services within an ED ABM). Inanimate agents are modeled without explicit agency or any decision-making capability. Besides their role as vectors in infection spread, the availability and utilization strategies of inanimate agents (e.g. bed capacity, equipment availability) can also be illuminated via ABM.
  • The assignment of characteristics or profiles to the agents is another foundational task of the developer. The relevant factors for agent profiles are determined by the objective of the ABM and may include distributions of sex, age, and other demographic factors; physical origin and destination within and beyond the topography; and risk factors associated with, for example, infection spread. The power of ABM is accentuated within today's emerging big data culture, where the sources of real data for agent characteristics are numerous and varied. Data sets may or may not have been generated for the purpose at hand. Data sources may include hospital information systems, census data, government databases in the case of publicly-funded health systems (e.g. the Canadian Institute for Health Information), cellular service records that can be used to approximate the physical trajectories of smartphone users upon a topography, and even smartphone apps that are GPS-enabled. The developer must be aware of limitations and gaps within the data and how those limitations impact the veracity of the dataset for the ABM's objective. Pre-processing is generally required for a single dataset as well as for the consolidation of varied datasets. While data is often technically available, political barriers to accessing the data may exist. Real data is likely the area where ABMs within healthcare facilities will most fully evolve, as facilities install in-house systems to capture the data (e.g. patient flows) themselves, which will support the ability to fine-tune ABMs. Such systems may include electronic records and dashboards, as well as technologies such as RFID. In the case of RFID, both inanimate and animate agents can be tracked. A brief profile-sampling sketch follows this list.
  • The assignment of rules that govern the interactions between agents is the other foundational task of the ABM developer, in order to capture the processes within the ABM, i.e. the processes within the hospital. Here, the ABM's impact is evident in the natural inclusion of expert guidance to establish valid and reliable agent interaction rules, formulated directly in the lexicon of the hospital environment and in the real-world topography of the practitioners (e.g. nurses and physicians in the hospital). The role of real data in the assignment of agent behavioral rules is just as significant as in the assignment of agent characteristics or profiles.
• Interventions: Whether the hospital ABM was developed to examine patient flow, infection spread dynamics, or another purpose, the key objective in developing an ABM is to introduce policy changes or interventions (agent profile changes, agent behavioral changes, topography changes, or others) in order to investigate ''what if'' scenarios.
  In patient flow ABMs, interventions may include topography re-configurations of the ED or procedural reorganization such as low-priority patient diversions within and between hospitals. In an infection spread ABM, interventions may include agent hygiene behaviours and rules of contact.
• Validation & Verification: There are emerging guidelines addressing the importance of and techniques for validating ABMs BIB003 , including micro-face validation, macro-face validation, output validation, backcasting to known data, and comparison of output to other modeling methods.
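As a concrete illustration of the profile-assignment task referenced in the agent-characteristics bullet above, the following Python sketch draws one patient profile from empirical weights. The `demographics` dictionary, its keys, and all weight values are hypothetical stand-ins for pre-processed hospital or census data, not figures from any surveyed model.

```python
import random

def sample_patient_profile(demographics):
    """Draw one agent profile from empirical distributions. The
    `demographics` argument is a hypothetical stand-in for data pulled
    from a hospital information system or census tables."""
    return {
        "sex": random.choices(["F", "M"], demographics["sex_weights"])[0],
        "age_bin": random.choices(demographics["age_bins"],
                                  demographics["age_weights"])[0],
        "acuity": random.choices([1, 2, 3, 4, 5],
                                 demographics["acuity_weights"])[0],
    }

# Illustrative weights only -- real values would come from the
# pre-processed datasets discussed above.
demo = {"sex_weights": [0.51, 0.49],
        "age_bins": ["0-17", "18-64", "65+"],
        "age_weights": [0.20, 0.55, 0.25],
        "acuity_weights": [0.05, 0.15, 0.30, 0.30, 0.20]}
print(sample_patient_profile(demo))
```

Generating each agent this way keeps the population statistically faithful to the source data while still producing the individual-level diversity that ABM depends on.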
A Survey of Agent-Based Modeling of Hospital Environments <s> C. AGENT BASED MODELS FOR PATIENT FLOWS <s> OBJECTIVES: The objectives of this project were: (1) to evaluate the method, (2) to assess the information required for a more detailed model, and (3) to determine if it was worthwhile to undertake the data collection needed for a more detailed model. METHODS: A mathematical model was constructed using the operational research method of discrete event simulation. The effect of different SHO shift patterns on waiting time was assessed with the model. RESULTS: The model constructed was not an accurate representation of patient flow because of the large number of assumptions that had to be made in this preliminary model. However, the model predicted that an SHO shift pattern that more closely matched the patient arrival pattern would produce shorter waiting times. CONCLUSIONS: This method can be applied to an accident and emergency department. Extension of this approach with the collection of additional data and the development of more sophisticated models seems worthwhile. <s> BIB001 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> C. AGENT BASED MODELS FOR PATIENT FLOWS <s> With increasing healthcare costs, an aging population, and a shortage of trained personnel it is becoming increasingly important for hospital pharmacy management to make good operational decisions. In the case of hospital in-patient pharmacies, making decisions about staffing and work scheduling is difficult due to the complexity of the systems used and the variation in the orders to be filled. In order to help BroMenn Healthcare make decisions about staffing and work scheduling a simulation model was created to analyze the impact of alternate work schedules. The model estimates the effect of changes to staffing and work scheduling on the amount of time medication orders take to process. The goal is to use the simulation to help BroMenn find the best schedule to get medications to the patients as quickly as possible by using pharmacy staff effectively. <s> BIB002 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> C. AGENT BASED MODELS FOR PATIENT FLOWS <s> Emergency department overcrowding is a problem that threatens the public health of communities and compromises the quality of care given to individual patients. The Institute of Medicine recommends that hospitals employ information technology and operations research methods to reduce overcrowding. This paper describes the development of an agent based simulation tool that has been designed to evaluate the impact of various physician staffing configurations on patient waiting times in the emergency department. We evaluate the feasibility of this tool at a single hospital emergency department. <s> BIB003 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> C. AGENT BASED MODELS FOR PATIENT FLOWS <s> Healthcare is a complex adaptive system. This paper discusses healthcare in the context of complex systems architecture and an agent-based modeling framework. The paper demonstrates complications of healthcare system improvement and its impact on patient safety, economics and workloads. Further, an application of the safety dynamics model proposed by Cook and Rasmussen is explored using a hypothetical simulation of an emergency department.
By means of simulation, this paper demonstrates the nonlinear behaviors of a health service unit and its complexities, and how the safety dynamics model may be used to evaluate various aspects of healthcare. Further work is required to apply this concept in a 'real life environment' and to assess its consequences at the societal, organizational and operational levels of healthcare. <s> BIB004 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> C. AGENT BASED MODELS FOR PATIENT FLOWS <s> Modeling and simulation have been shown to be useful tools in many areas of healthcare operational management, a field in which there is probably no area more dynamic and complex than hospital emergency departments (ED). This paper presents the results of an ongoing project that is being carried out by the Research Group in Individual Oriented Modeling (IoM) of the University Autonoma of Barcelona (UAB) with the participation of the Hospital of Sabadell ED Staff Team. Its general objective is creating a simulator that, used as a decision support system (DSS), aids the heads of the ED in making the best-informed decisions possible. The defined ED model is a pure Agent-Based Model, formed entirely of the rules governing the behavior of the individual agents which populate the system. Two distinct types of agents have been identified: active and passive. Active agents represent human actors, while passive agents represent services and other reactive systems. The actions of agents and the communication between them will be represented using Moore state machines extended to include probabilistic transitions. The model also includes the environment in which agents move and interact. With the aim of verifying the proposed model, an initial simulation has been created using NetLogo, an agent-based simulation environment well suited for modeling complex systems. <s> BIB005 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> C. AGENT BASED MODELS FOR PATIENT FLOWS <s> This article presents an agent-based modeling and simulation to design a decision support system for a healthcare emergency department (ED) to aid in setting up management guidelines to improve it. This ongoing research is being performed by the Research Group in Individual Oriented Modeling at the Universitat Autonoma de Barcelona with close collaboration of the hospital staff team of Sabadell. The objective of the proposed procedure is to optimize the performance of such complex and dynamic healthcare EDs, which are overcrowded. Exhaustive search optimization is used to find the optimal ED staff configuration, which includes doctors, triage nurses, and admission personnel, i.e., a multi-dimensional and multi-objective problem. An index is proposed to minimize patient stay time in the ED. The model is implemented using NetLogo. The results obtained by using alternative Monte Carlo and Pipeline schemes are promising. The impact of these schemes on reducing the computational resources used is described. <s> BIB006 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> C. AGENT BASED MODELS FOR PATIENT FLOWS <s> The increasing demand for urgent care, the overcrowding of hospital emergency departments (ED), and limited economic resources are phenomena shared by health systems around the world. It is estimated that up to 50% of patients attended in the ED have non-complex conditions that could be resolved in ambulatory care services.
The diversion of less complex cases from the ED to other health care services seems an essential measure to properly allocate the demand for care between the different care units. This paper presents the results of an experiment carried out with the objective of analyzing the effects on the ED (patients' length of stay, the number of patients attended, and the level of activity of ED staff) of different diversion policies. The experiment has been done with data from the Hospital of Sabadell (a big hospital, one of the most important in Catalonia, Spain), making use of an agent-based model and simulation formed entirely of the rules governing the behaviour of the individual agents which populate the ED and, due to the great amount of data to be computed, using High Performance Computing. <s> BIB007 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> C. AGENT BASED MODELS FOR PATIENT FLOWS <s> Triage is a process of assessing patients' severity based on a triage acuity scale in a hospital emergency department (ED). Re-triage is a process where the severity of a patient's condition is reassessed when there is a clinical need for it. Re-triage does not feature in conventional triage, where patients considered non-urgent will have to wait to be treated on a first-come, first-served basis. In this study, we investigate the effect of re-triage on patients' waiting time and on the ED service by means of agent-based modelling and simulation. The simulation is based on historical records of patients presenting to the ED of Hospital USM in the year 2011. The result of the simulation shows that the implementation of re-triage in the conventional three-level triage system can significantly reduce the waiting time of patients with deteriorating clinical conditions, with a slight increase in the demand for ED service due to the re-triage activity. <s> BIB008 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> C. AGENT BASED MODELS FOR PATIENT FLOWS <s> Computer simulation studies of the emergency department (ED) are often patient driven and consider the physician as a human resource whose primary activity is interacting directly with the patient. In many EDs, physicians supervise delegates such as residents, physician assistants and nurse practitioners, each with different skill sets and levels of independence. The purpose of this study is to present an alternative approach where physicians and their delegates in the ED are modeled as interacting pseudo-agents in a discrete event simulation (DES) and to compare it with the traditional approach ignoring such interactions. The new approach models a hierarchy of heterogeneous interacting pseudo-agents in a DES, where pseudo-agents are entities with embedded decision logic. The pseudo-agents represent a physician and delegate, where the physician plays a senior role to the delegate (i.e. treats high acuity patients and acts as a consult for the delegate). A simple model without the complexity of the ED is first created in order to validate the building blocks (programming) used to create the pseudo-agents and their interaction (i.e. consultation). Following validation, the new approach is implemented in an ED model using data from an Ontario hospital. Outputs from this model are compared with outputs from the ED model without the interacting pseudo-agents. They are compared based on physician and delegate utilization, patient waiting time for treatment, and average length of stay.
Additionally, we conduct sensitivity analyses on key parameters in the model. In the hospital ED model, comparisons between the approach with interaction and without showed physician utilization increasing from 23% to 41% and delegate utilization increasing from 56% to 71%. Results show statistically significant mean time differences for low-acuity patients between models. Interaction time between physician and delegate results in increased ED length of stay and longer waits for beds. This example shows the importance of accurately modeling physician relationships and the roles in which they treat patients. Neglecting these relationships could lead to inefficient resource allocation due to inaccurate estimates of physician and delegate time spent on patient-related activities and length of stay. <s> BIB009
An evolving literature exists on applying ABMs, alone or in complement to other techniques, to the operations of EDs. In general, this literature addresses system-level performance dynamics, quantified in terms of patient safety BIB004 , economic indicators BIB004 , , staff workload and scheduling BIB003 , BIB001 , BIB002 , and patient flows. While much of the literature addresses system-level operational concerns during periods of typical operation or stasis, there is also a literature on modeling healthcare operations during critical incidents such as disease outbreaks and terrorist attacks . More recently, others have modeled improvements to patient flow using an ABM running on a High Performance Computing resource BIB007 . The ABM was built with NetLogo and is representative of the role for which an ABM is well suited. The objective of the study was to quantify the impact of alternative policies for patient diversion. Not unexpectedly, the results indicated that fast tracking or diverting patients who do not require urgent attention improves the capacity of the ED and reduces the length of stay of the patients that remain in the ED. More extensive considerations of ABMs for patient flow in EDs have been developed by the same researchers , BIB006 , including the utilization of an ABM within a decision support system for EDs BIB005 . A contrasting technique to model patient flows would be DES BIB001 . In similar work, the role of re-triage in improving ED patient flow is examined BIB008 . The results are not unexpected and lend additional credibility to the use of ABMs in healthcare modeling by facilitating the modeling of ''what if'' scenarios. In other work, a pseudo-agent-based approach is introduced into a DES in an attempt to capture the representative strengths of each modeling approach for simulating an emergency department BIB009 . That work illustrates the importance of interaction at the agent level, which is not typically captured with DES. Fig. 1 illustrates where modeling nosocomial infections, or hospital-acquired infections (HAIs), would be characterized within the range of healthcare models.
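The flavor of the diversion ''what if'' experiments surveyed above can be conveyed with a toy simulation like the one below. It is a deliberately minimal Python sketch, not the NetLogo model from BIB007: the arrival rate, treatment capacity, and five-level acuity scale are illustrative assumptions rather than calibrated hospital data.

```python
import random, statistics

def simulate_ed(divert_low_acuity, ticks=1000, capacity=3, seed=1):
    """Toy 'what if' run comparing ED length of stay with and without
    diverting non-urgent (acuity-5) arrivals. Arrival and capacity
    figures are illustrative assumptions, not calibrated hospital data."""
    random.seed(seed)
    queue, length_of_stay = [], []
    for _ in range(ticks):
        for _ in range(random.randint(0, 4)):          # new arrivals
            acuity = random.randint(1, 5)
            if divert_low_acuity and acuity == 5:
                continue                               # sent to ambulatory care
            queue.append([acuity, 0])                  # [acuity, ticks waited]
        queue.sort(key=lambda p: p[0])                 # treat most acute first
        length_of_stay += [p[1] for p in queue[:capacity]]
        queue = queue[capacity:]
        for p in queue:
            p[1] += 1                                  # everyone else waits
    return statistics.mean(length_of_stay)

print("baseline mean LOS:", simulate_ed(divert_low_acuity=False))
print("with diversion:  ", simulate_ed(divert_low_acuity=True))
```

Running both policies against the same random seed gives a paired comparison, mirroring (in miniature) how the surveyed studies contrast a baseline ED against an intervention scenario.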
A Survey of Agent-Based Modeling of Hospital Environments <s> D. AGENT BASED MODELS FOR NOSOCOMIAL INFECTIONS <s> To investigate the transmission of influenza viruses via hands and environmental surfaces, the survival of laboratory-grown influenza A and influenza B viruses on various surfaces was studied. Both influenza A and B viruses survived for 24-48 hr on hard, nonporous surfaces such as stainless steel and plastic but survived for less than 8-12 hr on cloth, paper, and tissues. Measurable quantities of influenza A virus were transferred from stainless steel surfaces to hands for 24 hr and from tissues to hands for up to 15 min. Virus survived on hands for up to 5 min after transfer from the environmental surfaces. These observations suggest that the transmission of virus from donors who are shedding large amounts could occur for 2-8 hr via stainless steel surfaces and for a few minutes via paper tissues. Thus, under conditions of heavy environmental contamination, the transmission of influenza virus via fomites may be possible. <s> BIB001 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> D. AGENT BASED MODELS FOR NOSOCOMIAL INFECTIONS <s> Objective: To develop and disseminate a spatially explicit model of contact transmission of pathogens in the intensive care unit. Design: A model simulating the spread of a pathogen transmitted by direct contact (such as methicillin-resistant Staphylococcus aureus or vancomycin-resistant Enterococcus) <s> BIB002 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> D. AGENT BASED MODELS FOR NOSOCOMIAL INFECTIONS <s> Background and objectives: Infectious diseases and antimicrobial-resistant microorganisms are a growing problem for the dialysis population. The frequency of patient visits and intimate, prolonged physical contact with the inanimate environment during dialysis treatments make these facilities potentially efficient venues for nosocomial pathogen transmission. Isolation measures and infection control practices can be inconvenient and consume limited resources. Quantitative tools for analyzing the effects of different containment strategies can help to identify optimal strategies for further study. However, spatial and temporal considerations germane to the dialysis unit greatly complicate analyses relying on conventional mathematical approaches. Design, setting, participants, & measurements: A stochastic, individual-based, Monte Carlo simulation tool that predicts the effects of various infection control strategies on pathogen dissemination through the dialysis unit in the face of diagnostic uncertainty was developed. The model was configured to emulate a medium-sized dialysis unit. The predicted consequences of various policies for scheduling patients who were suspected of being infectious were then explored, using literature-based estimates of pathogen transmissibility, prevalence, and diagnostic uncertainty. Results: Environmental decontamination was predicted to be of paramount importance in limiting pathogen dissemination. Temporal segregation (scheduling patients who were suspected of being infectious to dialysis shifts that are later in the day) was predicted to have the greatest effectiveness in reducing transmission, given adequate environmental decontamination between successive days. Conclusions: Decontamination of the patient's environment (chair) can markedly attenuate pathogen dissemination.
Temporal segregation could be a simple, low-cost, system-level intervention with significant potential to reduce nosocomial transmission in the dialysis unit. <s> BIB003 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> D. AGENT BASED MODELS FOR NOSOCOMIAL INFECTIONS <s> Infections caused by antibiotic-resistant pathogens are a global public health problem. Numerous individual- and population-level factors contribute to the emergence and spread of these pathogens. An individual-based model (IBM), formulated as a system of stochastically determined events, was developed to describe the complexities of the transmission dynamics of antibiotic-resistant bacteria. To simplify the interpretation and application of the model's conclusions, a corresponding deterministic model was created, which describes the average behavior of the IBM over a large number of simulations. The integration of these two model systems provides a quantitative analysis of the emergence and spread of antibiotic-resistant bacteria, and demonstrates that early initiation of treatment and minimization of its duration mitigates antibiotic resistance epidemics in hospitals. <s> BIB004 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> D. AGENT BASED MODELS FOR NOSOCOMIAL INFECTIONS <s> Background Annual influenza vaccination of institutional health care workers (HCWs) is advised in most Western countries, but adherence to this recommendation is generally low. Although protective effects of this intervention for nursing home patients have been demonstrated in some clinical trials, the exact relationship between increased vaccine uptake among HCWs and protection of patients remains unknown owing to variations between study designs, settings, intensity of influenza seasons, and failure to control all effect modifiers. Therefore, we use a mathematical model to estimate the effects of HCW vaccination in different scenarios and to identify a herd immunity threshold in a nursing home department. <s> BIB005 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> D. AGENT BASED MODELS FOR NOSOCOMIAL INFECTIONS <s> Hospital patients who are colonised with methicillin-resistant Staphylococcus aureus (MRSA), may transmit the bacteria to other patients. An agent-based simulation is designed to determine how the problem might be managed and the risk of transmission reduced. Most MRSA modelling studies have applied mathematical compartmental models or Monte Carlo simulations. In the agent-based model, each patient is identified on admission as being colonised or not, has a projected length of stay and may be more or less susceptible to colonisation. Patient states represent colonisation, detection, treatment, and location within the ward. MRSA transmission takes place between pairs of individuals in successive time slices. Various interventions designed to reduce MRSA transmission are embedded in the model including: admission and repeat screening tests, shorter test turnaround time, isolation, and decolonisation treatment. These interventions can be systematically evaluated by model experimentation. <s> BIB006 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> D. AGENT BASED MODELS FOR NOSOCOMIAL INFECTIONS <s> Prevention and control of Healthcare Associated Infections (HAIs) has become a high priority for most healthcare organizations. Mathematical models can provide insights into the dynamics of nosocomial infections and help to evaluate the effect of infection control measures. 
The model presented in this paper adopts an individual-based and stochastic approach to investigate MRSA outbreaks in a hospital ward. A computer simulation was implemented to analyze the dynamics of the system associated with the spread of the infection and to carry out studies on space and personnel management. This study suggests that a strict spatial cohorting might be ineffective, if it is not combined with personnel cohorting. <s> BIB007 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> D. AGENT BASED MODELS FOR NOSOCOMIAL INFECTIONS <s> Abstract Computational models and simulations are commonly employed to aid decision making in two areas of health care management: optimization of the use of hospital resources and control of the spread of hospital-acquired infections caused by antibiotic-resistant pathogens. We propose a model that combines the operational and the epidemiologic perspectives to size up the effect of understaffing and overcrowding on nosocomial contagion in a intensive-care unit. Specifically, we develop an agent-based model simulating contact-mediated pathogen transmission which allows establishing quantitative relations between patient flow, nurse staffing conditions and pathogen colonization in patients. The results of the model, once calibrated with data from the literature, should indicate under which conditions the variation in pathogen transmission resulting from management decisions can lead to significant increases in the incidence of health care-associated infections in the intensive care unit. <s> BIB008 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> D. AGENT BASED MODELS FOR NOSOCOMIAL INFECTIONS <s> Serious infections due to antibiotic-resistant bacteria are pervasive, and of particular concern within hospital units due to frequent interaction among health-care workers and patients. Such nosocomial infections are difficult to eliminate because of inconsistent disinfection procedures and frequent interactions among infected persons, and because ill-chosen antibiotic treatment strategies can lead to a growth of resistant bacterial strains. Clinical studies to address these concerns have several issues, but chief among them are the effects on the patients involved. Realistic simulation models offer an attractive alternative. This paper presents a hybrid simulation model of antibiotic resistant infections in a hospital ward, combining agent-based simulation to model the inter-host interactions of patients and health-care workers with a detailed differential equations and probabilistic model of intra-host bacterial and antibiotic dynamics. Initial results to benchmark the model demonstrate realistic behavior and suggest promising extensions to achieve a highly-complex yet accurate mechanism for testing antibiotic strategies. <s> BIB009 </s> A Survey of Agent-Based Modeling of Hospital Environments <s> D. AGENT BASED MODELS FOR NOSOCOMIAL INFECTIONS <s> BACKGROUND ::: Infectious individuals in an emergency department (ED) bring substantial risks of cross infection. Data about the complex social and spatial structure of interpersonal contacts in the ED will aid construction of biologically plausible transmission risk models that can guide cross infection control. ::: ::: ::: METHODS AND FINDINGS ::: We sought to determine the number and duration of contacts among patients and staff in a large, busy ED. This prospective study was conducted between 1 July 2009 and 30 June 2010. 
Two 12-hour shifts per week were randomly selected for study. The study was conducted in the ED of an urban hospital. There were 81 shifts in the planned random sample of 104 (78%) with usable contact data, during which there were 9183 patient encounters. Of these, 6062 (66%) were approached to participate, of which 4732 (78%) agreed. Over the course of the year, 88 staff members participated (84%). A radiofrequency identification (RFID) system was installed and the ED divided into 89 distinct zones structured so copresence of two individuals in any zone implied a very high probability of contact <1 meter apart in space. During study observation periods, patients and staff were given RFID tags to wear. Contact events were recorded. These were further broken down with respect to the nature of the contacts, i.e., patient with patient, patient with staff, and staff with staff. 293,171 contact events were recorded, with a median of 22 contact events and 9 contacts with distinct individuals per participant per shift. Staff-staff interactions were more numerous and longer than patient-patient or patient-staff interactions. ::: ::: ::: CONCLUSIONS ::: We used RFID to quantify contacts between patients and staff in a busy ED. These results are useful for studies of the spread of infections. By understanding contact patterns most important in potential transmission, more effective prevention strategies may be implemented. <s> BIB010
In general, the modeling of HAIs, or nosocomial infections, is perhaps the area best suited to ABMs within healthcare institutions. This is largely a consequence of being able to address all of the model components included in Fig. 1 (an agent-based nosocomial model situated within healthcare models) relative to topography and agents. HAI ABMs may be useful in assessing the effectiveness of different infection control protocols or policies and the costs of interventions, as well as in shedding light on potential confinement failures that would accompany widespread infection dynamics BIB003, BIB002. Several of the models oriented to nosocomial infections are known as 'individual-based models', in which agents are limited by definition to individuals (persons). One such example is a mathematical individual-based model for studying infection spread in a nursing home BIB005. By contrast, the notion of an agent-based model expands the definition of an agent beyond an individual person to include inanimate objects that can act as vectors of transmission for nosocomial infections. This concept is supported by a significant body of evidence that non-person agents play a significant role as infection transmission vectors BIB001, including the CDC's overview of SARS-related information [31], which states that ''the virus also can spread when a person touches a surface or object contaminated with infectious droplets and then touches his or her mouth, nose, or eye(s). In addition, it is possible that the SARS virus might spread more broadly through the air (airborne spread) or by other ways that are not now known'' (pp. 1). Nosocomial agent-based modeling initiatives focus upon explicit modeling of structure and behaviour, extending the agent-based model to include individuals, inanimate objects, and locations in order to investigate an organization's policies and practices in the event of a serious nosocomial infection outbreak. Much of the current effort in nosocomial ABMs sets the framework for potential future efforts in modeling and evaluating organizations' documented infection control plans (policies and practices). For example, best practices [32] are available for healthcare practitioners and policy makers dealing with healthcare-associated infections in patient and resident populations. These may be a useful reference to model, as a means of identifying and evaluating their effectiveness. At this time, best practice documents typically reflect ''consensus positions on what the committee deems prudent practice and are made available as a resource to the public health and healthcare provider'' (p. ii). Clearly this is also an opportunity for ABM models to contribute to a collaborative, multi-stakeholder effort. Despite nosocomial modeling's natural fit with the ABM approach, it is a fairly recent area of exploration for healthcare ABMs BIB006. One of the earlier simulation efforts modeled antibiotic resistance in hospitals, contrasting an individual-based model with a differential-equation-based model, including consideration of where they can be used in conjunction with one another BIB004. Another study investigated the spread of a nosocomial pathogen in a dialysis unit using a Monte Carlo individual-based model BIB003.
The dialysis unit is a very good example of where agent-based models may be particularly useful, as ''the frequency of patient visits and intimate, prolonged physical contact with the inanimate environment during dialysis treatments make these facilities potentially efficient venues for nosocomial pathogen transmission'' (pp. 1176). In a related paper BIB002, the same authors developed a fairly abstracted nosocomial ABM of an intensive care unit, advocating that ''conceptually simple discrete element (agent-based or cellular automata) models [that] can explicitly address 'geographic' considerations and probabilistic transmission dynamics germane to the spatially intricate environments and small population sizes characteristic of ICUs'' (pp. 174). In another nosocomial ABM of an intensive care unit, operational and epidemiological features are considered in an attempt to estimate the effect of understaffing and overcrowding on infection spread BIB008. The ABM simulated contact-mediated pathogen transmission, which should allow one to establish quantitative relations between patient flow, staffing conditions, and pathogen colonization in patients. Another individual-based approach investigated the role of cohorting, with the aim of minimizing the possible interactions between individuals within a ward BIB007. In a relatively recent nosocomial ABM, a combination of differential equation models and probabilistic models is used for each agent in order to simulate changes, over time, in the bacteria sub-populations within the agent's body BIB009. As with many ABM efforts, work is ongoing in terms of validation and verification. In order to construct biologically plausible transmission risk models that can guide cross-infection control, researchers have deployed an RFID tracking system in an ED to extract agent contact data, recognizing the critical role that contact patterns play in cross-infection control BIB010. Such high-fidelity data on individuals, topography, and contact patterns are ideally suited to an ABM as well.
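The contact-mediated transmission logic described above can be sketched compactly. The following toy Python model is illustrative only (all probabilities are hypothetical, and it reimplements no specific cited model); its one notable feature is that, in line with the evidence on fomites, inanimate surfaces participate as agents alongside patients and health-care workers.

import random

# Toy ward model: patients, health-care workers (HCWs), and inanimate
# surfaces are all agents; contamination moves along care contacts and
# fomite touches, subject to hand-hygiene and cleaning interventions.
random.seed(2)
patients = [{"colonized": random.random() < 0.1} for _ in range(20)]
surfaces = [{"contaminated": False} for _ in range(5)]
hcws     = [{"contaminated": False} for _ in range(4)]

P_TOUCH, P_TRANSMIT, P_HANDWASH, P_CLEAN = 0.5, 0.3, 0.7, 0.2

for hour in range(24 * 30):                      # simulate one month
    for hcw in hcws:
        patient = random.choice(patients)        # one care contact per hour
        surface = random.choice(surfaces)
        # bidirectional contamination between HCW hands and patient
        if patient["colonized"] and random.random() < P_TRANSMIT:
            hcw["contaminated"] = True
        elif hcw["contaminated"] and random.random() < P_TRANSMIT:
            patient["colonized"] = True
        # HCW also touches a surface (fomite pathway)
        if random.random() < P_TOUCH:
            if hcw["contaminated"]:
                surface["contaminated"] = True
            elif surface["contaminated"] and random.random() < P_TRANSMIT:
                hcw["contaminated"] = True
        if random.random() < P_HANDWASH:         # hand-hygiene intervention
            hcw["contaminated"] = False
    for surface in surfaces:                     # periodic decontamination
        if random.random() < P_CLEAN:
            surface["contaminated"] = False

print(sum(p["colonized"] for p in patients), "of", len(patients), "colonized")

Varying P_HANDWASH and P_CLEAN in such a sketch is the ABM analogue of the intervention comparisons (hand hygiene versus environmental decontamination) examined in the cited studies.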
IV. ENHANCEMENTS TO HOSPITAL ABMs
ABMs tend to be labour intensive and are often deployed for specific experiments or studies. Although time consuming, they generate vast quantities of data for each run. Typically, many runs are used to extract statistics that demonstrate the impact of the policy or intervention being simulated. This massive data generator also offers the potential to be mined and used in machine learning or pattern classification algorithms. For example, instead of having emergency physicians travel through the ED to see patients in individual treatment rooms, the patients would travel through the ED to visit the (stationary) physician, with this policy generated via a genetic program combined with an ED ABM BIB002. Another enhancement to ABMs, and to simulation in general, arises when data analysis augments the simulation. For example, researchers have analyzed data to identify the best scenarios extracted from a discrete event simulation of an ED BIB004. Although those scenarios were extracted from a DES rather than an ABM, the same type of enhanced data analysis is beginning to emerge in ABMs, borrowing heavily from nonparametric methods in operations research. Although ABMs on their own are a useful paradigm for aiding the understanding of a complex system, this significant existing contribution will be augmented by the integration of data analytics. ABMs may also be useful in hospital facility design, where the role of the HVAC system within various departments takes on additional importance. This would imply a hybrid of simulation techniques, likely encompassing an ABM and a computational fluid dynamics model. In another instance, a hybrid ABM-DES model for emergency medical services in a city is conjectured, although an actual model or simulation has not been reported BIB003. In a more pedestrian optimization, resource planning for the placement of readers in an RFID system may be integrated into the ABM as a means of estimating the errors associated with the tracking system BIB001.
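The replicate-and-summarize workflow mentioned above follows a common pattern, sketched below in Python. The run_abm stub is a hypothetical stand-in for any stochastic simulation; the point is the reduction of many noisy runs into statistics (here a mean and an approximate 95% confidence interval) that downstream ranking or learning can consume.

import random
import statistics

# Stand-in for a full ABM run: returns one noisy outcome (lower is
# better) for a scalar policy parameter; purely illustrative.
def run_abm(policy: float, seed: int) -> float:
    rng = random.Random(seed)
    return (policy - 0.6) ** 2 + rng.gauss(0, 0.05)

def evaluate(policy: float, replicates: int = 50):
    # Many replicated runs reduced to summary statistics.
    outcomes = [run_abm(policy, seed) for seed in range(replicates)]
    mean = statistics.fmean(outcomes)
    half = 1.96 * statistics.stdev(outcomes) / replicates ** 0.5
    return mean, (mean - half, mean + half)      # mean and ~95% CI

best = min((evaluate(p)[0], p) for p in [0.2, 0.4, 0.6, 0.8])
print("best policy parameter:", best[1])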
Mid Sweden University - A Survey of Wireless Sensor Networks for Home Healthcare Monitoring Application

Medical Status Monitoring Applications
In the home-care monitoring application area, most studies focus on medical status monitoring, which collects the vital signs of a subject so that users can understand their own health status and caregivers can manage the medical information easily. CodeBlue, developed by the Harvard Sensor Networks Lab, is a wireless sensor platform for medical care applications that includes both hardware and software. A wireless pulse oximeter and a wireless two-lead ECG sensor, built on the TinyOS operating system, support the monitoring of a variety of physiological parameters, including heart rate (HR), oxygen saturation (SpO2), and ECG data. Users can thus obtain their health status without going to the hospital. The software platform supports PDAs, PCs, and other devices, which makes it easy for users and caregivers to manage these medical data. The CodeBlue system also supports indoor and outdoor location tracking, so it can be used to identify a user's location, which is especially helpful for people with cognitive disabilities. With the development of biomedical sensors, many more physiological variables can be monitored. For example, in BIB002, BIB001, textile-based wearable biosensors using biochemical sensing techniques can monitor pH and sodium (Na+) while the user wears them.
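As a simple illustration of the kind of caregiver-side processing such monitoring platforms enable (this is not CodeBlue's actual software, whose internals are not described here), the Python sketch below checks streamed vital-sign readings against alert thresholds. The limits are placeholders for demonstration, not clinical guidance.

from dataclasses import dataclass

# Placeholder alert limits per signal; real limits must come from
# qualified clinical guidance, not from this sketch.
LIMITS = {"hr": (50, 120), "spo2": (92, 100)}

@dataclass
class Reading:
    patient: str
    signal: str    # "hr" or "spo2"
    value: float

def alerts(stream):
    # Yield an alert for every reading outside its configured limits.
    for r in stream:
        lo, hi = LIMITS[r.signal]
        if not (lo <= r.value <= hi):
            yield f"ALERT {r.patient}: {r.signal}={r.value}"

demo = [Reading("p1", "hr", 74), Reading("p1", "spo2", 88),
        Reading("p2", "hr", 135)]
print(list(alerts(demo)))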
Towards a Peer-to-Peer Energy Market: an Overview

VI. ACTIVITIES AND PILOT INSTALLATIONS OF P2P ENERGY MARKETS
Starting from some recent reviews of Blockchain (BLC) technology usage and perspectives in the energy domain BIB001, BIB002, a set of 42 projects related to P2P energy trading was identified. Five main aspects are considered in the analysis, namely (I) the country where the activity is rooted, (II) the main focus of the project, (III) the geographic scope, (IV) the BLC technology, and (V) the category of consensus algorithm adopted, both currently (V.a) and in the future (V.b). For an initial classification of the countries where the activities originated, figure 8 provides a cumulative view. For space reasons, every country with just a single project is collected into the Others group. A clear interest in some European countries is evident (in particular, the Netherlands foresaw this as one of the solutions towards completely gas-free energy production). The United States of America and the centre of the EU (Germany and France) also have a significant number of activities. Switzerland and the UK have 3 reported projects each, while Australia, Belgium, Japan, and Singapore present 2 entries each. Table II presents a division of this set based on (II), the type of application (main focus) of the project. The Smart Grid category groups projects where the attention is either on providing a P2P network detached from the traditional star-shaped energy distribution or on designing the full architecture and the relevant assets for creating such a system. By contrast, P2P Platform represents activities that focus on the energy trading platform without an explicit connection to energy measurement and the relevant oracles used for providing information to the blockchain. It can be noted that the majority of the projects focus on the smart grid, while about a quarter have P2P platform support as their main objective. The remaining ones aim at different topics and are collected here under the Other class. Regarding the geographical scope of the activities (III), table I reports the division into local, regional, and national/global. Here, an activity is defined as local when it is limited to a small number of selected participants located in close vicinity, such as a neighbourhood or a small city district; these are typically small P2P communities formed especially for energy trading purposes. The regional level corresponds to the typical coverage area of an LPD, such as cities and metropolitan areas, while the national/global level covers multiple regional (or local) scopes. The local scope is the prevalent focus, accounting for half of the reported activities. This is consistent with the results for the application focus, as a smart grid focus is usually correlated with a local target, for ease of introduction and to avoid conflicts with the current legal framework in the energy market. In fact, 12 of the 16 projects with a focus on the smart grid also have a local deployment scope. Another interesting aspect is the prevalence of globally scoped activities over regional ones, likely due to the broader expected impact of the projects, although the regulatory framework must in any case be taken into account. The next relevant aspect (IV) scrutinises which DLT is adopted for the project. As evident from figure 9, there is a strong predominance of the Ethereum technology. A non-exhaustive list of factors can explain this phenomenon. Considering the relatively young age of this business field, the first players entering the domain are generally perceived as the most trustworthy and as paving the path.
In fact, Ethereum gained significant traction in the early stage of DLT adoption, also because it presents very good documentation and a significant number of well-designed and comprehensible examples for the most widespread functionalities. In cascade, this adoption creates a vibrant and active community around the software, which guarantees continuous updates and easier access to ready-to-use programmable components for the underlying protocol. This is also an implicit signal that adopting Ethereum is less risky from the business point of view, as this interest realistically supports the assumption that the technology will still be in place and usable over a 5-year horizon. One decisive aspect that oriented adoption towards Ethereum is the fact that the protocol natively supports the ERC token standard, making it very easy to generate the type of utility token needed for the specific asset that should be covered. The Bitcoin family does not natively offer such functionality. This demonstrates that the native possibility of using smart contracts for energy transactions is an important asset. Other initiatives adopted the new concept of Multichain (an open infrastructure where different DLT solutions can coexist, with the possibility of exchanging currency and tokens amongst them) to allow a smooth potential integration of already existing local initiatives into the distributed P2P energy market. The last aspect taken into consideration is probably the most critical issue to date for the adoption of DLT solutions in the energy market, and has to do with the scalability and energy consumption of running the system: the consensus algorithm adopted. Figure 10 and figure 11 present, respectively, the current status and the expected future consensus approach that the analysed activities declare. It is noteworthy that, in the absence of information regarding these aspects in the documentation or publications of a project, we assumed that the ''native'' agreement approach of the chosen DLT is preserved. This analysis is not run at the level of the specific algorithm, but aggregates the algorithms based on their main underlying functioning mechanism. This is also useful for drawing some general conclusions about the limitations and the offered properties. What can be noted here is the move from a predominance of computationally intensive and energy-voracious approaches towards more scalable algorithms that stress the recognition of nodes' commitment to serving the DLT, in terms of resources specifically and uniquely devoted to it. The current status, in fig. 10, demonstrates the prevalence of PoW-based approaches. In contrast, fig. 11 indicates that other approaches will be privileged in the future, in particular the family of PoS. Another notable aspect is that proprietary or peculiar algorithms, adopted for specific reasons, tend to stay in place throughout the lifespan of a project. As a final note, the predefined consensus approach of Ethereum is moving in the same direction, due to the demand for a significant reduction of its energy consumption. This fact can be clearly read in the aggregated data of figure 12, where the relative frequencies of the current and future types of consensus mechanisms are compared. Looking only at the PoW and PoS categories, this trend is clearly visible, with the former decreasing by 40% and the latter increasing by 45%.
This can partially be explained by the marketing-driven willingness to present projects as pursuing a broader profile of sustainability and scalability, but also by the new Ethereum 2.0, which will move the default consensus algorithm to a PoS solution.
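The practical difference between the two consensus families discussed above can be sketched in a few lines of Python. This is a deliberately naive illustration (real protocols add committees, finality rules, and slashing, none of which is modeled here): proof-of-work pays for block rights with an exponentially growing hash search, while proof-of-stake selects a proposer in a single draw weighted by committed resources.

import hashlib
import random

def proof_of_work(block: bytes, difficulty: int = 16) -> int:
    # Brute-force a nonce whose SHA-256 hash has `difficulty` leading
    # zero bits; expected work (and energy) doubles with each extra bit.
    target = 2 ** (256 - difficulty)
    nonce = 0
    while int.from_bytes(hashlib.sha256(block + nonce.to_bytes(8, "big"))
                         .digest(), "big") >= target:
        nonce += 1
    return nonce

def pos_proposer(stakes: dict, seed: int) -> str:
    # One pseudo-random draw, weighted by stake: no hash race at all.
    rng = random.Random(seed)
    names = list(stakes)
    return rng.choices(names, weights=[stakes[n] for n in names])[0]

print("PoW nonce:", proof_of_work(b"energy-trade-batch"))
print("PoS proposer:", pos_proposer({"A": 5.0, "B": 1.0, "C": 4.0}, seed=7))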
An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges

Congestion Control Operations of TCP
To achieve good performance, it is necessary to control network congestion so that the number of packets within the Internet remains below the level at which network performance drops significantly. Various congestion control measures have been implemented in TCP to limit the rate of data entering the Internet by regulating the size of the congestion window cwnd, the number of unacknowledged segments allowed to be sent. These measures include slow start, congestion avoidance, fast retransmit, and fast recovery. When a new connection is established, TCP sets cwnd to one. In slow start, the value of cwnd is incremented by one each time an ACK is received, until it reaches the slow start threshold, ssthresh. TCP uses segment loss as an indicator of network congestion. To characterize a segment as lost in transit, a source has to wait long enough without receiving an ACK for the segment. Therefore, a retransmission timer is associated with each transmitted segment, and a timer expiry signals a segment loss. The retransmission timeout period (RTO) is determined by the sum of the smoothed exponentially weighted moving average of the RTT and a multiple of its mean deviation. When a timeout occurs, ssthresh is set to half of the amount of outstanding data sent to the network. The slow start process is then performed, starting with cwnd equal to one, until cwnd approaches ssthresh. The congestion avoidance phase is then carried out, in which cwnd is increased by one for each RTT. When the data octet number of an arriving segment is greater than the expected one, the destination finds a gap in the sequence number space (known as a sequence hole) and thus immediately sends a duplicate ACK, i.e., an ACK with the same next expected data octet number in the cumulative acknowledgement field, to the source. If the communication channel is an in-order channel, the reception of a duplicate ACK implies the loss of a segment. When the source receives three duplicate ACKs, fast retransmit is triggered, such that the inferred lost segment is retransmitted before the expiration of the retransmission timer. Fast recovery works as a companion to fast retransmit. A fast retransmission suggests the presence of mild network congestion: ssthresh is set to half of the amount of outstanding data sent to the network. Since the reception of a duplicate ACK indicates the departure of a segment from the network, cwnd is set to the sum of ssthresh and the number of duplicate ACKs received. When an ACK for a new segment arrives, cwnd is reset to ssthresh and congestion avoidance takes place. Packet reordering refers to the network behavior where the relative order of some packets in the same flow (the term ''flow'' is used in a very general manner here: a flow can correspond to a stream of packets originating from one end system and departing at another, or to a stream of packets arriving at and leaving a switch buffer) is altered when these packets are transported through the network. In other words, the receiving order of a flow of packets (or segments) differs from its sending order. Recent studies BIB002, BIB001 show that packet reordering is not a rare event. The presence of persistent and substantial packet reordering violates the in-order or near in-order channel assumption made in the design principles of some traffic control mechanisms in TCP. This can result in a substantial degradation of application throughput and network performance BIB003.
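The window and timer arithmetic described above can be summarized in a few Python functions. This is a simplified sketch of the standard behavior (segment-counted windows and the classical alpha = 1/8, beta = 1/4, k = 4 estimator constants), not a complete TCP implementation.

def on_ack(cwnd: float, ssthresh: float) -> float:
    # Slow start grows cwnd by 1 per ACK (exponential per RTT);
    # congestion avoidance grows it by roughly 1 segment per RTT.
    return cwnd + 1 if cwnd < ssthresh else cwnd + 1 / cwnd

def on_three_dupacks(cwnd: float, dupacks: int = 3) -> tuple:
    # Fast retransmit / fast recovery: halve ssthresh, then inflate
    # cwnd by the duplicate ACKs already seen.
    ssthresh = max(cwnd / 2, 2)
    return ssthresh + dupacks, ssthresh

def on_timeout(cwnd: float) -> tuple:
    # Timeout: halve ssthresh and restart from slow start.
    return 1.0, max(cwnd / 2, 2)

def update_rto(srtt: float, rttvar: float, sample: float,
               alpha: float = 1/8, beta: float = 1/4,
               k: float = 4.0) -> tuple:
    # RTO = smoothed RTT average plus k times the mean deviation.
    rttvar = (1 - beta) * rttvar + beta * abs(srtt - sample)
    srtt = (1 - alpha) * srtt + alpha * sample
    return srtt, rttvar, srtt + k * rttvar

# Example: one 120 ms RTT sample against a 100 ms smoothed estimate.
print(update_rto(srtt=0.100, rttvar=0.010, sample=0.120))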
CAUSES OF PACKET REORDERING
There are five major causes of packet reordering: packet-level multipath routing, route fluttering, inherent parallelism in modern high-speed routers, link-layer retransmissions, and router forwarding lulls.

- Packet-Level Multipath Routing: Multipath routing BIB007, BIB001 is a load-balancing traffic engineering technique that spreads the traffic load across the network in order to alleviate network congestion. It has been shown BIB004, BIB006 that multipath routing balances the load significantly better than single-path routing and provides better congestion and capacity performance over wired/wireless networks. Packet-level multipath routing allows packets of the same traffic flow to be forwarded over multiple routes to a destination so as to achieve load balancing in packet-switching networks. This functionality is supported by overlay networks. However, these packets may arrive at the destination reordered due to the differences in path delays.

- Route Fluttering: Route fluttering is a network phenomenon in which the forwarding path to a certain destination oscillates among a set of available routes to that destination. This results from route instability due to shaky links and heavy loads or bursty traffic when the link cost metrics used in the routing algorithms are related to the delays or congestion experienced over the network links. It also results from topological changes in the wireless environment. For example, mobile ad hoc networks have no fixed infrastructure, and every mobile node can be a source, a destination, or a router. As with packet-level multipath routing, route fluttering causes packets to be forwarded on different paths and to arrive at a destination out of order.

- Inherent Parallelism in Modern High-Speed Routers: Modern routers support packet striping so that packets of the same traffic flow can be forwarded over lower-capacity, but much cheaper, multiple parallel links connecting to the next-hop router for that flow. To switch packets at high speed, such a router is generally work conserving, so that its outgoing ports connecting to a certain next-hop router are idle only when there are no outstanding packets to be forwarded to that router. Since packets may be of different sizes and the links can be of different bandwidths, packets may take dramatically different times to transmit, and hence arrive at the neighboring router in a different order from the one in which they were sent. Moreover, the use of multiple inexpensive application-specific integrated circuits (ASICs) for Internet Protocol (IP) forwarding gives rise to an opportunity to speed up the port forwarding rate. Thus, even when there is only a single outgoing port connecting to the next-hop router, packets processed by different ASICs can be reordered BIB003.

- Link-Layer Retransmissions: Link-layer retransmission mechanisms BIB005 have been proposed to efficiently recover transmission losses due to high channel error rates in wireless networks. Such retransmitted packets are sent only after the losses are detected. These packets may then be interspersed with other packets belonging to the same traffic flow.

- Router Forwarding Lulls: Some routers can pause their forwarding activity for buffered packets while processing a routing update. These buffered packets are then interspersed with new arrivals, thus causing packet reordering BIB002.

To summarize, there is a correlation between the causes and characteristics of packet reordering. Packet-level multipath routing and route fluttering induce packet reordering through the differences in path delays. The inherent parallelism in modern high-speed routers produces packet reordering because of differences in queueing and/or transmission times. Link-layer retransmissions incur packet reordering since retransmitted packets experience an additional round-trip time over a link. Router forwarding lulls induce packet reordering due to the interspersion of buffered packets with new arrivals when a routing update is processed. Although these causes may all lead to persistent and substantial packet reordering, their diversity poses a challenge for the design of a single efficient and effective solution able to handle all types of packet reordering.
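The first cause above is easy to demonstrate. The short Python sketch below alternates the packets of one flow over a fast and a slow path (the delays are arbitrary illustrative values) and shows that the receiver observes them out of order.

import random

# Packets sent 5 ms apart, alternating between a fast path (10 ms) and
# a slow path (18-30 ms); sorting by arrival time gives receive order.
random.seed(0)
arrivals = []
for seq in range(10):
    delay = 10.0 if seq % 2 == 0 else random.uniform(18.0, 30.0)
    arrivals.append((seq * 5.0 + delay, seq))
receive_order = [seq for _, seq in sorted(arrivals)]
print("receive order:", receive_order)   # e.g. [0, 2, 4, 1, 6, 3, ...]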
IMPACT OF PACKET REORDERING ON TCP
TCP relies on the use of a cumulative ACK to announce the receipt of segment(s). The pace at which a source receives ACKs drives how fast it can inject TCP segments into the network toward its associated destination. With persistent and substantial packet reordering, TCP spuriously retransmits segments, keeps its congestion window unnecessarily small, loses its ACK-clocking, and understates the estimated RTT (and, thus, the RTO) BIB002 . These effects are described in detail next.
- Spurious Segment Retransmissions: Packet reordering causes the starting data octet number of some arriving segments to differ from the one expected by the destination. In other words, the destination finds a sequence hole upon segment reception. It then generates duplicate ACKs and sends them to its associated source. When the source receives three such duplicate ACKs consecutively, the segment inferred to be lost (although no loss has actually occurred) is retransmitted. Persistent and substantial packet reordering thus causes some TCP segments to be retransmitted spuriously and unnecessarily, which can lead to classical congestion collapse BIB003 .
- Keeping the Congestion Window Unnecessarily Small: Fast recovery is always triggered together with fast retransmit. A spurious fast retransmission not only imposes additional yet unnecessary workload on the network and the destination, but also halves the congestion window. Thus, with persistent and substantial packet reordering, the congestion window is kept small relative to the available bandwidth of the transmission path.
- Loss of ACK-Clocking: Packet reordering causes not only data segments to arrive at the destination out of order, but also ACKs to arrive at the source out of order. The former phenomenon is called forward-path reordering, while the latter is known as reverse-path reordering BIB002 . An illustration of forward-path reordering and reverse-path reordering is shown in Fig. 2. Suppose segments are sent from the source in the order S1, S2, S3, but Segment S1 arrives after Segment S2 at the destination. This represents forward-path reordering. ACK A1 arrives after ACKs A2 and A3 at the source. This depicts reverse-path reordering. ACK-clocking, or self-clocking, refers to the property that the receiver can generate ACKs no faster than data segments can get through the network. With forward-path reordering, an ACK for several new segments, which follows a number of duplicate ACKs, can in turn allow the source to inject several pending segments into the network at once. Even when no data segment is reordered, disordered ACKs lead the source to transmit several segments together rather than one or two segments per ACK. This causes loss of ACK-clocking and far more bursty traffic, which may lead to transient network congestion and congestion collapse from undelivered packets BIB003 .
- Understating the Estimated RTT and RTO: Whenever a segment is retransmitted, a source cannot determine whether a received ACK is for the first transmission or the retransmission of the segment. Karn's algorithm BIB001 alleviates the problem by discarding all measured RTT samples until an ACK acknowledges a segment that has not been retransmitted. Since a fast retransmission is likely to correspond to a segment that experiences a longer path delay, the use of Karn's algorithm results in a sampling bias against long RTT samples BIB004 . With persistent and substantial packet reordering, these samples would be discarded. The estimated RTT and RTO are therefore understated.
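To make the first effect concrete, the following minimal sketch (ours, not drawn from any surveyed paper) replays a reordered arrival sequence against a sender that counts duplicate cumulative ACKs with the standard dupthresh of three; the delayed Segment S1 alone is enough to trigger a spurious fast retransmission:

```python
# Minimal sketch: forward-path reordering alone triggering a spurious
# fast retransmission. Segment numbers and dupthresh follow the text;
# everything else is illustrative.

DUPTHRESH = 3

def cumulative_acks(arrivals, expected=1):
    """Yield the cumulative ACK generated for each arriving segment."""
    buffered = set()
    for seg in arrivals:
        buffered.add(seg)
        while expected in buffered:
            buffered.remove(expected)
            expected += 1
        yield expected - 1  # highest in-order segment received so far

def sender_reaction(acks, last_ack=0):
    """last_ack=0 models the earlier ACK already received for S0."""
    dupcount = 0
    for ack in acks:
        if ack == last_ack:
            dupcount += 1
            if dupcount == DUPTHRESH:
                return "spurious fast retransmit of segment %d" % (ack + 1)
        else:
            last_ack, dupcount = ack, 0
    return "no retransmission triggered"

# S1 is delayed behind S2, S3 and S4: three duplicate ACKs for S0 reach
# the source even though nothing was lost.
print(sender_reaction(cumulative_acks([2, 3, 4, 1])))
```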
An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Eifel Algorithm <s> We propose an enhancement to TCP's error recovery scheme, which we call the Eifel algorithm. It eliminates the retransmission ambiguity, thereby solving the problems caused by spurious timeouts and spurious fast retransmits. It can be incrementally deployed as it is backwards compatible and does not change TCP's congestion control semantics. In environments where spurious retransmissions occur frequently, the algorithm can improve the end-to-end throughput by several tens of percent. An exact quantification is, however, highly dependent on the path characteristics over time. The Eifel algorithm finally makes TCP truly wireless-capable without the need for proxies between the end points. Another key novelty is that the Eifel algorithm provides for the implementation of a more optimistic retransmission timer because it reduces the penalty of a spurious timeout to a single (in the common case) spurious retransmission. <s> BIB001
Ludwig and Katz proposed the Eifel algorithm BIB001 to eliminate the retransmission ambiguity and solve the performance problems caused by spurious retransmissions. A source uses the TCP timestamp option to insert the current timestamp into the header of each outgoing segment to a destination. When the destination sends ACKs, it echoes the corresponding timestamps in the ACKs. To eliminate the retransmission ambiguity, the source always stores the timestamp of the first retransmission of a segment. When the first ACK for the retransmitted segment arrives, the source compares the timestamp of that ACK with the stored timestamp. If the stored timestamp is greater, the retransmission is considered spurious. Fig. 4 illustrates how the Eifel algorithm works. When the source sends Segment S the first time at Time T1, it inserts the current timestamp T1 into the header of the segment. At Time T2, the source initiates a congestion response by retransmitting Segment S. The original segment differs from the retransmitted one, as the latter contains timestamp T2 instead of T1. When the destination receives the original Segment S first, it sends an ACK with the timestamp of the segment, i.e., T1. When the ACK for the segment arrives, the source finds that the echoed timestamp, T1, is smaller than the stored one, T2. The retransmission is hence identified as spurious. To solve the problems caused by spurious retransmissions, a source also stores the current values of the slow start threshold, ssthresh, and the size of the congestion window, cwnd, when a segment is retransmitted the first time. When a detected spurious retransmission has resulted in a single retransmission of the oldest outstanding segment, the source restores ssthresh and cwnd to the stored values. It has been shown BIB001 that this technique is simple and effective in improving TCP performance with forward-path reordering. However, bursts of TCP segments may be injected into the network when the state is restored. Besides, the scheme does not work when the original and retransmitted segments are reordered.
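The detection test can be summarized in a few lines. The sketch below is our paraphrase of the mechanism just described, with illustrative variable names rather than anything from the Eifel specification; it stores the timestamp and congestion state at the first retransmission and undoes the response when an older timestamp is echoed:

```python
# Paraphrase of the Eifel test: an ACK echoing a timestamp older than
# the stored retransmission timestamp must acknowledge the original
# transmission, so the retransmission was spurious.

class EifelSender:
    def __init__(self, ssthresh=64, cwnd=32):
        self.ssthresh, self.cwnd = ssthresh, cwnd
        self.retrans_ts = {}     # seq -> timestamp of first retransmission
        self.saved_state = {}    # seq -> (ssthresh, cwnd) before response

    def on_first_retransmit(self, seq, now):
        self.retrans_ts[seq] = now
        self.saved_state[seq] = (self.ssthresh, self.cwnd)
        self.ssthresh = self.cwnd = max(self.cwnd // 2, 1)  # response

    def on_ack(self, seq, echoed_ts):
        if seq in self.retrans_ts and echoed_ts < self.retrans_ts[seq]:
            self.ssthresh, self.cwnd = self.saved_state[seq]  # undo
            return "spurious retransmission; ssthresh/cwnd restored"
        return "retransmission considered genuine"

s = EifelSender()
s.on_first_retransmit(seq=1, now=200)   # retransmit Segment S at T2=200
print(s.on_ack(seq=1, echoed_ts=100))   # ACK echoes T1=100 -> spurious
```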
An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> TCP-DOOR <s> This paper considers the potentially negative impacts of an increasing deployment of non-congestion-controlled best-effort traffic on the Internet. These negative impacts range from extreme unfairness against competing TCP traffic to the potential for congestion collapse. To promote the inclusion of end-to-end congestion control in the design of future protocols using best-effort traffic, we argue that router mechanisms are needed to identify and restrict the bandwidth of selected high-bandwidth best-effort flows in times of congestion. The paper discusses several general approaches for identifying those flows suitable for bandwidth regulation. These approaches are to identify a high-bandwidth flow in times of congestion as unresponsive, "not TCP-friendly", or simply using disproportionate bandwidth. A flow that is not "TCP-friendly" is one whose long-term arrival rate exceeds that of any conformant TCP in the same circumstances. An unresponsive flow is one failing to reduce its offered load at a router in response to an increased packet drop rate, and a disproportionate-bandwidth flow is one that uses considerably more bandwidth than other flows in a time of congestion. <s> BIB001 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> TCP-DOOR <s> We propose an enhancement to TCP's error recovery scheme, which we call the Eifel algorithm. It eliminates the retransmission ambiguity, thereby solving the problems caused by spurious timeouts and spurious fast retransmits. It can be incrementally deployed as it is backwards compatible and does not change TCP's congestion control semantics. In environments where spurious retransmissions occur frequently, the algorithm can improve the end-to-end throughput by several tens of percent. An exact quantification is, however, highly dependent on the path characteristics over time. The Eifel algorithm finally makes TCP truly wireless-capable without the need for proxies between the end points. Another key novelty is that the Eifel algorithm provides for the implementation of a more optimistic retransmission timer because it reduces the penalty of a spurious timeout to a single (in the common case) spurious retransmission. <s> BIB002 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> TCP-DOOR <s> In a Mobile Ad Hoc Network (MANET), temporary link failures and route changes happen frequently. With the assumption that all packet losses are due to congestion, TCP performs poorly in such environment. While there has been some research on improving TCP performance over MANET, most of them require feedback from the network or the lower layer. In this research, we explore a new approach to improve TCP performance by detecting and responding to out-of-order packet delivery events, which are the results of frequent route changes. In our simulation study, this approach had achieved on average 50% performance improvement, without requiring feedback from the network or the lower layer. <s> BIB003
Wang and Zhang developed TCP with detection of out-of-order and response (TCP-DOOR) BIB003 , which can be considered as an extension of BIB002 . Out-of-order events are deemed to imply route changes in the network, which happen frequently in mobile ad hoc networks. The TCP packet sequence number and ACK duplication sequence number, or current timestamps, are inserted into each data and ACK segment, respectively, to detect reordered data and ACK packets. When out-of-order events are detected, a source can either temporarily disable congestion control or perform instant recovery during congestion avoidance. By temporarily disabling congestion control, the source keeps its state variables constant for a time period, say t1 seconds, after detecting an out-of-order event. By instant recovery during congestion avoidance, the source recovers immediately to the state before the congestion response, provided that the response was invoked within the last t2 seconds. However, TCP-DOOR does not distinguish between forward-path and reverse-path reordering. Its responses can alleviate some performance problems caused by forward-path reordering. They do not help reduce bursty traffic, and may in fact aggravate network congestion under reverse-path reordering. Besides, TCP-DOOR does not perform well in a congested network environment with substantial persistent packet reordering. It disables congestion control for a time period every time an out-of-order event is detected, which may lead to congestion collapse from undelivered packets BIB001 .
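As an illustration only (not the authors' implementation), the sketch below captures the two responses under assumed constants t1 and t2:

```python
# Illustrative sketch of TCP-DOOR's two responses to a detected
# out-of-order (OOO) event; t1, t2 and all names are assumptions.

T1, T2 = 2.0, 2.0   # illustrative values of t1 and t2 (seconds)

class DoorSender:
    def __init__(self, ssthresh=64, cwnd=32):
        self.ssthresh, self.cwnd = ssthresh, cwnd
        self.freeze_until = 0.0    # congestion control disabled until then
        self.last_response = None  # (time, ssthresh, cwnd) before response

    def on_congestion_response(self, now):
        self.last_response = (now, self.ssthresh, self.cwnd)
        self.ssthresh = self.cwnd = max(self.cwnd // 2, 1)

    def on_out_of_order(self, now):
        # Temporarily disable congestion control: keep state constant.
        self.freeze_until = now + T1
        # Instant recovery if a congestion response happened recently.
        if self.last_response and now - self.last_response[0] <= T2:
            _, self.ssthresh, self.cwnd = self.last_response
            return "instant recovery to pre-response state"
        return "congestion control frozen until t=%.1f" % self.freeze_until

d = DoorSender()
d.on_congestion_response(now=10.0)
print(d.on_out_of_order(now=11.0))   # within t2 -> instant recovery
```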
An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> DSACK TCP <s> Previous research indicates that packet reordering is not a rare event on some Internet paths. Reordering can cause performance problems for TCP's fast retransmission algorithm, which uses the arrival of duplicate acknowledgments to detect segment loss. Duplicate acknowledgments can be caused by the loss of a segment or by the reordering of segments by the network. In this paper we illustrate the impact of reordering on TCP performance. In addition, we show the performance of a conservative approach to "undo" the congestion control state changes made in conjunction with spurious retransmissions. Finally, we propose several alternatives to dynamically make the fast retransmission algorithm more tolerant of the reordering observed in the network and assess these algorithms. <s> BIB001 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> DSACK TCP <s> TCP performs poorly on paths that reorder packets significantly, where it misinterprets out-of-order delivery as packet loss. The sender responds with a fast retransmit though no actual loss has occurred. These repeated false fast retransmits keep the sender's window small, and severely degrade the throughput it attains. Requiring nearly in-order delivery needlessly restricts and complicates Internet routing systems and routers. Such beneficial systems as multi-path routing and parallel packet switches are difficult to deploy in a way that preserves ordering. Toward a more reordering-tolerant Internet architecture, we present enhancements to TCP that improve the protocol's robustness to reordered and delayed packets. We extend the sender to detect and recover from false fast retransmits using DSACK information, and to avoid false fast retransmits proactively, by adaptively varying dupthresh. Our algorithm is the first that adaptively balances increasing dupthresh, to avoid false fast retransmits, and limiting the growth of dupthresh, to avoid unnecessary timeouts. Finally, we demonstrate that TCP's RTO estimator tolerates delayed packets poorly, and present enhancements to it that ensure it is sufficiently conservative, without using timestamps or additional TCP header bits. Our simulations show that these enhancements significantly improve TCP's performance over paths that reorder or delay packets. <s> BIB002 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> DSACK TCP <s> In this paper, we propose a simple algorithm to adaptively adjust the value of dupthresh, the duplicate acknowledgement threshold that triggers the transmission control protocol (TCP) fast retransmission algorithm, to improve the TCP performance in a network environment with persistent packet reordering. Our algorithm uses an exponentially weighted moving average (EWMA) and the mean deviation of the lengths of the reordering events reported by a TCP receiver with the duplicate selective acknowledgement (DSACK) extension to estimate the value of dupthresh. We also apply an adaptive upper bound on dupthresh to avoid the retransmission timeout events. In addition, our algorithm includes a mechanism to exponentially reduce dupthresh when the retransmission timer expires. With these mechanisms, our algorithm is capable of converging to and staying at a near-optimal interval of dupthresh.
The simulation results show that our algorithm improves the protocol performance significantly with minimal overheads, achieving a greater throughput and fewer false fast retransmissions. <s> BIB003
Floyd et al. discussed the use of duplicate selective acknowledgement (DSACK) to detect segment reordering and retract the associated spurious congestion response. DSACK is an extension of the selective acknowledgement (SACK) option [21] for TCP. It aims to use the SACK option for duplicate segments. The first block of the SACK option field is used to report the sequence numbers of the received duplicate segment that has triggered the ACK. When congestion is detected, cwnd is saved before reduction. When a source finds that it has made a spurious congestion response based on the arrival of a DSACK, it performs slow start to increase the current cwnd to the stored cwnd before resuming congestion avoidance. By performing slow start during state restoration, it allows TCP to reacquire ACK-clocking and avoid injecting traffic bursts into the network. Fig. 5 shows how DSACK is used to detect packet reordering. Suppose Segment S1 is reordered such that it arrives after Segment S4 at the destination. The last acknowledged segment is Segment S0. In this case, the destination sends out three duplicate ACKs A1, A2, and A3 (with the same cumulative ACK for Segment S0, although the SACK option fields differ) to the source so that Segment S1 is retransmitted (assuming dupthresh is three). When the destination receives the retransmitted Segment S1, it sends a duplicate ACK A5 for Segment S4, but the first block of the SACK option field acknowledges the arrival of a duplicate Segment S1. The source then knows that Segment S1 has been retransmitted spuriously due to packet reordering. This method can be easily coupled with a scheme that uses the DSACK information to adjust dupthresh so as to proactively prevent triggering spurious congestion responses in the future. The Blanton-Allman algorithms BIB001 , RR-TCP BIB002 , and the Leung-Ma algorithm BIB003 have adopted this technique for detecting forward-path reordering and performing state reconciliation.
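A hedged sketch of this undo logic follows; the names are ours, and the slow-start restoration is condensed to setting ssthresh to the saved cwnd so that standard slow start grows cwnd back without a burst:

```python
# Sketch of DSACK-based undo of a spurious fast retransmit. A DSACK
# reporting a duplicate arrival of a segment we retransmitted means the
# fast retransmit was spurious; slow start then rebuilds cwnd toward
# the value saved before the congestion response.

class DsackSender:
    def __init__(self, cwnd=32):
        self.cwnd, self.ssthresh = cwnd, 64
        self.saved_cwnd = None
        self.retransmitted = set()

    def fast_retransmit(self, seq):
        self.saved_cwnd = self.cwnd            # save before reduction
        self.ssthresh = self.cwnd = max(self.cwnd // 2, 1)
        self.retransmitted.add(seq)

    def on_dsack(self, dup_seq):
        if dup_seq in self.retransmitted and self.saved_cwnd is not None:
            # Slow start (cwnd grows per ACK) up to the saved value, so
            # ACK-clocking is reacquired without injecting a burst.
            self.ssthresh = self.saved_cwnd
            return "spurious: slow start from %d toward %d" % (
                self.cwnd, self.saved_cwnd)
        return "DSACK does not match a retransmission: genuine loss"

s = DsackSender()
s.fast_retransmit(seq=1)      # triggered by three duplicate ACKs
print(s.on_dsack(dup_seq=1))  # duplicate S1 reported -> undo response
```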
An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Lee-Park-Choi Algorithm (Sender-Side Solution) <s> This paper investigates schemes to improve TCP performance in multipath forwarding networks. In multipath routing, packets to the same destination are sent to multiple next-hops in either packet-level or flow-level forwarding mode. Effective bandwidth is increased since we can utilize unused capacity of multiple paths to the destination. In packet-level multipath forwarding networks, TCP performance may not be enhanced due to frequent out-of-order segment arrivals at the receiver because of different delays among paths. To overcome this problem, we propose simple TCP modifications. At the sender, the fast retransmission threshold is adjusted taking the number of paths into consideration. At the receiver, the delayed acknowledgment scheme is modified such that an acknowledgment for an out-of-order segment arrival is delayed in the same way for the in-order one. The number of unnecessary retransmissions and congestion window reductions is diminished, which is verified by extensive simulations. In flow-level multipath forwarding networks, hashing is used at routers to select outgoing link of a packet. Here, we show by simulations that TCP performance is increased in proportion to the number of paths regardless of delay differences. <s> BIB001 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Lee-Park-Choi Algorithm (Sender-Side Solution) <s> In this paper, we propose a framework to study how to route packets efficiently in multipath communication networks. Two traffic congestion control techniques, namely, flow assignment and packet scheduling, have been investigated. The flow assignment mechanism defines an optimal splitting of data traffic on multiple disjoint paths. The resequencing delay and the usage of the resequencing buffer can be reduced significantly by properly scheduling the sending order of all packets, say, according to their expected arrival times at the destination. To illustrate our model, and without loss of generality, Gaussian distributed end-to-end path delays are used. Our analytical results show that the techniques are very effective in reducing the average end-to-end path delay, the average packet resequencing delay, and the average resequencing buffer occupancy for various path configurations. These promising results can form a basis for designing future adaptive multipath protocols. <s> BIB002
Lee et al. BIB001 proposed a sender-side solution to improve TCP performance under forward-path reordering over multiple paths. dupthresh is set to increase logarithmically with the number of paths used. Thus, a source has to receive a larger number of duplicate ACKs before a congestion response is triggered when more paths are used concurrently to carry a single TCP flow. However, when packet-level multipath routing is used for data transmission, the level of packet reordering may depend on the differences in path delays and on how the packets belonging to a single flow are distributed to these paths BIB002 . Hence, there exists no direct correlation between dupthresh and the number of participating paths.
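For concreteness, one plausible instantiation of such a rule is shown below; the base value and choice of logarithm are illustrative assumptions, since the paper's exact formula is not reproduced here:

```python
# Illustrative dupthresh rule that grows logarithmically with the
# number of concurrent paths n (constants are assumptions, not taken
# from the Lee-Park-Choi paper).
import math

def dupthresh_for_paths(n, base=3):
    return base if n <= 1 else base + math.ceil(math.log2(n))

for n in (1, 2, 4, 8):
    print("%d path(s) -> dupthresh %d" % (n, dupthresh_for_paths(n)))
```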
An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Blanton-Allman Algorithms <s> Previous research indicates that packet reordering is not a rare event on some Internet paths. Reordering can cause performance problems for TCP's fast retransmission algorithm, which uses the arrival of duplicate acknowledgments to detect segment loss. Duplicate acknowledgments can be caused by the loss of a segment or by the reordering of segments by the network. In this paper we illustrate the impact of reordering on TCP performance. In addition, we show the performance of a conservative approach to "undo" the congestion control state changes made in conjunction with spurious retransmissions. Finally, we propose several alternatives to dynamically make the fast retransmission algorithm more tolerant of the reordering observed in the network and assess these algorithms. <s> BIB001
Blanton and Allman BIB001 proposed three alternatives to dynamically adjust dupthresh. The first alternative, denoted as Blanton-Allman:INC, is to increase dupthresh by some constant every time a spurious fast retransmission is detected. The second alternative, denoted as Blanton-Allman:AVG, is to set dupthresh to the average of the current dupthresh and the number of duplicate ACKs that would have been required to disambiguate reordering from loss when a spurious fast retransmission is detected. The third alternative, denoted as Blanton-Allman:EWMA, is to assign dupthresh to an exponentially weighted moving average (EWMA) of the lengths of the perceived reordering events. For all these algorithms, dupthresh is reset to three upon the expiration of the retransmission timer in order to reduce future costly retransmission timer expirations. The authors also extended the limited transmit algorithm, which allows a source to send a new segment upon the receipt of the first two duplicate ACKs, so that a new segment can be sent on every two duplicate ACKs received afterward. This helps to maintain ACK-clocking and avoids injecting traffic bursts when an ACK for a new segment arrives. Furthermore, they employed the DSACK-based approach described above for the detection of forward-path reordering and state reconciliation. Their simulation results BIB001 showed that, when compared with the default dupthresh of three, the proposed techniques improved connection throughput and reduced the number of unnecessary retransmissions. However, their algorithms have three major shortcomings. First, the adjustment of dupthresh in some of the proposed algorithms is not adaptive enough to the dynamic behavior of the reordering events. For example, it takes 17 detected spurious fast retransmissions for dupthresh to grow from 3 to 20 when the first algorithm is used with the increment set to one. Second, except for the third algorithm, there is no adaptive mechanism to reduce dupthresh dynamically. These algorithms thus fail to adapt dupthresh to a value that strikes a balance between the cost of a spurious fast retransmission and that of a retransmission timer expiration. They are also unable to search for an appropriate but reduced value of dupthresh when the extent of reordering decreases. Third, resetting dupthresh to three upon the expiration of the retransmission timer destroys all historical information about the level of forward-path reordering in the network. Another period of time is then needed for dupthresh to grow back to the desired value.
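The three update rules can be stated compactly. In this sketch (ours; the step size and EWMA gain are illustrative), ndup is the number of duplicate ACKs that would have disambiguated reordering from loss for the detected spurious fast retransmission:

```python
# The three dupthresh update rules, paraphrased in code; INC_STEP and
# EWMA_GAIN are illustrative constants, not values from the paper.

INC_STEP, EWMA_GAIN = 1, 0.125

def inc_rule(dupthresh):            # Blanton-Allman:INC
    return dupthresh + INC_STEP

def avg_rule(dupthresh, ndup):      # Blanton-Allman:AVG
    return (dupthresh + ndup) / 2.0

def ewma_rule(ewma, event_length):  # Blanton-Allman:EWMA
    return (1 - EWMA_GAIN) * ewma + EWMA_GAIN * event_length

def on_retransmission_timeout():    # common to all three variants
    return 3                        # reset dupthresh to the default

print(inc_rule(3), avg_rule(3, 9), round(ewma_rule(3.0, 9), 2))
```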
An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> RR-TCP <s> As a reliable, end-to-end transport protocol, the ARPA Transmission Control Protocol (TCP) uses positive acknowledgements and retransmission to guarantee delivery. TCP implementations are expected to measure and adapt to changing network propagation delays so that its retransmission behavior balances user throughput and network efficiency. However, TCP suffers from a problem we call retransmission ambiguity: when an acknowledgment arrives for a segment that has been retransmitted, there is no indication which transmission is being acknowledged. Many existing TCP implementations do not handle this problem correctly. This paper reviews the various approaches to retransmission and presents a novel and effective approach to the retransmission ambiguity problem. <s> BIB001 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> RR-TCP <s> This paper uses simulations to explore the benefits of adding selective acknowledgments (SACK) and selective repeat to TCP. We compare Tahoe and Reno TCP, the two most common reference implementations for TCP, with two modified versions of Reno TCP. The first version is New-Reno TCP, a modified version of TCP without SACK that avoids some of Reno TCP's performance problems when multiple packets are dropped from a window of data. The second version is SACK TCP, a conservative extension of Reno TCP modified to use the SACK option being proposed in the Internet Engineering Task Force (IETF). We describe the congestion control algorithms in our simulated implementation of SACK TCP and show that while selective acknowledgments are not required to solve Reno TCP's performance problems when multiple packets are dropped, the absence of selective acknowledgments does impose limits to TCP's ultimate performance. In particular, we show that without selective acknowledgments, TCP implementations are constrained to either retransmit at most one dropped packet per round-trip time, or to retransmit packets that might have already been successfully delivered. <s> BIB002 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> RR-TCP <s> Previous research indicates that packet reordering is not a rare event on some Internet paths. Reordering can cause performance problems for TCP's fast retransmission algorithm, which uses the arrival of duplicate acknowledgments to detect segment loss. Duplicate acknowledgments can be caused by the loss of a segment or by the reordering of segments by the network. In this paper we illustrate the impact of reordering on TCP performance. In addition, we show the performance of a conservative approach to "undo" the congestion control state changes made in conjunction with spurious retransmissions. Finally, we propose several alternatives to dynamically make the fast retransmission algorithm more tolerant of the reordering observed in the network and assess these algorithms. <s> BIB003 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> RR-TCP <s> TCP performs poorly on paths that reorder packets significantly, where it misinterprets out-of-order delivery as packet loss. The sender responds with a fast retransmit though no actual loss has occurred. 
These repeated false fast retransmits keep the sender's window small, and severely degrade the throughput it attains. Requiring nearly in-order delivery needlessly restricts and complicates Internet routing systems and routers. Such beneficial systems as multi-path routing and parallel packet switches are difficult to deploy in a way that preserves ordering. Toward a more reordering-tolerant Internet architecture, we present enhancements to TCP that improve the protocol's robustness to reordered and delayed packets. We extend the sender to detect and recover from false fast retransmits using DSACK information, and to avoid false fast retransmits proactively, by adaptively varying dupthresh. Our algorithm is the first that adaptively balances increasing dupthresh, to avoid false fast retransmits, and limiting the growth of dupthresh, to avoid unnecessary timeouts. Finally, we demonstrate that TCP's RTO estimator tolerates delayed packets poorly, and present enhancements to it that ensure it is sufficiently conservative, without using timestamps or additional TCP header bits. Our simulations show that these enhancements significantly improve TCP's performance over paths that reorder or delay packets. <s> BIB004
Zhang et al. devised the reordering-robust TCP (RR-TCP) BIB004 as an extension of the Blanton-Allman algorithms BIB003 , but they differ in three ways. First, RR-TCP uses a different mechanism to adjust dupthresh dynamically. The authors formulated a combined cost function for retransmission timeouts, spurious fast retransmissions, and limited transmit to adapt the false fast retransmit avoidance ratio (FA ratio). The FA ratio, which represents the portion of reordering events to be avoided in order to minimize the cost, can then be used to find the corresponding dupthresh. Thus, this provides a mechanism to raise or reduce dupthresh dynamically, by changing the FA ratio based on the current network conditions. Second, the authors considered another extended version of the limited transmit algorithm. This extension permits a source to send up to one additional ACK-clocked congestion window's worth of data. Third, the authors suggested an idea to correct the sampling bias against long RTT samples in the RTT and RTO estimations. Instead of skipping the samples for retransmitted segments as in Karn's algorithm BIB001 , an RTT sample is taken for each retransmitted segment as the average of the RTTs for both the first and the second transmissions of that segment. The simulation results in BIB004 showed that RR-TCP could significantly improve TCP performance over reordering networks. When 1-2 percent of segments were randomly selected to experience a longer delay (according to a normal distribution), RR-TCP could improve the connection throughput by more than 50 percent and 150 percent when compared with the Blanton-Allman algorithms BIB003 (including the time-delayed fast retransmit algorithm) and SACK TCP BIB002 , respectively. However, RR-TCP needs to maintain a reordering histogram to store the reordering information. It is also required to scan and update the histogram for every reordered segment.
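The third idea is simple to express; the sketch below (our paraphrase, with illustrative names) averages the two candidate RTT measurements for a once-retransmitted segment instead of discarding the sample:

```python
# Sketch of the RTT-sampling correction attributed to RR-TCP above:
# rather than discarding samples for retransmitted segments, take the
# average of the RTTs measured against both transmissions.

def rtt_sample(send_times, ack_time):
    """send_times: transmission times of the segment, original first."""
    if len(send_times) == 1:
        return ack_time - send_times[0]            # unambiguous sample
    first, second = send_times[0], send_times[-1]
    return ((ack_time - first) + (ack_time - second)) / 2.0

print(rtt_sample([0.00], 0.21))        # never retransmitted -> 0.21
print(rtt_sample([0.00, 0.30], 0.52))  # retransmitted once   -> 0.37
```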
An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Leung-Ma Algorithm <s> Previous research indicates that packet reordering is not a rare event on some Internet paths. Reordering can cause performance problems for TCP's fast retransmission algorithm, which uses the arrival of duplicate acknowledgments to detect segment loss. Duplicate acknowledgments can be caused by the loss of a segment or by the reordering of segments by the network. In this paper we illustrate the impact of reordering on TCP performance. In addition, we show the performance of a conservative approach to "undo" the congestion control state changes made in conjunction with spurious retransmissions. Finally, we propose several alternatives to dynamically make the fast retransmission algorithm more tolerant of the reordering observed in the network and assess these algorithms. <s> BIB001 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Leung-Ma Algorithm <s> TCP performs poorly on paths that reorder packets significantly, where it misinterprets out-of-order delivery as packet loss. The sender responds with a fast retransmit though no actual loss has occurred. These repeated false fast retransmits keep the sender's window small, and severely degrade the throughput it attains. Requiring nearly in-order delivery needlessly restricts and complicates Internet routing systems and routers. Such beneficial systems as multi-path routing and parallel packet switches are difficult to deploy in a way that preserves ordering. Toward a more reordering-tolerant Internet architecture, we present enhancements to TCP that improve the protocol's robustness to reordered and delayed packets. We extend the sender to detect and recover from false fast retransmits using DSACK information, and to avoid false fast retransmits proactively, by adaptively varying dupthresh. Our algorithm is the first that adaptively balances increasing dupthresh, to avoid false fast retransmits, and limiting the growth of dupthresh, to avoid unnecessary timeouts. Finally, we demonstrate that TCP's RTO estimator tolerates delayed packets poorly, and present enhancements to it that ensure it is sufficiently conservative, without using timestamps or additional TCP header bits. Our simulations show that these enhancements significantly improve TCP's performance over paths that reorder or delay packets. <s> BIB002 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Leung-Ma Algorithm <s> In this paper, we propose a simple algorithm to adaptively adjust the value of dupthresh, the duplicate acknowledgement threshold that triggers the transmission control protocol (TCP) fast retransmission algorithm, to improve the TCP performance in a network environment with persistent packet reordering. Our algorithm uses an exponentially weighted moving average (EWMA) and the mean deviation of the lengths of the reordering events reported by a TCP receiver with the duplicate selective acknowledgement (DSACK) extension to estimate the value of dupthresh. We also apply an adaptive upper bound on dupthresh to avoid the retransmission timeout events. In addition, our algorithm includes a mechanism to exponentially reduce dupthresh when the retransmission timer expires. With these mechanisms, our algorithm is capable of converging to and staying at a near-optimal interval of dupthresh.
The simulation results show that our algorithm improves the protocol performance significantly with minimal overheads, achieving a greater throughput and fewer false fast retransmissions. <s> BIB003
Leung and Ma BIB003 proposed improving TCP's robustness to persistent packet reordering by extending the Blanton-Allman algorithms BIB001 . First, Leung and Ma suggested using an EWMA and the mean deviation of the lengths of the reordering events. By including the mean length deviation, dupthresh is selected to avoid triggering a certain portion of spurious fast retransmissions while preventing costly retransmission timer expirations. This shares the same design philosophy as RR-TCP BIB002 but incurs fewer computational and storage overheads. Second, an upper bound on dupthresh was introduced to avoid retransmission timeouts. To avoid the timer expiration for a lost segment, an ACK for the retransmitted segment must be received by the source before the timer fires. The maximum number of duplicate ACKs that can be received before triggering a fast retransmission can then be estimated to satisfy this criterion. Third, Leung and Ma also suggested a mechanism to exponentially reduce dupthresh upon retransmission timer expiration, since an expiration of the retransmission timer may imply that dupthresh has been set too large. The simulation results in BIB003 demonstrated that the Leung-Ma algorithm could improve the connection throughput by at least 35 percent and reduce the unnecessary fast retransmissions by 6 percent when compared with the Blanton-Allman algorithms (including the time-delayed fast retransmit algorithm). When compared with RR-TCP, the Leung-Ma algorithm achieved similar performance in terms of connection throughput and unnecessary fast retransmissions, but it requires far less computation and storage space.
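A condensed sketch of this estimator follows; the gain G and deviation multiplier K are illustrative assumptions (the paper's exact constants are not reproduced here), and upper_bound stands for the RTO-derived cap described above:

```python
# Hedged sketch of the Leung-Ma dupthresh estimator: an EWMA of
# reordering-event lengths plus a multiple of their mean deviation,
# capped by an RTO-derived upper bound; G and K are illustrative.

G, K = 0.125, 4.0

class LeungMaEstimator:
    def __init__(self):
        self.avg, self.dev = 3.0, 0.0   # EWMA and mean deviation

    def on_reordering_event(self, length, upper_bound):
        err = length - self.avg
        self.avg += G * err
        self.dev += G * (abs(err) - self.dev)
        return min(int(round(self.avg + K * self.dev)), upper_bound)

    @staticmethod
    def on_timeout(dupthresh):
        # An RTO suggests dupthresh was too large: reduce exponentially.
        return max(dupthresh // 2, 3)

est = LeungMaEstimator()
for ev in (6, 8, 7):
    print(est.on_reordering_event(ev, upper_bound=30))
```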
An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Time-Delayed Fast Retransmit Algorithm <s> We discuss findings from a large-scale study of Internet packet dynamics conducted by tracing 20,000 TCP bulk transfers between 35 Internet sites. Because we traced each 100 Kbyte transfer at both the sender and the receiver, the measurements allow us to distinguish between the end-to-end behaviors due to the different directions of the Internet paths, which often exhibit asymmetries. We characterize the prevalence of unusual network events such as out-of-order delivery and packet corruption; discuss a robust receiver-based algorithm for estimating "bottleneck bandwidth" that addresses deficiencies discovered in techniques based on "packet pair"; investigate patterns of packet loss, finding that loss events are not well-modeled as independent and, furthermore, that the distribution of the duration of loss events exhibits infinite variance; and analyze variations in packet transit delays as indicators of congestion periods, finding that congestion periods also span a wide range of time scales. <s> BIB001 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Time-Delayed Fast Retransmit Algorithm <s> Previous research indicates that packet reordering is not a rare event on some Internet paths. Reordering can cause performance problems for TCP's fast retransmission algorithm, which uses the arrival of duplicate acknowledgments to detect segment loss. Duplicate acknowledgments can be caused by the loss of a segment or by the reordering of segments by the network. In this paper we illustrate the impact of reordering on TCP performance. In addition, we show the performance of a conservative approach to "undo" the congestion control state changes made in conjunction with spurious retransmissions. Finally, we propose several alternatives to dynamically make the fast retransmission algorithm more tolerant of the reordering observed in the network and assess these algorithms. <s> BIB002
Blanton and Allman developed the time-delayed fast retransmit algorithm BIB002 , denoted as Blanton-Allman:DEL, to postpone the congestion response in the presence of forward-path reordering. Upon receiving three duplicate ACKs, a source waits for an additional time period before triggering a congestion response. It can be viewed as an extension of BIB001 . However, they differ in that the Paxson algorithm is receiver-based, whereas the time-delayed fast retransmit algorithm is sender-based. When an ACK for the segment inferred to be lost arrives, the pending congestion response is cleared. The time period is increased by some constant each time a spurious retransmission is detected. Thus, this method is similar to the algorithms proposed in BIB002 to adjust dupthresh dynamically, and shares their associated merits and limitations.
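A minimal sender-side sketch of this behavior, with illustrative delay constants, is given below:

```python
# Minimal sender-side sketch of the time-delayed fast retransmit: on
# the third duplicate ACK, arm a timer instead of retransmitting, and
# cancel it if an ACK covering the inferred-lost segment arrives first.
# The delay and its increment are illustrative constants.

class DelayedFastRetransmit:
    def __init__(self, delay=0.05, step=0.02):
        self.delay, self.step = delay, step
        self.armed_at = None

    def on_third_dupack(self, now):
        self.armed_at = now            # postpone the congestion response

    def on_new_ack(self):
        self.armed_at = None           # reordering resolved itself

    def should_retransmit(self, now):
        return self.armed_at is not None and now - self.armed_at >= self.delay

    def on_spurious_detected(self):
        self.delay += self.step        # wait longer next time

fr = DelayedFastRetransmit()
fr.on_third_dupack(now=1.00)
fr.on_new_ack()                        # delayed segment's ACK arrived
print(fr.should_retransmit(now=1.06))  # False: response was cleared
```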
An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> TCP-DCR <s> This paper uses simulations to explore the benefits of adding selective acknowledgments (SACK) and selective repeat to TCP. We compare Tahoe and Reno TCP, the two most common reference implementations for TCP, with two modified versions of Reno TCP. The first version is New-Reno TCP, a modified version of TCP without SACK that avoids some of Reno TCP's performance problems when multiple packets are dropped from a window of data. The second version is SACK TCP, a conservative extension of Reno TCP modified to use the SACK option being proposed in the Internet Engineering Task Force (IETF). We describe the congestion control algorithms in our simulated implementation of SACK TCP and show that while selective acknowledgments are not required to solve Reno TCP's performance problems when multiple packets are dropped, the absence of selective acknowledgments does impose limits to TCP's ultimate performance. In particular, we show that without selective acknowledgments, TCP implementations are constrained to either retransmit at most one dropped packet per round-trip time, or to retransmit packets that might have already been successfully delivered. <s> BIB001 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> TCP-DCR <s> We discuss findings from a large-scale study of Internet packet dynamics conducted by tracing 20,000 TCP bulk transfers between 35 Internet sites. Because we traced each 100 Kbyte transfer at both the sender and the receiver, the measurements allow us to distinguish between the end-to-end behaviors due to the different directions of the Internet paths, which often exhibit asymmetries. We characterize the prevalence of unusual network events such as out-of-order delivery and packet corruption; discuss a robust receiver-based algorithm for estimating "bottleneck bandwidth" that addresses deficiencies discovered in techniques based on "packet pair"; investigate patterns of packet loss, finding that loss events are not well-modeled as independent and, furthermore, that the distribution of the duration of loss events exhibits infinite variance; and analyze variations in packet transit delays as indicators of congestion periods, finding that congestion periods also span a wide range of time scales. <s> BIB002 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> TCP-DCR <s> In this paper, we propose and evaluate TCP-DCR. TCP-DCR makes simple modifications to the TCP congestion control algorithm to make it more robust to non-congestion events. The key idea here is to delay the congestion response of TCP for a short interval of time τ, thereby creating room for local recovery mechanisms to handle any non-congestion events that may have occurred. If at the end of the delay τ, the event is not handled, then it is treated as a congestion loss. We evaluate TCP-DCR through analysis and simulations. The evaluation is done for three scenarios — a wireless network with channel errors, a wired network with packet reordering and a network with zero non-congestion events. The simulation results show that significant performance improvements can be achieved by using TCP-DCR in the presence of non-congestion events with zero or marginal impact in the absence of non-congestion events.
TCP-DCR remains fair to the native implementations of TCP that respond to congestion immediately after receiving three dupacks. TCP-DCR is a simple, effective scheme providing a unified solution to several problems with minimal implementation overhead. <s> BIB003
Bhandarkar and Reddy devised the delayed congestion response TCP (TCP-DCR) BIB003 to improve TCP's robustness to non-congestion events. It advances the time-delayed fast retransmit algorithm BIB002 by delaying a congestion response for a time interval after the first duplicate ACK is received. The authors suggested setting this interval to one RTT so as to have ample time to deal with forward-path reordering due to link-layer retransmissions for loss recovery. To maintain ACK-clocking, TCP-DCR sends one new data segment upon the receipt of each duplicate ACK. The simulation results in BIB003 demonstrated that TCP-DCR performed significantly better than SACK TCP BIB001 . TCP-DCR achieved 10 times the connection throughput of SACK TCP when more than 5 percent of packets were delayed according to a normal distribution with negligible congestion loss. However, in their experiments, the chosen bottleneck link delay was at least equal to the largest possible reordering delay. This implies that a reordering event is unlikely to last longer than the interval for delaying the congestion response. The suggested interval may not be a proper choice for multipath routing, since packets are reordered mainly based on the differences in path delay, while the estimated RTT is a weighted average of RTT based on the traffic distribution to the participating paths. Further study is needed to find a proper choice of the delayed interval for congestion response in the presence of packet reordering.
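The core mechanism can be sketched in a few lines (ours, with srtt standing for the sender's smoothed RTT estimate; this is not the authors' code):

```python
# Sketch of TCP-DCR's delayed congestion response: defer the response
# for one smoothed RTT after the first duplicate ACK, sending a new
# segment per duplicate ACK so that ACK-clocking is preserved.

class DcrSender:
    def __init__(self, srtt):
        self.srtt = srtt             # delay interval, suggested as one RTT
        self.first_dup_at = None

    def on_dupack(self, now, send_new_segment):
        if self.first_dup_at is None:
            self.first_dup_at = now
        send_new_segment()           # keep the ACK clock running
        if now - self.first_dup_at >= self.srtt:
            self.first_dup_at = None
            return "treat as congestion loss: retransmit and respond"
        return "waiting out possible reordering"

    def on_cumulative_ack(self):
        self.first_dup_at = None     # the event recovered by itself

s = DcrSender(srtt=0.2)
print(s.on_dupack(0.00, lambda: None))   # waiting out possible reordering
print(s.on_dupack(0.25, lambda: None))   # interval elapsed -> respond
```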
An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Experimental Setup <s> We discuss findings from a large-scale study of Internet packet dynamics conducted by tracing 20,000 TCP bulk transfers between 35 Internet sites. Because we traced each 100 Kbyte transfer at both the sender and the receiver, the measurements allow us to distinguish between the end-to-end behaviors due to the different directions of the Internet paths, which often exhibit asymmetries. We characterize the prevalence of unusual network events such as out-of-order delivery and packet corruption; discuss a robust receiver-based algorithm for estimating "bottleneck bandwidth" that addresses deficiencies discovered in techniques based on "packet pair"; investigate patterns of packet loss, finding that loss events are not well-modeled as independent and, furthermore, that the distribution of the duration of loss events exhibits infinite variance; and analyze variations in packet transit delays as indicators of congestion periods, finding that congestion periods also span a wide range of time scales. <s> BIB001 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Experimental Setup <s> There is an increasing number of Internet applications that attempt to optimize their network communication by considering the network distance across which data is transferred. Such applications range from replication management to mobile agent applications. One major problem of these applications is to efficiently acquire distance information for large computer networks. This paper presents an approach to creating a global view on the Internet, a so-called network distance map, which realizes a hierarchical decomposition of the network into regions and which allows us to estimate the network distance between any two hosts. This view is not only a single snapshot but is dynamically adapted to the continuously changing network conditions. The main idea is to use a certain set of hosts for performing distance measurements and to use the so-gained information for estimating the distance between arbitrary hosts. A hierarchical clustering provides the notion of regions and allows us to coordinate the measurements in such a way that the resulting network load is minimized. An experimental evaluation on the basis of 119 globally distributed measurement servers shows that already a small number of measurement servers allows us to construct fairly accurate distance maps at low cost. <s> BIB002 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Experimental Setup <s> This paper investigates schemes to improve TCP performance in multipath forwarding networks. In multipath routing, packets to the same destination are sent to multiple next-hops in either packet-level or flow-level forwarding mode. Effective bandwidth is increased since we can utilize unused capacity of multiple paths to the destination. In packet-level multipath forwarding networks, TCP performance may not be enhanced due to frequent out-of-order segment arrivals at the receiver because of different delays among paths. To overcome this problem, we propose simple TCP modifications. At the sender, the fast retransmission threshold is adjusted taking the number of paths into consideration. 
At the receiver, the delayed acknowledgment scheme is modified such that an acknowledgment for an out-of-order segment arrival is delayed in the same way for the in-order one. The number of unnecessary retransmissions and congestion window reductions is diminished, which is verified by extensive simulations. In flow-level multipath forwarding networks, hashing is used at routers to select outgoing link of a packet. Here, we show by simulations that TCP performance is increased in proportion to the number of paths regardless of delay differences. <s> BIB003
The network topology used for the study is shown in Fig. 6. It involves two end-systems (S and D) and two routers (R1 and R2). The path between R1 and R2 models the underlying network path connecting R1 and R2. A transmission path usually consists of multiple hops. It has been shown BIB002 that the average hop-count for an Internet path is 16.2. The central limit theorem suggests that the end-to-end delay over a multihop path, which is the sum of a large number of independent hop-delays, is approximately normally distributed. To simulate packet reordering (such as that caused by route fluttering), we repeatedly change the R1-R2 path delay according to a truncated normal distribution such that every path delay sample is at least 50 ms. The mean and standard deviation of the path delay are (200α + 50) ms and 200α/3 ms, respectively, where α is the path delay factor ranging from 0 to 2 in our study. A larger α will induce more variation in the path delay, thereby increasing the degree of packet reordering. The time interval between two successive changes in the path delay, denoted as the interswitching time, dictates the frequency of the reordering events. In our simulation study, the interswitching time is exponentially distributed with mean 50 ms or 250 ms. The smaller the interswitching time is, the more frequently reordering events are produced, and vice versa. The simulation parameters are summarized in Table 1. Our simulation study has been performed using the Network Simulator (ns) Version 2.29. Except for the Lee-Park-Choi algorithms BIB003 and the Paxson algorithm BIB001 , the program codes of all algorithms under study are ported to the same version of the simulator for fair comparison. The Lee-Park-Choi algorithms are engineered for packet-level multipath routing, whereas the Paxson algorithm is merely an outline of thought. Hence, they are not considered for this simulation study. The ported program codes and simulation scripts can be obtained at http://www.eee.hku.hk/~kcleung/research/TCP_reordering_survey.html. A single, long-lived TCP flow from S to D is simulated for 1,100 seconds. We take the goodput of a flow, which represents the rate of useful data (that can be acknowledged cumulatively) delivered to the destination successfully, as the performance metric of the surveyed algorithms. For each simulation run, the statistics for computing the performance metric are collected after the trial period of the first 100 simulated seconds. A total of 30 runs have been done to compute an average value of the performance metric, and a 95 percent confidence interval for each average value of the metric is also calculated. The quality of an algorithm depends on how well the goodput of the flow can be sustained with various degrees of packet reordering.
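To make the delay model reproducible, here is a small generator in this spirit (ours, not the ns-2 scripts; alpha is the path delay factor α just defined):

```python
# Sketch of the path-delay model described above: redraw the R1-R2
# delay from a normal distribution with mean (200*alpha + 50) ms and
# standard deviation 200*alpha/3 ms, truncated so that every sample is
# at least 50 ms; inter-switching times are exponential.
import random

def path_delay_ms(alpha):
    while True:
        d = random.gauss(200 * alpha + 50, 200 * alpha / 3)
        if d >= 50:
            return d

def next_interswitch_ms(mean=50):
    return random.expovariate(1.0 / mean)

random.seed(1)
print([round(path_delay_ms(1.0)) for _ in range(3)])
print(round(next_interswitch_ms(), 1))
```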
An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Simulation Results <s> This paper uses simulations to explore the benefits of adding selective acknowledgments (SACK) and selective repeat to TCP. We compare Tahoe and Reno TCP, the two most common reference implementations for TCP, with two modified versions of Reno TCP. The first version is New-Reno TCP, a modified version of TCP without SACK that avoids some of Reno TCP's performance problems when multiple packets are dropped from a window of data. The second version is SACK TCP, a conservative extension of Reno TCP modified to use the SACK option being proposed in the Internet Engineering Task Force (IETF). We describe the congestion control algorithms in our simulated implementation of SACK TCP and show that while selective acknowledgments are not required to solve Reno TCP's performance problems when multiple packets are dropped, the absence of selective acknowledgments does impose limits to TCP's ultimate performance. In particular, we show that without selective acknowledgments, TCP implementations are constrained to either retransmit at most one dropped packet per round-trip time, or to retransmit packets that might have already been successfully delivered. <s> BIB001 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Simulation Results <s> We propose an enhancement to TCP's error recovery scheme, which we call the Eifel algorithm. It eliminates the retransmission ambiguity, thereby solving the problems caused by spurious timeouts and spurious fast retransmits. It can be incrementally deployed as it is backwards compatible and does not change TCP's congestion control semantics. In environments where spurious retransmissions occur frequently, the algorithm can improve the end-to-end throughput by several tens of percent. An exact quantification is, however, highly dependent on the path characteristics over time. The Eifel algorithm finally makes TCP truly wireless-capable without the need for proxies between the end points. Another key novelty is that the Eifel algorithm provides for the implementation of a more optimistic retransmission timer because it reduces the penalty of a spurious timeout to a single (in the common case) spurious retransmission. <s> BIB002 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Simulation Results <s> In a Mobile Ad Hoc Network (MANET), temporary link failures and route changes happen frequently. With the assumption that all packet losses are due to congestion, TCP performs poorly in such environment. While there has been some research on improving TCP performance over MANET, most of them require feedback from the network or the lower layer. In this research, we explore a new approach to improve TCP performance by detecting and responding to out-of-order packet delivery events, which are the results of frequent route changes. In our simulation study, this approach had achieved on average 50% performance improvement, without requiring feedback from the network or the lower layer. <s> BIB003 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Simulation Results <s> Previous research indicates that packet reordering is not a rare event on some Internet paths. 
Reordering can cause performance problems for TCP's fast retransmission algorithm, which uses the arrival of duplicate acknowledgments to detect segment loss. Duplicate acknowledgments can be caused by the loss of a segment or by the reordering of segments by the network. In this paper we illustrate the impact of reordering on TCP performance. In addition, we show the performance of a conservative approach to "undo" the congestion control state changes made in conjunction with spurious retransmissions. Finally, we propose several alternatives to dynamically make the fast retransmission algorithm more tolerant of the reordering observed in the network and assess these algorithms. <s> BIB004 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Simulation Results <s> TCP performs poorly on paths that reorder packets significantly, where it misinterprets out-of-order delivery as packet loss. The sender responds with a fast retransmit though no actual loss has occurred. These repeated false fast retransmits keep the sender's window small, and severely degrade the throughput it attains. Requiring nearly in-order delivery needlessly restricts and complicates Internet routing systems and routers. Such beneficial systems as multi-path routing and parallel packet switches are difficult to deploy in a way that preserves ordering. Toward a more reordering-tolerant Internet architecture, we present enhancements to TCP that improve the protocol's robustness to reordered and delayed packets. We extend the sender to detect and recover from false fast retransmits using DSACK information, and to avoid false fast retransmits proactively, by adaptively varying dupthresh. Our algorithm is the first that adaptively balances increasing dupthresh, to avoid false fast retransmits, and limiting the growth of dupthresh, to avoid unnecessary timeouts. Finally, we demonstrate that TCP's RTO estimator tolerates delayed packets poorly, and present enhancements to it that ensure it is sufficiently conservative, without using timestamps or additional TCP header bits. Our simulations show that these enhancements significantly improve TCP's performance over paths that reorder or delay packets. <s> BIB005 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Simulation Results <s> In this paper, we propose and evaluate TCP-DCR. TCP-DCR makes simple modifications to the TCP congestion control algorithm to make it more robust to non-congestion events. The key idea here is to delay the congestion response of TCP for a short interval of time τ, thereby creating room for local recovery mechanisms to handle any non-congestion events that may have occurred. If at the end of the delay τ, the event is not handled, then it is treated as a congestion loss. We evaluate TCP-DCR through analysis and simulations. The evaluation is done for three scenarios — a wireless network with channel errors, a wired network with packet reordering and a network with zero non-congestion events. The simulation results show that significant performance improvements can be achieved by using TCP-DCR in the presence of non-congestion events with zero or marginal impact in the absence of non-congestion events. TCP-DCR remains fair to the native implementations of TCP that respond to congestion immediately after receiving three dupacks.
TCP-DCR is a simple, effective scheme providing a unified solution to several problems with minimal implementation overhead. <s> BIB006 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Simulation Results <s> Numerous studies have shown that packet reordering is common, especially in networks where there is high degree of parallelism and different link speeds. Reordering of packets decreases the TCP performance of a network, mainly because it leads to overestimation of the congestion of the network. We consider wired networks and analyze the performance of such networks when reordering of packets occurs. We propose a proactive solution that could significantly improve the performance of the network when reordering of packets occurs. We report results of our simulation experiments, which support this claim. Our solution is based on enabling the senders to distinguished between dropped packets and reordered packets. <s> BIB007 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Simulation Results <s> In this paper, we propose a simple algorithm to adaptively adjust the value of dupthresh, the duplicate acknowledgement threshold that triggers the transmission control protocol (TCP) fast retransmission algorithm, to improve the TCP performance in a network environment with persistent packet reordering. Our algorithm uses an exponentially weighted moving average (EWMA) and the mean deviation of the lengths of the reordering events reported by a TCP receiver with the duplicate selective acknowledgement (DSACK) extension to estimate the value of dupthresh. We also apply an adaptive upper bound on dupthresh to avoid the retransmission timeout events. In addition, our algorithm includes a mechanism to exponentially reduce dupthresh when the retransmission timer expires. With these mechanisms, our algorithm is capable of converging to and staying at a near-optimal interval of dupthresh. The simulation results show that our algorithm improves the protocol performance significantly with minimal overheads, achieving a greater throughput and fewer false fast retransmissions. <s> BIB008 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Simulation Results <s> Most standard implementations of TCP perform poorly when packets are reordered. In this paper, we propose a new version of TCP that maintains high throughput when reordering occurs and yet, when packet reordering does not occur, is friendly to other versions of TCP. The proposed TCP variant, or TCP-PR, does not rely on duplicate acknowledgments to detect a packet loss. Instead, timers are maintained to keep track of how long ago a packet was transmitted. In case the corresponding acknowledgment has not yet arrived and the elapsed time since the packet was sent is larger than a given threshold, the packet is assumed lost. Because TCP-PR does not rely on duplicate acknowledgments, packet reordering (including out-or-order acknowledgments) has no effect on TCP-PR's performance. Through extensive simulations, we show that TCP-PR performs consistently better than existing mechanisms that try to make TCP more robust to packet reordering. In the case that packets are not reordered, we verify that TCP-PR maintains the same throughput as typical implementations of TCP (specifically, TCP-SACK) and shares network resources fairly. 
Furthermore, TCP-PR only requires changes to the TCP sender side making it easier to deploy. <s> BIB009
The results are provided in two sets. The first set examines the effect of forward-path reordering on the performance of TCP implemented with the various surveyed schemes. The reverse path from R2 to R1 is an in-order channel with a constant delay of 50 ms, i.e., its path delay factor is 0. Fig. 7 shows the goodput of the flow when the path delay factor varies between 0 and 2. Except for TCP-PR BIB009 , the goodput generally drops as the path delay factor increases from 0 to 2. A larger value of the path delay factor implies a larger mean and standard deviation of the path delay. This results in a higher degree of packet reordering. Hence, it is more likely to trigger spurious fast retransmissions. In addition, the goodput ordinarily soars when the path delay rises from 50 ms to 250 ms, because the reordering events occur less often. The algorithms for threshold adjustment and those for the temporal approach generally perform better than those for state reconciliation. The latter class of algorithms is only able to recover the congestion state just before a congestion response is taken. Hence, these algorithms do not alleviate performance problems due to persistent and substantial segment reordering. By suspending the congestion response for a certain time period instead of initiating state recovery upon detecting a spurious fast retransmission, TCP-DOOR BIB003 outperforms DSACK TCP , the Eifel algorithm BIB002 , and SACK TCP BIB001 by at least 89.4 percent in connection goodput when the path delay factor is two. The algorithms for threshold adjustment and those for the temporal approach can help TCP reduce spurious retransmissions due to segment reordering, thereby maintaining a larger congestion window and sustaining a higher connection goodput. The Leung-Ma algorithm BIB008 , RR-TCP BIB005 , and TCP-DCR BIB006 give similar performance. They outperform the Blanton-Allman algorithms BIB004 and RN-TCP BIB007 when the path delay factor is larger than a certain value, say, one, since they provide effective mechanisms to either dynamically adapt to the reordering conditions in the network or wait sufficiently long to avoid triggering fast retransmissions unnecessarily. TCP-PR sustains a good connection goodput at various levels of packet reordering, since its RTT and RTO estimators are very effective in shielding it from the effects of packet reordering. The second set investigates the effect of reverse-path reordering on TCP performance. As in the first set, the forward path from R1 to R2 is an in-order channel with a constant delay of 50 ms so that no packet reordering is possible in the forward path. Fig. 8 exhibits the goodput of the flow when the path delay factor varies between 0 and 2. Except for TCP-PR, the connection goodput falls as the path delay factor increases from 0 to 2. All algorithms except TCP-PR perform more or less the same. When the path delay factor is large, TCP-PR can still maintain a high connection goodput since its RTT and RTO estimators are highly effective in masking any adverse effect of packet reordering. However, the other surveyed algorithms are not very effective in maintaining the connection goodput in the presence of reverse-path reordering, because reordered ACKs can lead to the loss of ACK-clocking and burst injection.
An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Integrated Solution for All Types of Noncongestion Loss <s> Optimizing TCP (transport layer) for mobility has been researched extensively. We present a brief summary of existing results which indicates that most schemes require intermediaries (such as base stations) to monitor the TCP traffic and actively participate in flow control in order to enhance performance. Although these methods simulate end-to-end semantics, they do not comprise true end-to-end signaling. As a result, these techniques are not applicable when the IP payload is encrypted. For instance IPSEC, which is expected to be standard under IPv6, encrypts the entire IP payload making it impossible for intermediaries to monitor TCP traffic unless those entities are part of the security association. In addition, these schemes require changes (in the TCP/IP code) at intermediate nodes making it difficult for the mobile clients to inter-operate with the existing infrastructure. In this paper we explore the "freeze-TCP" mechanism which is a true end-to-end scheme and does not require the involvement of any intermediaries (such as base stations) for flow control. Furthermore, this scheme does not require any changes on the "sender side" or intermediate routers; changes in TCP code are restricted to the mobile client side, making it possible to fully inter-operate with the existing infrastructure. We then outline a method which integrates the best attributes of freeze-TCP and some existing solutions. Performance results highlight the importance of pro-active action/signaling by the mobile-host. The data indicate that in most cases, simply reacting to disconnections tends to yield lower performance than pro-active mechanisms such as freeze-TCP. <s> BIB001 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Integrated Solution for All Types of Noncongestion Loss <s> TCP Westwood (TCPW) is a sender-side modification of the TCP congestion window algorithm that improves upon the performance of TCP Reno in wired as well as wireless networks. The improvement is most significant in wireless networks with lossy links. In fact, TCPW performance is not very sensitive to random errors, while TCP Reno is equally sensitive to random loss and congestion loss and cannot discriminate between them. Hence, the tendency of TCP Reno to overreact to errors. An important distinguishing feature of TCP Westwood with respect to previous wireless TCP "extensions" is that it does not require inspection and/or interception of TCP packets at intermediate (proxy) nodes. Rather, TCPW fully complies with the end-to-end TCP design principle. The key innovative idea is to continuously measure at the TCP sender side the bandwidth used by the connection via monitoring the rate of returning ACKs. The estimate is then used to compute congestion window and slow start threshold after a congestion episode, that is, after three duplicate acknowledgments or after a timeout. The rationale of this strategy is simple: in contrast with TCP Reno which "blindly" halves the congestion window after three duplicate ACKs, TCP Westwood attempts to select a slow start threshold and a congestion window which are consistent with the effective bandwidth used at the time congestion is experienced. We call this mechanism faster recovery. 
The proposed mechanism is particularly effective over wireless links where sporadic losses due to radio channel problems are often misinterpreted as a symptom of congestion by current TCP schemes and thus lead to an unnecessary window reduction. Experimental studies reveal improvements in throughput performance, as well as in fairness. In addition, friendliness with TCP Reno was observed in a set of experiments showing that TCP Reno connections are not starved by TCPW connections. Most importantly, TCPW is extremely effective in mixed wired and wireless networks where throughput improvements of up to 550% are observed. Finally, TCPW performs almost as well as localized link layer approaches such as the popular Snoop scheme, without incurring the overhead of a specialized link layer protocol. <s> BIB002 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Integrated Solution for All Types of Noncongestion Loss <s> In this paper, we propose and evaluate TCP-DCR. TCP-DCR makes simple modifications to the TCP congestion control algorithm to make it more robust to non-congestion events. The key idea here is to delay the congestion response of TCP for a short interval of time τ, thereby creating room for local recovery mechanisms to handle any non-congestion events that may have occurred. If at the end of the delay t, the event is not handled, then it is treated as a congestion loss. We evaluate TCP-DCR through analysis and simulations. The evaluation is done for three scenarios — a wireless network with channel errors, a wired network with packet reordering and a network with zero non-congestion events. The simulation results show that significant performance improvements can be achieved by using TCP-DCR in the presence of non-congestion events with zero or marginal impact in the absence of non-congestion events. TCP-DCR remains fair to the native implementations of TCP that respond to congestion immediately after receiving three dupacks. TCP-DCR is a simple, effective scheme providing a unified solution to several problems with minimal implementation overhead. <s> BIB003 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Integrated Solution for All Types of Noncongestion Loss <s> Wireless access networks in the form of wireless local area networks, home networks, and cellular networks are becoming an integral part of the Internet. Unlike wired networks, random packet loss due to bit errors is not negligible in wireless networks, and this causes significant performance degradation of transmission control protocol (TCP). We propose and study a novel end-to-end congestion control mechanism called TCP Veno that is simple and effective for dealing with random packet loss. A key ingredient of Veno is that it monitors the network congestion level and uses that information to decide whether packet losses are likely to be due to congestion or random bit errors. Specifically: (1) it refines the multiplicative decrease algorithm of TCP Reno-the most widely deployed TCP version in practice-by adjusting the slow-start threshold according to the perceived network congestion level rather than a fixed drop factor and (2) it refines the linear increase algorithm so that the connection can stay longer in an operating region in which the network bandwidth is fully utilized. 
Based on extensive network testbed experiments and live Internet measurements, we show that Veno can achieve significant throughput improvements without adversely affecting other concurrent TCP connections, including other concurrent Reno connections. In typical wireless access networks with 1% random packet loss rate, throughput improvement of up to 80% can be demonstrated. A salient feature of Veno is that it modifies only the sender-side protocol of Reno without changing the receiver-side protocol stack. <s> BIB004 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Integrated Solution for All Types of Noncongestion Loss <s> Currently, a TCP sender considers all losses as congestion signals and reacts to them by throttling its sending rate. With Internet becoming more heterogeneous with more and more wireless error-prone links, a TCP connection may unduly throttle its sending rate and experience poor performance over paths experiencing random losses unrelated to congestion. The problem of distinguishing congestion losses from random losses is particularly hard when congestion is light: congestion losses themselves appear to be random. The key idea is to "de-randomize" congestion losses. This paper proposes a simple biased queue management scheme that "de-randomizes" congestion losses and enables a TCP receiver to diagnose accurately the cause of a loss and inform the TCP sender to react appropriately. Bounds on the accuracy of distinguishing wireless losses and congestion losses are analytically established and validated through simulations. Congestion losses are identified with an accuracy higher than 95% while wireless losses are identified with an accuracy higher than 75%. A closed form is derived for the achievable improvement by TCP endowed with a discriminator with a given accuracy. Simulations confirm this closed form. TCP-Casablanca, a TCP-Newreno endowed with the proposed discriminator at the receiver, yields through simulations an improvement of more than 100% on paths with low levels of congestion and about 1% random wireless packet loss rates. TCP-Ifrane, a sender-based TCP-Casablanca yields encouraging performance improvement. <s> BIB005
Packet reordering is merely one type of noncongestion loss that TCP has to deal with. When a TCP connection is established over some wireless networks, it has to deal with other types of noncongestion packet loss, including transmission loss over wireless links and disconnection loss due to host or network mobility. There has been some research on enhancing TCP for such noncongestion wireless loss BIB003 , BIB001 , , BIB002 , BIB004 , BIB005 . TCP-DCR BIB003 is so far the only work that deals with performance problems due to both packet reordering and reliable link-layer retransmissions. However, all other proposed techniques for dealing with noncongestion wireless loss are mainly based on the standard TCP protocol and do not generally take packet reordering into account. This means that it might not be possible to obtain an effective solution for dealing with both noncongestion wireless loss and packet reordering by simply bundling together the techniques separately designed for each. Hence, further study is needed to devise an integrated TCP solution that can handle all types of noncongestion loss in wired/wireless networks with packet reordering.
An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Quantitative Assessment on Causes of Packet Reordering <s> We discuss findings from a large-scale study of Internet packet dynamics conducted by tracing 20,000 TCP bulk transfers between 35 Internet sites. Because we traced each 100 Kbyte transfer at both the sender and the receiver, the measurements allow us to distinguish between the end-to-end behaviors due to the different directions of the Internet paths, which often exhibit asymmetries. We characterize the prevalence of unusual network events such as out-of-order delivery and packet corruption; discuss a robust receiver-based algorithm for estimating "bottleneck bandwidth" that addresses deficiencies discovered in techniques based on "packet pair"; investigate patterns of packet loss, finding that loss events are not well-modeled as independent and, furthermore, that the distribution of the duration of loss events exhibits infinite variance; and analyze variations in packet transit delays as indicators of congestion periods, finding that congestion periods also span a wide range of time scales. <s> BIB001 </s> An Overview of Packet Reordering in Transmission Control Protocol (TCP): Problems, Solutions, and Challenges <s> Quantitative Assessment on Causes of Packet Reordering <s> It is a widely held belief that packet reordering in the Internet is a pathological behavior, or more precisely, that it is an uncommon behavior caused by incorrect or malfunctioning network components. Some studies of Internet traffic have reported seeing occasional packet reordering events and ascribed these events to "route fluttering", router "pauses" or simply to broken equipment. We have found, however, that parallelism in Internet components and links is causing packet reordering under normal operation and that the incidence of packet reordering appears to be substantially higher than previously reported. More importantly, we observe that in the presence of massive packet reordering transmission control protocol (TCP) performance can be profoundly effected. Perhaps the most disturbing observation about TCP's behavior is that large scale and largely random reordering on the part of the network can lead to self-reinforcingly poor performance from TCP. <s> BIB002
Recent studies BIB002 , BIB001 have discussed the occurrence and causes of packet reordering. Substantial quantitative results have been provided to show that packet reordering occurs under normal operation in packet-switching networks. However, the discussion of the causes of packet reordering has been somewhat qualitative, without empirical results to infer which causes are more likely to occur and what impact they have on packet reordering. Thus, a quantitative assessment of the causes of packet reordering warrants further investigation.
A Survey on Tensor Techniques and Applications in Machine Learning <s> I. INTRODUCTION <s> This paper aims to take general tensors as inputs for supervised learning. A supervised tensor learning (STL) framework is established for convex optimization based learning techniques such as support vector machines (SVM) and minimax probability machines (MPM). Within the STL framework, many conventional learning machines can be generalized to take n/sup th/-order tensors as inputs. We also study the applications of tensors to learning machine design and feature extraction by linear discriminant analysis (LDA). Our method for tensor based feature extraction is named the tenor rank-one discriminant analysis (TR1DA). These generalized algorithms have several advantages: 1) reduce the curse of dimension problem in machine learning and data mining; 2) avoid the failure to converge; and 3) achieve better separation between the different categories of samples. As an example, we generalize MPM to its STL version, which is named the tensor MPM (TMPM). TMPM learns a series of tensor projections iteratively. It is then evaluated against the original MPM. Our experiments on a binary classification problem show that TMPM significantly outperforms the original MPM. <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> I. INTRODUCTION <s> This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psycho-metrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors. <s> BIB002 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> I. INTRODUCTION <s> There has been growing interest in developing more effective learning machines for tensor classification. At present, most of the existing learning machines, such as support tensor machine (STM), involve nonconvex optimization problems and need to resort to iterative techniques. Obviously, it is very time-consuming and may suffer from local minima. In order to overcome these two shortcomings, in this paper, we present a novel linear support higher-order tensor machine (SHTM) which integrates the merits of linear C-support vector machine (C-SVM) and tensor rank-one decomposition. Theoretically, SHTM is an extension of the linear C-SVM to tensor patterns. When the input patterns are vectors, SHTM degenerates into the standard C-SVM. A set of experiments is conducted on nine second-order face recognition datasets and three third-order gait recognition datasets to illustrate the performance of the proposed SHTM. 
The statistic test shows that compared with STM and C-SVM with the RBF kernel, SHTM provides significant performance gain in terms of test accuracy and training speed, especially in the case of higher-order tensors. <s> BIB003 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> I. INTRODUCTION <s> A multiway Fisher Discriminant Analysis (MFDA) formulation is presented in this paper. The core of MFDA relies on the structural constraint imposed to the discriminant vectors in order to account for the multiway structure of the data. This results in a more parsimonious model than that of Fisher Discriminant Analysis (FDA) performed on the unfolded data table. Moreover, computational and overfitting issues that occur with high dimensional data are better controlled. MFDA is applied to predict the long term recovery of patients after traumatic brain injury from multi-modal brain Magnetic Resonance Imaging. As compared to FDA, MFDA clearly tracks down the discrimination areas within the white matter region of the brain and provides a ranking of the contribution of the neuroimaging modalities. Based on cross validation, the accuracy of MFDA is equal to 77 % against 75 % for FDA. <s> BIB004 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> I. INTRODUCTION <s> Modern applications in engineering and data science are increasinglybased on multidimensional data of exceedingly high volume, variety,and structural richness. However, standard machine learning algorithmstypically scale exponentially with data volume and complexityof cross-modal couplings - the so called curse of dimensionality -which is prohibitive to the analysis of large-scale, multi-modal andmulti-relational datasets. Given that such data are often efficientlyrepresented as multiway arrays or tensors, it is therefore timely andvaluable for the multidisciplinary machine learning and data analyticcommunities to review low-rank tensor decompositions and tensor networksas emerging tools for dimensionality reduction and large scaleoptimization problems. Our particular emphasis is on elucidating that,by virtue of the underlying low-rank approximations, tensor networkshave the ability to alleviate the curse of dimensionality in a numberof applied areas. In Part 1 of this monograph we provide innovativesolutions to low-rank tensor network decompositions and easy to interpretgraphical representations of the mathematical operations ontensor networks. Such a conceptual insight allows for seamless migrationof ideas from the flat-view matrices to tensor network operationsand vice versa, and provides a platform for further developments, practicalapplications, and non-Euclidean extensions. It also permits theintroduction of various tensor network operations without an explicitnotion of mathematical expressions, which may be beneficial for manyresearch communities that do not directly rely on multilinear algebra.Our focus is on the Tucker and tensor train TT decompositions andtheir extensions, and on demonstrating the ability of tensor networksto provide linearly or even super-linearly e.g., logarithmically scalablesolutions, as illustrated in detail in Part 2 of this monograph. <s> BIB005 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> I. INTRODUCTION <s> In recent years, a class of dictionaries have been proposed for multidimensional (tensor) data representation that exploit the structure of tensor data by imposing a Kronecker structure on the dictionary underlying the data. 
In this work, a novel algorithm called “STARK” is provided to learn Kronecker structured dictionaries that can represent tensors of any order. By establishing that the Kronecker product of any number of matrices can be rearranged to form a rank-1 tensor, we show that Kronecker structure can be enforced on the dictionary by solving a rank-1 tensor recovery problem. Because rank-1 tensor recovery is a challenging nonconvex problem, we resort to solving a convex relaxation of this problem. Empirical experiments on synthetic and real data show promising results for our proposed algorithm. <s> BIB006 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> I. INTRODUCTION <s> Functional magnetic resonance imaging (fMRI) has increasingly come to dominate brain mapping research, as it provides a dynamic view of brain matter. Feature selection or extraction methods play an important role in the successful application of machine learning techniques to classifying fMRI data by appropriately reducing the dimensionality of the data. While whole-brain fMRI data contains large numbers of voxels, the curse of dimensionality problem may limit the feature selection/extraction and classification performance of traditional methods. In this paper, we propose a novel framework based on a tensor neural network (TensorNet) to extract the essential and discriminative features from the whole-brain fMRI data. The tensor train model was employed to construct a simple and shallow neural network and compress a large number of network weight parameters. The proposed framework can avoid the curse of dimensionality problem, and allow us to extract effective patterns from the whole-brain fMRI data. Furthermore, it reveals a new perspective for analyzing complex fMRI data with a large numbers of voxels, through compressing the number of parameters in a neural network. Experimental results confirmed that our proposed classification framework based on TensorNet outperforms traditional methods based on an SVM classifier for multi-class fMRI data. <s> BIB007 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> I. INTRODUCTION <s> The multispectral remote sensing image (MS-RSI) is blurred existing multispectral camera due to various hardware limitations. In this paper, we propose a novel structural compact core tensor dictionary learning (SCCTDL) model for MS-RSI deblurring. First, the multispectral patch is modeled by three-order tensor and high-order singular value decomposition is applied to the tensor. Then the task of MS-RSI deblurring is formulated as a minimum sparse core tensor estimation problem. To improve the accuracy of core tensor coding, the core tensor estimation based on the structural compact principle is introduced into the SCCTDL model to exploit abundant structural similarity in image. Experimental results suggest that our method outperforms several existing MS-RSI deblurring methods in both subjective image quality and visual perception. <s> BIB008 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> I. INTRODUCTION <s> In the era of data science, a huge amount of data has emerged in the form of tensors. In many applications, the collected tensor data are incomplete with missing entries, which affects the analysis process. 
In this paper, we investigate a new method for tensor completion, in which a low-rank tensor approximation is used to exploit the global structure of data, and sparse coding is used for elucidating the local patterns of data. Regarding the characterization of low-rank structures, a weighted nuclear norm for the tensor is introduced. Meanwhile, an orthogonal dictionary learning process is incorporated into sparse coding for more effective discovery of the local details of data. By simultaneously using the global patterns and local cues, the proposed method can effectively and efficiently recover the lost information of incomplete tensor data. The capability of the proposed method is demonstrated with several experiments on recovering MRI data and visual data, and the experimental results have shown the excellent performance of the proposed method in comparison with recent related methods. <s> BIB009
''Tensor'' was first introduced by William Rowan Hamilton in 1846 and later became known to scientists through the publication of Levi-Civita's book The Absolute Differential Calculus . Because of its structured representation of data and its ability to reduce the complexity of multidimensional arrays, the tensor has gradually been applied in various fields, such as dictionary learning (Ghassemi et al.) BIB006 , magnetic resonance imaging (MRI) (Xu et al.) BIB007 , spectral data classification (Makantasis et al.) , and image deblurring (Geng et al.) BIB008 . When traditional vector-valued data is extended to tensor-valued data, traditional vector-based algorithms no longer work. Consequently, some researchers have extended traditional vector-based machine learning algorithms to tensors, such as the support tensor machine (STM) (Tao et al. BIB001 ; Biswas and Milanfar ; Hao et al. BIB003 ), tensor Fisher discriminant analysis (Lechuga) BIB004 , tensor regression (Hoa et al.) , tensor completion (Du et al.) BIB009 , and so on. Recently, a series of new tensor-based algorithms have been widely used in biomedicine and image processing. Compared with traditional vector-based algorithms, tensor-based algorithms can achieve lower computational complexity and better accuracy. With these tensor-based algorithms, high-dimensional problems can be solved effectively, and accuracy can be improved without destroying the data structure. The key references for this survey are (Cichocki et al.) BIB005 and (Kolda and Bader) BIB002 . The main purpose of this survey is to introduce basic machine learning applications related to tensor decomposition and the tensor network model. Similar to matrix decomposition, tensor decomposition expresses a complex high-dimensional tensor as a sum of products of factor tensors or factor vectors. A tensor network decomposes a high-dimensional tensor into sparse factor matrices and low-order core tensors, which we call factors or blocks. In this way, we obtain a compressed (that is, distributed) representation of large-size data, which improves both interpretability and computational efficiency. Tensor decomposition is regarded as a sub-tensor network in this survey; that is, a tensor decomposition can be used in the same way as a tensor network. We can divide the data into related and irrelevant parts by using tensor decomposition. High-dimensional big data can be compressed many times over without breaking data correlations by using tensor decompositions (tensor networks). Moreover, tensor decomposition can be used to reduce the number of unknown parameters, after which an exact solution can be obtained by alternating iterative algorithms. We provide a general block diagram of the survey (see figure 1 ). The survey consists of two parts. In part one, we first give the basic definitions and notation of tensors in Chapter A. Then we introduce the basic operations of tensors, and the block diagrams of tensor network structures, in Chapter B. Next, we describe tensor decomposition, including several famous decompositions such as the CP (CANDECOMP/PARAFAC) decomposition, the Tucker decomposition, the tensor train decomposition, and the higher-order singular value decomposition (also known as higher-order tensor decomposition), in Chapter C. In Chapter D, we give a detailed description of the tensor train decomposition and the related algorithms.
In Chapter E, i.e., the last section of the first part, we summarize the advantages and disadvantages of these decompositions and their applications. In part two, we mainly describe tensor-based algorithms in machine learning and deep learning. In Chapter A, we introduce the application of structured tensors in data preprocessing, including tensor completion and tensor dictionary learning. In Chapter B of this part, we introduce some applications of tensors in classification, covering both algorithmic innovations and data innovations. Then, in Chapter C, we illustrate the application of tensors in regression, including tensor regression and multivariate tensor regression. At the end of part two, we explain the background of the tensor network and discuss its advantages, shortcomings, opportunities, and challenges in detail.
A Survey on Tensor Techniques and Applications in Machine Learning <s> 13) THE TENSOR TRACE <s> We show that general string-net condensed states have a natural representation in terms of tensor product states (TPSs). These TPSs are built from local tensors. They can describe both states with short-range entanglement (such as the symmetry-breaking states) and states with long-range entanglement (such as string-net condensed states with topological/quantum order). The tensor product representation provides a kind of ``mean-field'' description for topologically ordered states and could be a powerful way to study quantum phase transitions between such states. As an attempt in this direction, we show that the constructed TPSs are fixed points under a certain wave-function renormalization-group transformation for quantum states. <s> BIB001
Similar to the trace of a matrix, a tensor also has a trace. The authors of BIB001 proposed the concept of the tensor trace. Let us first look at the concept of inner indices. If a tensor has the same size along several dimensions, those same-size dimensions are called inner indices. For example, a tensor X ∈ R A×B×A has two inner indices: modes 1 and 3 are both of size A. Then, the tensor trace is defined by taking the diagonal over the inner indices and summing it out; for X ∈ R A×B×A , Tr(X) = Σ_a X(a, :, a) ∈ R B . Let us give an example with the 3rd-order tensor that we have used before.
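To make this concrete, the following is a minimal NumPy sketch of the partial-trace convention above (the variable names are purely illustrative):

import numpy as np

# Tensor trace over the two inner indices of X in R^{A x B x A}:
# Tr(X)_b = sum_a X[a, b, a], which yields a vector of length B.
A, B = 4, 3
X = np.random.default_rng(0).standard_normal((A, B, A))
trace_X = np.einsum('aba->b', X)  # diagonal over the inner indices, summed out
print(trace_X.shape)  # (3,)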
A Survey on Tensor Techniques and Applications in Machine Learning <s> 15) SHORT SUMMARY <s> Abstract The strong Kronecker product has proved a powerful new multiplication tool for orthogonal matrices. This paper obtains algebraic structure theorems and properties for this new product. The results are then applied to give new multiplication theorems for Hadamard matrices, complex Hadamard matrices and other related orthogonal matrices. We obtain complex Hadamard matrices of order 8abcd from complex Hadamard matrices of order 2a, 2b, 2c, and 2d, and complex Hadamard matrices of order 32abcdef from Hadamard matrices of orders 4a, 4b, 4c, 4d, 4e, and 4f. We also obtain a pair of disjoint amicable OD(8hn; 2hn, 2hn)s from Hadamard matrices of orders 4h and 4n, and Plotkin's result that a pair of amicable OD(4h; 2h, 2h)s and an OD(8h; 2h, 2h, 2h, 2h) can be constructed from an Hadamard matrix of order 4h as a corollary. <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 15) SHORT SUMMARY <s> This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psycho-metrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors. <s> BIB002 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 15) SHORT SUMMARY <s> A novel tensor decomposition is proposed to make it possible to identify replicating structures in complex data, such as textures and patterns in music spectrograms. In order to establish a computational framework for this paradigm, we adopt a multiway (tensor) approach. To this end, a novel tensor product is introduced, and the subsequent analysis of its properties shows a perfect match to the task of identification of recurrent structures present in the data. Out of a whole class of possible algorithms, we illuminate those derived so as to cater for orthogonal and nonnegative patterns. Simulations on texture images and a complex music sequence confirm the benefits of the proposed model and of the associated learning algorithms. <s> BIB003 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 15) SHORT SUMMARY <s> The matricized-tensor times Khatri-Rao product computation is the typical bottleneck in algorithms for computing a CP decomposition of a tensor. In order to develop high performance sequential and parallel algorithms, we establish communication lower bounds that identify how much data movement is required for this computation in the case of dense tensors. We also present sequential and parallel algorithms that attain the lower bounds and are therefore communication optimal. 
In particular, we show that the structure of the computation allows for less communication than the straightforward approach of casting the computation as a matrix multiplication operation. <s> BIB004
The formulas for tensor operations described above are relatively basic. Because tensors can be seen as generalizations of matrices and vectors, the above formulas also apply to vectors and matrices (just set the order to 1 or 2 in the formulas). Many researchers have also defined new operations, such as the strong Kronecker product (de Launey and Seberry BIB001 ; Phan et al. BIB003 ) and the mode-n Khatri-Rao product of tensors (Ballard et al.) BIB004 . Building on the Kronecker product, these two operations simply partition the operands into blocks and apply the Kronecker product blockwise, as sketched below. This chapter mainly introduces the basic calculation formulas commonly used for tensors. For many other formulas, please refer to (Kolda and Bader) BIB002 .
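As an illustration of this blockwise idea, the following NumPy sketch implements the strong Kronecker product under the convention C(r1, r3) = Σ_r2 A(r1, r2) ⊗ B(r2, r3); the function name strong_kron and the nested-list block representation are our own assumptions, not an API from the cited works.

import numpy as np

def strong_kron(A_blocks, B_blocks):
    # Strong Kronecker product of two block matrices given as nested lists:
    # C[r1][r3] = sum over r2 of kron(A[r1][r2], B[r2][r3]).
    R1, R2 = len(A_blocks), len(A_blocks[0])
    R3 = len(B_blocks[0])
    rows = []
    for r1 in range(R1):
        row = []
        for r3 in range(R3):
            block = sum(np.kron(A_blocks[r1][r2], B_blocks[r2][r3])
                        for r2 in range(R2))
            row.append(block)
        rows.append(row)
    return np.block(rows)

# Example: a 2x1 block matrix (2x2 blocks) times a 1x2 block matrix (3x3 blocks)
A = [[np.eye(2)], [np.ones((2, 2))]]
B = [[np.eye(3), np.zeros((3, 3))]]
C = strong_kron(A, B)  # a 2x2 grid of 6x6 Kronecker blocks, i.e., 12x12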
A Survey on Tensor Techniques and Applications in Machine Learning <s> Algorithm 1 <s> Analysis of high dimensional data in modern applications, such as neuroscience, text mining, spectral analysis, chemometrices naturally requires tensor decomposition methods. The Tucker decompositions allow us to extract hidden factors (component matrices) with different dimension in each mode, and investigate interactions among various modalities. The alternating least squares (ALS) algorithms have been confirmed effective and efficient in most of tensor decompositions, especially Tucker with orthogonality constraints. However, for nonnegative Tucker decomposition (NTD), standard ALS algorithms suffer from unstable convergence properties, demand high computational cost for large scale problems due to matrix inverse, and often return suboptimal solutions. Moreover they are quite sensitive with respect to noise, and can be relatively slow in the special case when data are nearly collinear. In this paper, we propose a new algorithm for nonnegative Tucker decomposition based on constrained minimization of a set of local cost functions and hierarchical alternating least squares (HALS). The developed NTD-HALS algorithm sequentially updates components, hence avoids matrix inverse, and is suitable for large-scale problems. The proposed algorithm is also regularized with additional constraint terms such as sparseness, orthogonality, smoothness, and especially discriminant. Extensive experiments confirm the validity and higher performance of the developed algorithm in comparison with other existing algorithms. <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> Algorithm 1 <s> Modern applications in engineering and data science are increasinglybased on multidimensional data of exceedingly high volume, variety,and structural richness. However, standard machine learning algorithmstypically scale exponentially with data volume and complexityof cross-modal couplings - the so called curse of dimensionality -which is prohibitive to the analysis of large-scale, multi-modal andmulti-relational datasets. Given that such data are often efficientlyrepresented as multiway arrays or tensors, it is therefore timely andvaluable for the multidisciplinary machine learning and data analyticcommunities to review low-rank tensor decompositions and tensor networksas emerging tools for dimensionality reduction and large scaleoptimization problems. Our particular emphasis is on elucidating that,by virtue of the underlying low-rank approximations, tensor networkshave the ability to alleviate the curse of dimensionality in a numberof applied areas. In Part 1 of this monograph we provide innovativesolutions to low-rank tensor network decompositions and easy to interpretgraphical representations of the mathematical operations ontensor networks. Such a conceptual insight allows for seamless migrationof ideas from the flat-view matrices to tensor network operationsand vice versa, and provides a platform for further developments, practicalapplications, and non-Euclidean extensions. 
It also permits theintroduction of various tensor network operations without an explicitnotion of mathematical expressions, which may be beneficial for manyresearch communities that do not directly rely on multilinear algebra.Our focus is on the Tucker and tensor train TT decompositions andtheir extensions, and on demonstrating the ability of tensor networksto provide linearly or even super-linearly e.g., logarithmically scalablesolutions, as illustrated in detail in Part 2 of this monograph. <s> BIB002 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> Algorithm 1 <s> Matrices have become essential data representations for many large-scale problems in data analytics, and hence matrix sketching is a critical task. Although much research has focused on improving the error/size tradeoff under various sketching paradigms, we find a simple heuristic iSVD, with no guarantees, tends to outperform all known approaches. In this paper we adapt the best performing guaranteed algorithm, FrequentDirections, in a way that preserves the guarantees, and nearly matches iSVD in practice. We also demonstrate an adversarial dataset for which iSVD performs quite poorly, but our new technique has almost no error. Finally, we provide easy replication of our studies on APT, a new testbed which makes available not only code and datasets, but also a computing platform with fixed environmental settings. <s> BIB003 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> Algorithm 1 <s> Novel parallel algorithms for tensor completion problems, with applications to recommender systems and function learning.Parallelization strategy offers greatly reduced memory requirements compared to previously published matrix equivalents.Convergence results for both alternating least squares and cyclic coordinate descent. Low-rank tensor completion addresses the task of filling in missing entries in multi-dimensional data. It has proven its versatility in numerous applications, including context-aware recommender systems and multivariate function learning. To handle large-scale datasets and applications that feature high dimensions, the development of distributed algorithms is central. In this work, we propose novel, highly scalable algorithms based on a combination of the canonical polyadic (CP) tensor format with block coordinate descent methods. Although similar algorithms have been proposed for the matrix case, the case of higher dimensions gives rise to a number of new challenges and requires a different paradigm for data distribution. The convergence of our algorithms is analyzed and numerical experiments illustrate their performance on distributed-memory architectures for tensors from a range of different applications. <s> BIB004
The CP Decomposition Algorithm of a 4th-Order Tensor
Input: the 4th-order tensor Y ∈ R I×J×K×L
Output: factor matrices A, B, C, D and the core tensor Λ
1: Initialize A, B, C, D and the CP rank R, where R ≤ min{IJ, JK, IK};
2: while the iteration limit is not reached and the algorithm has not converged do
3: Update A by least squares, A ← Y_(1) [(D ⊙ C ⊙ B)^T]† ;
4: Normalize the column vectors of A to unit length;
5: Update B by least squares, B ← Y_(2) [(D ⊙ C ⊙ A)^T]† ;
6: Normalize the column vectors of B to unit length;
7: Update C by least squares, C ← Y_(3) [(D ⊙ B ⊙ A)^T]† ;
8: Normalize the column vectors of C to unit length;
9: Update D by least squares, D ← Y_(4) [(C ⊙ B ⊙ A)^T]† ;
10: Normalize the column vectors of D to unit length;
11: Save the norms of the R column vectors of the last updated factor matrix to the core tensor Λ;
12: end while
13: return factor matrices A, B, C, D and the core tensor Λ
From the above algorithm, we can see that the key to computing the CP decomposition is to compute the Khatri-Rao product and the pseudo-inverse of the matrices. (Choi and Vishwanathan ; Karlsson et al. BIB004 ) proposed least-squares solution methods for the CP decomposition, and the detailed derivations can be found in their work.
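As a concrete illustration of Algorithm 1, here is a minimal NumPy sketch of CP-ALS for a tensor of arbitrary order. The helper names cp_als, unfold, and khatri_rao are our own; for brevity, the column norms (the entries of the core tensor Λ) are extracted once at the end rather than at every iteration as in lines 3-11 above.

import numpy as np

def unfold(T, mode):
    # Mode-n matricization (C-order flattening of the remaining modes).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(mats):
    # Column-wise Khatri-Rao product of matrices sharing the column count R.
    R = mats[0].shape[1]
    out = mats[0]
    for M in mats[1:]:
        out = np.einsum('ir,jr->ijr', out, M).reshape(-1, R)
    return out

def cp_als(Y, R, n_iter=50, seed=0):
    # Y is an N-th order ndarray, R the CP rank.
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((dim, R)) for dim in Y.shape]
    for _ in range(n_iter):
        for n in range(len(factors)):
            # Khatri-Rao of all other factors, in increasing mode order,
            # which matches the C-order unfolding used above.
            others = [factors[m] for m in range(len(factors)) if m != n]
            KR = khatri_rao(others)
            # Least-squares update via the pseudo-inverse (lines 3-10).
            factors[n] = unfold(Y, n) @ np.linalg.pinv(KR.T)
    # Extract the weights (core tensor entries) once at the end.
    lam = np.ones(R)
    for n, F in enumerate(factors):
        norms = np.linalg.norm(F, axis=0)
        factors[n] = F / norms
        lam *= norms
    return lam, factors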
A Survey on Tensor Techniques and Applications in Machine Learning <s> 3) THE HIERARCHICAL TUCKER DECOMPOSITION <s> The paper presents a new scheme for the representation of tensors which is well-suited for high-order tensors. The construction is based on a hierarchy of tensor product subspaces spanned by orthonormal bases. The underlying binary tree structure makes it possible to apply standard Linear Algebra tools for performing arithmetical operations and for the computation of data-sparse approximations. In particular, a truncation algorithm can be implemented which is based on the standard matrix singular value decomposition (SVD) method. <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 3) THE HIERARCHICAL TUCKER DECOMPOSITION <s> We define the hierarchical singular value decomposition (SVD) for tensors of order $d\geq2$. This hierarchical SVD has properties like the matrix SVD (and collapses to the SVD in $d=2$), and we prove these. In particular, one can find low rank (almost) best approximations in a hierarchical format ($\mathcal{H}$-Tucker) which requires only $\mathcal{O}((d-1)k^3+dnk)$ parameters, where $d$ is the order of the tensor, $n$ the size of the modes, and $k$ the (hierarchical) rank. The $\mathcal{H}$-Tucker format is a specialization of the Tucker format and it contains as a special case all (canonical) rank $k$ tensors. Based on this new concept of a hierarchical SVD we present algorithms for hierarchical tensor calculations allowing for a rigorous error analysis. The complexity of the truncation (finding lower rank approximations to hierarchical rank $k$ tensors) is in $\mathcal{O}((d-1)k^4+dnk^2)$ and the attainable accuracy is just 2-3 digits less than machine precision. <s> BIB002 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 3) THE HIERARCHICAL TUCKER DECOMPOSITION <s> We consider the solution of large-scale symmetric eigenvalue problems for which it is known that the eigenvectors admit a low-rank tensor approximation. Such problems arise, for example, from the discretization of high-dimensional elliptic PDE eigenvalue problems or in strongly correlated spin systems. Our methods are built on imposing low-rank (block) tensor train (TT) structure on the trace minimization characterization of the eigenvalues. The common approach of alternating optimization is combined with an enrichment of the TT cores by (preconditioned) gradients, as recently proposed by Dolgov and Savostyanov for linear systems. This can equivalently be viewed as a subspace correction technique. Several numerical experiments demonstrate the performance gains from using this technique. <s> BIB003
(Hackbusch and Kühn BIB001 ; Grasedyck BIB002 ) introduced the Hierarchical Tucker decomposition. The Hierarchical Tucker (HT) decomposition decomposes a tensor in a hierarchical way, similar to a binary tree split. It is important to note that in the HT decomposition, all core tensors must be of order at most three. In other words, the number of factor matrices connected to a core tensor cannot exceed 3. Put simply, in a tensor network diagram, a core tensor cannot have more than three lines connected to it. Also, HT decomposition model graphs cannot contain any loops. We draw diagrams of the HT decomposition of a 5th-order tensor and a 6th-order tensor so that we can understand it more intuitively (see figure 15 and figure 16 ). From figure 15 and figure 16 , we can see that the first step of the HT decomposition is to choose the dimensions to be split off. For a 5th-order tensor, we can extract any one dimension or any two dimensions, and the steps are repeated until the 5th-order tensor becomes five factor matrices. In fact, we can observe that the HT decomposition replaces the core tensor A of the Tucker decomposition with low-order interconnected kernels, thus forming a distributed tensor network. We draw the conversion between the HT decomposition and the Tucker decomposition of the 5th-order tensor (see figure 17 ).
FIGURE 15. Schematic diagram of the HT decomposition of a 5th-order tensor, in which the core tensor is split into two small-size 3rd-order tensors A 12 and A 345 , and the right core tensor is further split into the factor matrix B 3 and a smaller 3rd-order core tensor A 45 . Finally, A 12 and A 45 continue to be decomposed into the remaining four factor matrices B 1 , B 2 , B 4 , B 5 . The diagram on the right is the HT tensor network structure diagram, with the core tensor A 12345 in the original left image replaced by a connecting line.
FIGURE 16. Similar to figure 15 . Note that since the first split of the core tensor can be chosen differently, there are two kinds of decomposition. The upper diagram decomposes A 123456 according to dimensions 12 and 3456, and the lower diagram decomposes A 123456 according to dimensions 123 and 456. The results are not the same, but both are HT decompositions.
We can use the vector form of the Tucker decomposition to explain the HT decomposition network in figure 15 . In fact, the core idea is to replace the core tensor with tensors of smaller order until the original tensor is decomposed into factor matrices. In the end, the original tensor is decomposed into several interconnected 3rd-order tensors and several factor matrices. Here we have introduced the HT decomposition of the 5th-order and the 6th-order tensor. The tensor network diagrams of higher-order HT decompositions can be drawn from similar examples; for more details please refer to (Tobler [22] ; Kressner et al. BIB003 ). After the Tucker decomposition, although the size of the core tensor is reduced, the order of the core tensor is still the same as before. When the original tensor order is very large (for example, greater than 10), we usually express it with a distributed tensor network similar to the HT decomposition. That is, the order of the core tensors is not limited to 3.
According to actual needs, they can be of 4th or 5th order (see figure 18 ).
FIGURE 18. The blue rectangles represent the core tensors and the red circles represent the factor matrices. The diagram on the left is the HT decomposition tensor network diagram of an 18th-order tensor, in which small-size 4th-order core tensors are connected to each other. The diagram on the right is the HT decomposition tensor network diagram of a 20th-order tensor, in which small-size 5th-order core tensors are connected to each other.
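The recursive splitting behind figures 15-18 can be sketched in NumPy: one HT-style split groups a subset of modes, matricizes the tensor accordingly, and applies a truncated SVD, after which the same step is repeated on each part. The sketch below (the function ht_split is our own illustration, not the full HT algorithm of the cited works) performs one such split.

import numpy as np

def ht_split(X, modes_left, rank):
    # Matricize X by grouping modes_left against the remaining modes, then
    # truncate an SVD: U spans the left mode group, and the weighted right
    # factor is what the right subtree must represent next.
    N = X.ndim
    modes_right = [m for m in range(N) if m not in modes_left]
    left_size = int(np.prod([X.shape[m] for m in modes_left]))
    M = np.transpose(X, list(modes_left) + modes_right).reshape(left_size, -1)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    rest = (s[:, None] * Vt).reshape((rank,) + tuple(X.shape[m] for m in modes_right))
    return U, rest

# Example: split a 5th-order tensor into the {1,2} and {3,4,5} mode groups,
# mirroring the first split of A 12345 into A 12 and A 345 in figure 15.
X = np.random.default_rng(0).standard_normal((3, 4, 5, 6, 7))
U12, A345 = ht_split(X, modes_left=[0, 1], rank=8)
print(U12.shape, A345.shape)  # (12, 8) (8, 5, 6, 7)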
A Survey on Tensor Techniques and Applications in Machine Learning <s> 4) THE HIGHER ORDER SVD(HOSVD) DECOMPOSITION <s> We discuss a multilinear generalization of the singular value decomposition. There is a strong analogy between several properties of the matrix and the higher-order tensor decomposition; uniqueness, link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed. We investigate how tensor symmetries affect the decomposition and propose a multilinear generalization of the symmetric eigenvalue decomposition for pair-wise symmetric tensors. <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 4) THE HIGHER ORDER SVD(HOSVD) DECOMPOSITION <s> Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed---either explicitly or implicitly---to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. <s> BIB002 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 4) THE HIGHER ORDER SVD(HOSVD) DECOMPOSITION <s> We present an alternative strategy for truncating the higher-order singular value decomposition (T-HOSVD). An error expression for an approximate Tucker decomposition with orthogonal factor matrices is presented, leading us to propose a novel truncation strategy for the HOSVD, which we refer to as the sequentially truncated higher-order singular value decomposition (ST-HOSVD). This decomposition retains several favorable properties of the T-HOSVD, while reducing the number of operations required to compute the decomposition and practically always improving the approximation error. Three applications are presented, demonstrating the effectiveness of ST-HOSVD. In the first application, ST-HOSVD, T-HOSVD, and higher-order orthogonal iteration (HOOI) are employed to compress a database of images of faces. On average, the ST-HOSVD approximation was only $0.1\%$ worse than the optimum computed by HOOI, while cutting the execution time by a factor of $20$. In the second application, classification of handwritten digits, ST-HOSVD achieved a speedup factor of $50$ over T-HOSVD during the training phase, and reduced the classification time and storage costs, while not significantly affecting the classification error. The third application demonstrates the effectiveness of ST-HOSVD in compressing results from a numerical simulation of a partial differential equation. In such problems, ST-HOSVD inevitably can greatly improve the running time. 
We present an example wherein the $2$ hour $45$ minute calculation of T-HOSVD was reduced to just over one minute by ST-HOSVD, representing a speedup factor of $133$, while even improving the memory consumption. <s> BIB003
The higher-order singular value decomposition (HOSVD) of a tensor can be considered as another special form of Tucker decomposition (De Lathauwer et al.) BIB001, in which the factor matrices and the core tensor are all orthogonal. The orthogonality of the core tensor is defined as follows: the tensor slices in each mode of the tensor should be mutually orthogonal, e.g., for a 3rd-order tensor A ∈ R^{I×J×K}. In fact, the orthogonality constraints on tensors are very similar to the constraints in the matrix SVD. Analogous to the truncated SVD of a matrix, a tensor also has a truncated HOSVD (see figure 19). The first step in computing the HOSVD is to perform the mode-n matricization of the original input tensor and then use a truncated or randomized SVD to find the factor matrices (see equation 157). Once the factor matrices are obtained, the core tensor can be computed by the mode-n products A = X ×_1 B_1^T ×_2 B_2^T ··· ×_N B_N^T, where X ∈ R^{I_1×I_2×···×I_N} is the input tensor, A ∈ R^{R_1×R_2×···×R_N} is the core tensor, and B_n ∈ R^{I_n×R_n} are the factor matrices. See Algorithm 2 for details and refer to (Vannieuwenhoven et al. BIB003; Halko et al. BIB002).
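To make the two steps concrete, here is a minimal NumPy sketch of the truncated HOSVD: each factor matrix B_n comes from the leading left singular vectors of the mode-n unfolding, and the core is formed by the mode-n products with B_n^T. The helper names and the example ranks are illustrative, not from the survey.

```python
import numpy as np

def unfold(X, n):
    """Mode-n matricization: move axis n to the front and flatten the rest."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def mode_n_product(X, M, n):
    """Mode-n product X x_n M, where M has shape (J, I_n)."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(X, n, 0), axes=1), 0, n)

def truncated_hosvd(X, ranks):
    """Truncated HOSVD: factor matrices from the leading left singular
    vectors of each mode-n unfolding, core via mode-n products with B_n^T."""
    factors = []
    for n, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(X, n), full_matrices=False)
        factors.append(U[:, :r])
    core = X
    for n, B in enumerate(factors):
        core = mode_n_product(core, B.T, n)   # A = X x_1 B1^T x_2 B2^T ...
    return core, factors

X = np.random.rand(6, 7, 8)
core, factors = truncated_hosvd(X, ranks=(3, 3, 3))
X_hat = core
for n, B in enumerate(factors):                # reconstruct X_hat = A x_n B_n
    X_hat = mode_n_product(X_hat, B, n)
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```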
A Survey on Tensor Techniques and Applications in Machine Learning <s> 5: <s> Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed---either explicitly or implicitly---to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 5: <s> A new canonical polyadic (CP) decomposition method is proposed in this letter, where one factor matrix is extracted first by using any standard blind source separation (BSS) method and the remainder components are computed efficiently via sequential singular value decompositions of rank-1 matrices. The new approach provides more interpretable factors and it is extremely efficient for ill-conditioned problems. Especially, it overcomes the bottleneck problems, which often cause very slow convergence speed in CP decompositions. Simulations confirmed the validity and efficiency of the proposed method. <s> BIB002 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 5: <s> In [13], Hillar and Lim famously demonstrated that"multilinear (tensor) analogues of many efficiently computable problems in numerical linear algebra are NP-hard". Despite many recent advancements, the state-of-the-art methods for computing such `tensor analogues' still suffer severely from the curse of dimensionality. In this paper we show that the Tucker core of a tensor however, retains many properties of the original tensor, including the CP rank, the border rank, the tensor Schatten quasi norms, and the Z-eigenvalues. When the core tensor is smaller than the original tensor, this property leads to considerable computational advantages as confirmed by our numerical experiments. In our analysis, we in fact work with a generalized Tucker-like decomposition that can accommodate any full column-rank factor matrices. <s> BIB003 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 5: <s> Detecting layout hotspots is a key step in the physical verification flow. Although machine learning solutions show benefits over lithography simulation and pattern matching-based methods, it is still hard to select a proper model for large scale problems and inevitably, performance degradation occurs. To overcome these issues, in this paper we develop a deep learning framework for high performance and large scale hotspot detection. 
First, we use feature tensor generation to extract representative layout features that fit well with convolutional neural networks while keeping the spatial relationship of the original layout pattern with minimal information loss. Second, we propose a biased learning algorithm to train the convolutional neural network to further improve detection accuracy with small false alarm penalties. In addition, to simplify the training procedure and seek a better trade-off between accuracy and false alarms, we extend the original biased learning to a batch biased learning algorithm. Experimental results show that our framework outperforms previous machine learning-based hotspot detectors in both ICCAD 2012 Contest benchmarks and large scale industrial benchmarks. Source code and trained models are available at https://github.com/phdyang007/dlhsd. <s> BIB004
A_mn ← S^1_n V^T_n1; 6: end for; 7: A = X ×_1 B_1^T ×_2 B_2^T ··· ×_N B_N^T; 8: return the core tensor A and factor matrices B_n.

After performing the mode-n matricization of the tensor, if the tensor size is too large, we can also obtain the factor matrices by matrix partitioning: we divide the resulting matrix (called the unfolded matrix) X_mn into M parts, then use the eigenvalue decomposition of each part, and we can get V_mn = X_mn^T U_n (S_n)^{-1}. Thus, the computational complexity and memory consumption are decreased and the efficiency is improved to some extent by matrix partitioning; at the same time, it also alleviates the curse of dimensionality. Some researchers proposed a randomized SVD algorithm for large-size, low-rank matrices: (Halko et al.) BIB001 reduced the original input matrix to a small-size matrix by random sketching, i.e., by multiplying it with a random sampling matrix (see Algorithm 3). A_{n+1} = A_{n+1} ×_2 R_n; 6: end for; 7: for n = N to 1 do; 8: (···, A_N); 9: [Q_n, R_n] = QR-decomposition of the mode-2 canonical matricization of A_n; 10: end for; 12: end if; 13: ···. The original image X can be accurately recovered from the transformed 3D image Y. Image-based feature tensor generation generally proceeds by the following steps (see the corresponding algorithm) BIB003. We also made a picture to show the process of generating feature tensors (see the corresponding figure) BIB002. We can recover the original image by reversing the above steps. The feature tensor is highly compatible with the Convolutional Neural Network (CNN), the deep learning method commonly used on images: for general image processing, we can first compute the feature tensor of the image and then classify it with a CNN. Similar to CNN's convolutional layer, this operation reduces the size of the original image, because n and k are smaller than the size N of the original input image, which can significantly reduce computing time and memory consumption. For details, please refer to (Yang et al.) BIB004. A_n^{mn} = S_n(1:R_K, 1:R_K) V_n(:, 1:R_K)^T; 6: end for; 7: A = A_N; 8: return the core tensor A and factor matrices B_n.

When the improved HOSVD decomposition algorithm is completed, we obtain the factor matrices B_n, and we obtain low-rank approximate solutions of the original tensor X by the mode-n products of the original input tensor with the factor matrices, for i = 1, ···, N, giving N low-rank approximate solutions of X: Z_1, Z_2, ···, Z_N. We take the average of these N tensors as the best approximation of the original tensor X. After the previous steps, we first perform a zero-filling operation on the missing data of X to get the filled tensor, and then compute its approximate solution. Finally, the missing values are updated with the completion formula (the observed entries are kept and the missing ones, indicated by the mask, are taken from the approximation), where ¬ is the Boolean NOT operator (i.e., 0 ← 1, 1 ← 0). The entire tensor completion algorithm is shown in Algorithm 17.
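A minimal sketch of the completion update just described: the observed entries (mask W = 1) are kept, and the missing entries (W = 0, flipped by the Boolean NOT) are taken from the low-rank approximation. The helper name and the crude mean-fill stand-in for the HOSVD-based approximation are ours.

```python
import numpy as np

def completion_update(X_observed, mask, X_hat):
    """Keep observed entries (mask == 1) and fill missing ones (mask == 0)
    from the approximation X_hat; (1 - mask) plays the role of the Boolean NOT."""
    return mask * X_observed + (1 - mask) * X_hat

rng = np.random.default_rng(0)
X = rng.random((4, 5, 6))
mask = (rng.random(X.shape) < 0.7).astype(float)        # ~70% of entries observed
X_hat = np.full(X.shape, X[mask.astype(bool)].mean())    # crude stand-in approximation
X_filled = completion_update(X, mask, X_hat)
```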
A Survey on Tensor Techniques and Applications in Machine Learning <s> Algorithm 3 The Random SVD Decomposition Algorithm for Large-Size and Low Rank Matrices (Halko et al.) [96] <s> We discuss a multilinear generalization of the singular value decomposition. There is a strong analogy between several properties of the matrix and the higher-order tensor decomposition; uniqueness, link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed. We investigate how tensor symmetries affect the decomposition and propose a multilinear generalization of the symmetric eigenvalue decomposition for pair-wise symmetric tensors. <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> Algorithm 3 The Random SVD Decomposition Algorithm for Large-Size and Low Rank Matrices (Halko et al.) [96] <s> Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed---either explicitly or implicitly---to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. <s> BIB002 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> Algorithm 3 The Random SVD Decomposition Algorithm for Large-Size and Low Rank Matrices (Halko et al.) [96] <s> We present an alternative strategy for truncating the higher-order singular value decomposition (T-HOSVD). An error expression for an approximate Tucker decomposition with orthogonal factor matrices is presented, leading us to propose a novel truncation strategy for the HOSVD, which we refer to as the sequentially truncated higher-order singular value decomposition (ST-HOSVD). This decomposition retains several favorable properties of the T-HOSVD, while reducing the number of operations required to compute the decomposition and practically always improving the approximation error. Three applications are presented, demonstrating the effectiveness of ST-HOSVD. In the first application, ST-HOSVD, T-HOSVD, and higher-order orthogonal iteration (HOOI) are employed to compress a database of images of faces. On average, the ST-HOSVD approximation was only $0.1\%$ worse than the optimum computed by HOOI, while cutting the execution time by a factor of $20$. In the second application, classification of handwritten digits, ST-HOSVD achieved a speedup factor of $50$ over T-HOSVD during the training phase, and reduced the classification time and storage costs, while not significantly affecting the classification error. The third application demonstrates the effectiveness of ST-HOSVD in compressing results from a numerical simulation of a partial differential equation. 
In such problems, ST-HOSVD inevitably can greatly improve the running time. We present an example wherein the $2$ hour $45$ minute calculation of T-HOSVD was reduced to just over one minute by ST-HOSVD, representing a speedup factor of $133$, while even improving the memory consumption. <s> BIB003 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> Algorithm 3 The Random SVD Decomposition Algorithm for Large-Size and Low Rank Matrices (Halko et al.) [96] <s> We present a method for computing reduced-order models of parameterized partial differential equation solutions. The key analytical tool is the singular value expansion of the parameterized solution, which we approximate with a singular value decomposition of a parameter snapshot matrix. To evaluate the reduced-order model at a new parameter, we interpolate a subset of the right singular vectors to generate the reduced-order model's coefficients. We employ a novel method to select this subset that uses the parameter gradient of the right singular vectors to split the terms in the expansion, yielding a mean prediction and a prediction covariance---similar to a Gaussian process approximation. The covariance serves as a confidence measure for the reduced-order model. We demonstrate the efficacy of the reduced-order model using a parameter study of heat transfer in random media. The high-fidelity simulations produce more than 4TB of data; we compute the singular value decomposition and evaluate the reduced-or... <s> BIB004 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> Algorithm 3 The Random SVD Decomposition Algorithm for Large-Size and Low Rank Matrices (Halko et al.) [96] <s> As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data, assuming double precision. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 5000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments. <s> BIB005 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> Algorithm 3 The Random SVD Decomposition Algorithm for Large-Size and Low Rank Matrices (Halko et al.) [96] <s> The singular value decomposition (SVD) of large-scale matrices is a key tool in data analytics and scientific computing. The rapid growth in the size of matrices further increases the need for developing efficient large-scale SVD algorithms. 
Randomized SVD based on one-time sketching has been studied, and its potential has been demonstrated for computing a low-rank SVD. Instead of exploring different single random sketching techniques, we propose a Monte Carlo type integrated SVD algorithm based on multiple random sketches. The proposed integration algorithm takes multiple random sketches and then integrates the results obtained from the multiple sketched subspaces. So that the integrated SVD can achieve higher accuracy and lower stochastic variations. The main component of the integration is an optimization problem with a matrix Stiefel manifold constraint. The optimization problem is solved using Kolmogorov-Nagumo-type averages. Our theoretical analyses show that the singular vectors can be induced by population averaging and ensure the consistencies between the computed and true subspaces and singular vectors. Statistical analysis further proves a strong Law of Large Numbers and gives a rate of convergence by the Central Limit Theorem. Preliminary numerical results suggest that the proposed integrated SVD algorithm is promising. <s> BIB006 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> Algorithm 3 The Random SVD Decomposition Algorithm for Large-Size and Low Rank Matrices (Halko et al.) [96] <s> How can we analyze large-scale real-world data with various attributes? Many real-world data (e.g., network traffic logs, web data, social networks, knowledge bases, and sensor streams) with multiple attributes are represented as multi-dimensional arrays, called tensors. For analyzing a tensor, tensor decompositions are widely used in many data mining applications: detecting malicious attackers in network traffic logs (with source IP, destination IP, port-number, timestamp), finding telemarketers in a phone call history (with sender, receiver, date), and identifying interesting concepts in a knowledge base (with subject, object, relation). However, current tensor decomposition methods do not scale to large and sparse real-world tensors with millions of rows and columns and `fibers.' In this paper, we propose HaTen2, a distributed method for large-scale tensor decompositions that runs on the MapReduce framework. Our careful design and implementation of HaTen2 dramatically reduce the size of intermediate data and the number of jobs leading to achieve high scalability compared with the state-of-the-art method. Thanks to HaTen2, we analyze big real-world sparse tensors that cannot be handled by the current state of the art, and discover hidden concepts. <s> BIB007 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> Algorithm 3 The Random SVD Decomposition Algorithm for Large-Size and Low Rank Matrices (Halko et al.) [96] <s> Tensor decompositions have applications in many areas including signal processing, machine learning, computer vision and neuroscience. In this paper, we propose two new differentially private algorithms for orthogonal decomposition of symmetric tensors from private or sensitive data; these arise in applications such as latent variable models. Differential privacy is a formal privacy framework that guarantees protections against adversarial inference. We investigate the performance of these algorithms with varying privacy and database parameters and compare against another recently proposed privacy-preserving algorithm. Our experiments show that the proposed algorithms provide very good utility even while preserving strict privacy guarantees. <s> BIB008
Input: the large-size, low-rank matrix X ∈ R^{I×J}, estimated rank R, oversampling parameter P, overestimated rank R̃ = R + P, exponent of the power method q (q = 0 or 1). Output: the low-rank factors U, S, V. 3: Compute the QR decomposition of the sample matrix Y = QR; 4: calculate the small-size matrix A = Q^T X ∈ R^{R̃×J}; 5: compute the SVD of the small-size matrix A = Ũ S V^T; 6: calculate the orthogonal matrix U = Q Ũ. The advantage of using the overestimated rank is that it yields a more accurate approximation of the matrix. (Chen et al.) BIB006 improved the accuracy of the randomized SVD by integrating multiple random sketches, that is, by multiplying the input matrix X with a set of random Gaussian matrices. (Halko et al.) BIB002 used a special sampling matrix to greatly reduce the execution time of the algorithm while reducing complexity. However, for a matrix whose singular values decay slowly, this method results in a lower-accuracy SVD. Many researchers have developed a variety of algorithms to compute the HOSVD; for details, please refer to (Vannieuwenhoven et al. BIB003; Austin et al. BIB005; Constantine et al. BIB004). Compared with the truncated SVD of a standard matrix, the tensor HOSVD does not produce the best multilinear-rank approximation, but only a weaker, quasi-optimal approximation (De Lathauwer et al.) BIB001 BIB008, where X_Perfect denotes the best approximation of X. In order to find an accurate approximation of the Tucker decomposition, researchers have extended the alternating least squares method to higher-order orthogonal iteration (Jeon et al. BIB007; Austin et al. BIB005; Constantine et al. BIB004; De Lathauwer et al.). For details, please refer to Algorithm 4.
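Below is a minimal NumPy sketch in the spirit of Algorithm 3 (Halko et al.): sketch the range of X with a Gaussian test matrix of width R + P, optionally run q power iterations, and recover the SVD from the small projected matrix. The interface and defaults are our assumptions, not the paper's reference implementation.

```python
import numpy as np

def randomized_svd(X, rank, oversample=10, q=1, seed=0):
    """Randomized SVD sketch (after Halko et al.): sample the range of X with
    a Gaussian test matrix, optionally apply q power iterations, then compute
    the SVD of the small projected matrix."""
    rng = np.random.default_rng(seed)
    k = rank + oversample                       # overestimated rank R~ = R + P
    Omega = rng.standard_normal((X.shape[1], k))
    Y = X @ Omega                               # sample matrix
    for _ in range(q):                          # power iterations sharpen slow decay
        Y = X @ (X.T @ Y)
    Q, _ = np.linalg.qr(Y)                      # orthonormal basis of the range
    A = Q.T @ X                                 # small-size matrix
    U_tilde, S, Vt = np.linalg.svd(A, full_matrices=False)
    U = Q @ U_tilde
    return U[:, :rank], S[:rank], Vt[:rank]

X = np.random.standard_normal((500, 80)) @ np.random.standard_normal((80, 400))
U, S, Vt = randomized_svd(X, rank=80)
print(np.linalg.norm(X - (U * S) @ Vt) / np.linalg.norm(X))   # near zero
```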
A Survey on Tensor Techniques and Applications in Machine Learning <s> 6) THE TENSOR CROSS-APPROXIMATION <s> Motivated by numerous applications in which the data may be modeled by a variable subscripted by three or more indices, we develop a tensor-based extension of the matrix CUR decomposition. The tensor-CUR decomposition is most relevant as a data analysis tool when the data consist of one mode that is qualitatively different than the others. In this case, the tensor-CUR decomposition approximately expresses the original data tensor in terms of a basis consisting of underlying subtensors that are actual data elements and thus that have natural interpretation in terms ofthe processes generating the data. In order to demonstrate the general applicability of this tensor decomposition, we apply it to problems in two diverse domains of data analysis: hyperspectral medical image analysis and consumer recommendation system analysis. In the hyperspectral data application, the tensor-CUR decomposition is used to compress the data, and we show that classification quality is not substantially reduced even after substantial data compression. In the recommendation system application, the tensor-CUR decomposition is used to reconstruct missing entries in a user-product-product preference tensor, and we show that high quality recommendations can be made on the basis of a small number of basis users and a small number of product-product comparisons from a new user. <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 6) THE TENSOR CROSS-APPROXIMATION <s> In this article, the construction of nested bases approximations to discretizations of integral operators with oscillatory kernels is presented. The new method has log-linear complexity and generalizes the adaptive cross approximation method to high-frequency problems. It allows for a continuous and numerically stable transition from low to high frequencies. <s> BIB002 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 6) THE TENSOR CROSS-APPROXIMATION <s> We propose a new method for the efficient approximation of a class of highly oscillatory weighted integrals where the oscillatory function depends on the frequency parameter $\omega \geq 0$, typically varying in a large interval. Our approach is based, for fixed but arbitrary oscillator, on the pre-computation and low-parametric approximation of certain $\omega$-dependent prototype functions whose evaluation leads in a straightforward way to recover the target integral. The difficulty that arises is that these prototype functions consist of oscillatory integrals and are itself oscillatory which makes them both difficult to evaluate and to approximate. Here we use the quantized-tensor train (QTT) approximation method for functional $m$-vectors of logarithmic complexity in $m$ in combination with a cross-approximation scheme for TT tensors. This allows the accurate approximation and efficient storage of these functions in the wide range of grid and frequency parameters. Numerical examples illustrate the efficiency of the QTT-based numerical integration scheme on various examples in one and several spatial dimensions. <s> BIB003
Before we discuss the tensor cross-approximation, we first introduce the matrix cross approximation. (Bebendorf et al. BIB002; Khoromskij and Veit BIB003) proposed the concept of the matrix cross approximation (MCA). The main role of the MCA is to reduce the size of the original large matrix by finding a linear combination of a few of its components, thereby decreasing computational complexity and memory. These components are usually a small fraction of the original matrix. The method rests on the premise that the original matrix is highly redundant, so it can be approximated by small-size matrices with only marginal information loss. We illustrate the MCA in figure 23, from which we obtain the specific formula X ≈ ABC (plus a residual), where A ∈ R^{I×A} is a small-size matrix obtained by selecting appropriate A columns from the original matrix X, C ∈ R^{B×J} is a small-size matrix obtained by selecting appropriate B rows from X, B ∈ R^{A×B} is the small-size matrix at the intersection of the selected rows and columns, and E ∈ R^{I×J} is the residual (error) matrix. Obviously, if the elements of the error matrix are small enough, we can regard the MCA of X as a CR matrix decomposition. Note that in order to reduce the size, A ≪ J and B ≪ I, and to minimize the Frobenius norm of the residual matrix ‖E‖_F, the choices of A and B are also very important. Generally, once A and B are given, the three matrices can be obtained as shown in figure 23 (split the original matrix and then extract the three matrices from it). Another special property is that when rank(X) ≤ min(A, B), the matrix cross-approximation is exact, or the error matrix E is negligible, i.e., X = ABC. Now we extend the concept of the MCA to tensors, i.e., the tensor cross-approximation (TCA). There are usually two ways to implement TCA. 1. (Mahoney et al.) BIB001 extended MCA to the matricized form of tensor data (that is, perform the mode-n matricization of the tensor and then apply MCA).
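A minimal sketch of the cross approximation just described, assuming the row and column index sets are already chosen; following common CUR practice, we take the middle factor as the pseudoinverse of the row-column intersection block (the survey's B matrix), which makes the approximation exact when rank(X) ≤ min(A, B).

```python
import numpy as np

def cross_approximation(X, row_idx, col_idx):
    """CUR-style cross approximation X ~ A * pinv(B) * C, where A holds the
    selected columns, C the selected rows, and B their intersection block."""
    A = X[:, col_idx]                 # I x |cols|
    C = X[row_idx, :]                 # |rows| x J
    B = X[np.ix_(row_idx, col_idx)]   # |rows| x |cols| intersection
    return A @ np.linalg.pinv(B) @ C

rng = np.random.default_rng(1)
X = rng.random((100, 5)) @ rng.random((5, 80))   # a rank-5 matrix
X_hat = cross_approximation(X, row_idx=range(6), col_idx=range(6))
print(np.linalg.norm(X - X_hat))   # near zero: rank(X) <= min(|rows|, |cols|)
```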
A Survey on Tensor Techniques and Applications in Machine Learning <s> 7) THE TENSOR TRAIN AND TENSOR CHAIN DECOMPOSITION <s> For $d$-dimensional tensors with possibly large $d>3$, an hierarchical data structure, called the Tree-Tucker format, is presented as an alternative to the canonical decomposition. It has asymptotically the same (and often even smaller) number of representation parameters and viable stability properties. The approach involves a recursive construction described by a tree with the leafs corresponding to the Tucker decompositions of three-dimensional tensors, and is based on a sequence of SVDs for the recursively obtained unfolding matrices and on the auxiliary dimensions added to the initial “spatial” dimensions. It is shown how this format can be applied to the problem of multidimensional convolution. Convincing numerical examples are given. <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 7) THE TENSOR TRAIN AND TENSOR CHAIN DECOMPOSITION <s> A simple nonrecursive form of the tensor decomposition in $d$ dimensions is presented. It does not inherently suffer from the curse of dimensionality, it has asymptotically the same number of parameters as the canonical decomposition, but it is stable and its computation is based on low-rank approximation of auxiliary unfolding matrices. The new form gives a clear and convenient way to implement all basic operations efficiently. A fast rounding procedure is presented, as well as basic linear algebra operations. Examples showing the benefits of the decomposition are given, and the efficiency is demonstrated by the computation of the smallest eigenvalue of a 19-dimensional operator. <s> BIB002 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 7) THE TENSOR TRAIN AND TENSOR CHAIN DECOMPOSITION <s> Modern applications in engineering and data science are increasinglybased on multidimensional data of exceedingly high volume, variety,and structural richness. However, standard machine learning algorithmstypically scale exponentially with data volume and complexityof cross-modal couplings - the so called curse of dimensionality -which is prohibitive to the analysis of large-scale, multi-modal andmulti-relational datasets. Given that such data are often efficientlyrepresented as multiway arrays or tensors, it is therefore timely andvaluable for the multidisciplinary machine learning and data analyticcommunities to review low-rank tensor decompositions and tensor networksas emerging tools for dimensionality reduction and large scaleoptimization problems. Our particular emphasis is on elucidating that,by virtue of the underlying low-rank approximations, tensor networkshave the ability to alleviate the curse of dimensionality in a numberof applied areas. In Part 1 of this monograph we provide innovativesolutions to low-rank tensor network decompositions and easy to interpretgraphical representations of the mathematical operations ontensor networks. Such a conceptual insight allows for seamless migrationof ideas from the flat-view matrices to tensor network operationsand vice versa, and provides a platform for further developments, practicalapplications, and non-Euclidean extensions. 
It also permits theintroduction of various tensor network operations without an explicitnotion of mathematical expressions, which may be beneficial for manyresearch communities that do not directly rely on multilinear algebra.Our focus is on the Tucker and tensor train TT decompositions andtheir extensions, and on demonstrating the ability of tensor networksto provide linearly or even super-linearly e.g., logarithmically scalablesolutions, as illustrated in detail in Part 2 of this monograph. <s> BIB003
CP decomposition is a special case of Tucker decomposition, and when the core tensor of a Tucker decomposition is further decomposed into a hierarchical tree structure, it becomes the HT decomposition. The Tensor Chain (TC) decomposition is a special case of HT decomposition in which the core tensors are connected in series and aligned, i.e., every core tensor has the same number of dimensions, and at the same time all the factor matrices are identity matrices. The advantage of core tensors of the same form together with identity factor matrices is that it significantly reduces the amount of computation and facilitates subsequent optimization. The Tensor Train (TT) decomposition is also a special case of HT decomposition; (Oseledets BIB002; Oseledets and Tyrtyshnikov BIB001) first put forward the concept of TT decomposition. The only difference between the TT and TC decompositions is that in TT the first and the Nth core tensors have one dimension fewer than the intermediate N−2 core tensors. In different domains, TT decomposition has different names: in physics, the TC decomposition is referred to as the Matrix Product State (MPS) with periodic boundary conditions (PBC), and the TT decomposition as the MPS with open boundary conditions (OBC). Before we give the concrete expression, we draw a picture to give an intuitive explanation of the TT and TC decompositions (see figure 26 and figure 27). In figure 26 and figure 27, we first reshape a large-size vector and a large-size matrix into an Nth-order and a 2Nth-order small-size tensor, respectively, and then decompose them by TT or TC. We can see that the only difference between the two decompositions is that TC connects the first core tensor and the last core tensor with an additional edge R_N. We now give a concrete mathematical expression of the TT decomposition of an Nth-order tensor Y ∈ R^{I_1×I_2×I_3×···×I_N}: y_{i_1,i_2,···,i_N} = Σ_{r_1,···,r_{N−1}} a^{1}_{1,i_1,r_1} a^{2}_{r_1,i_2,r_2} ··· a^{N}_{r_{N−1},i_N,1}, where y_{i_1,i_2,···,i_N} and a^{n}_{r_{n−1},i_n,r_n} are entries of Y and of the cores A_n, respectively, and a^{r_{n−1},r_n}_n = A_n(r_{n−1}, :, r_n) ∈ R^{I_n} are tensor fibers (vectors). The formulas above correspond to the TT decomposition of a large-size vector reshaped into an Nth-order tensor (that is, figure 26). Similar to the TT decomposition of an Nth-order tensor, the TT decomposition of a 2Nth-order tensor (see figure 27) reads y_{i_1,j_1,···,i_N,j_N} = Σ_{r_1,···,r_{N−1}} a^{1}_{1,i_1,j_1,r_1} ··· a^{N}_{r_{N−1},i_N,j_N,1}, where the y's and a^{n}_{r_{n−1},i_n,j_n,r_n} are entries of Y and A_n, respectively, and A^{r_{n−1},r_n}_n = A_n(r_{n−1}, :, :, r_n) ∈ R^{I_n×J_n} are tensor slices (matrices). Similarly, a 3rd-order or higher-order large-size tensor can be decomposed by TT in a similar way (by reshaping it into a 3Nth-order or higher-order tensor). We do not give the mathematical expression of the TC decomposition separately, because it differs from the TT decomposition only in that the first and last core tensors carry an extra dimension of size R_N. There are three common expression forms: the first is the product form built from core tensor contractions, the second is the entry-wise (scalar) expression, and the third is the outer product of tensor slices or tensor fibers.
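To make the TT format concrete, the following NumPy sketch stores an Nth-order tensor as a list of 3rd-order cores A_n ∈ R^{R_{n−1}×I_n×R_n} with boundary ranks R_0 = R_N = 1, and contracts them back into the full tensor; the helper name and the example ranks are ours.

```python
import numpy as np

def tt_reconstruct(cores):
    """Contract TT cores G_n of shape (R_{n-1}, I_n, R_n), with R_0 = R_N = 1,
    back into the full Nth-order tensor."""
    full = cores[0]                               # (1, I_1, R_1)
    for G in cores[1:]:
        # contract the trailing rank index of `full` with the leading one of G
        full = np.tensordot(full, G, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))             # drop the boundary ranks

# Example: random TT cores for a 4th-order tensor with TT ranks (1, 2, 3, 2, 1)
shapes = [(1, 4, 2), (2, 5, 3), (3, 6, 2), (2, 7, 1)]
cores = [np.random.rand(*s) for s in shapes]
X = tt_reconstruct(cores)
print(X.shape)   # (4, 5, 6, 7)
```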
There are some other mathematical expressions for other uses; for example, the TT decomposition can be calculated by performing the mode-n matricization of the core tensors and then using the strong Kronecker product or tensor slices. Interested readers can refer to (Cichocki et al.) BIB003. Similar to the CP rank, we define the TT rank. Here we add a concept. We previously introduced the mode-n matricization of a tensor, but in fact there are two ways to matricize a tensor. One is to extract one dimension as the first dimension of the resulting matrix and merge the remaining N−1 dimensions into the second dimension. The other is to extract the first n dimensions of the original tensor as the first dimension of the resulting matrix and merge the remaining N−n dimensions into the second dimension. We call the latter the mode-n canonical matricization of the tensor, X_{m,c,n} ∈ R^{(I_1 I_2···I_n)×(I_{n+1}···I_N)}, where m means matricization, c means canonical, and n means mode-n; the TT rank is then the tuple of ranks of these canonical matricizations. From the mathematical expression of the TT decomposition and the definition of the TT rank, we see that the computational complexity of the TT decomposition is governed by the TT rank; thus, we need to find a suitable low-rank TT decomposition to reduce the complexity.
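Assuming the usual row-major ordering of the indices, the mode-n canonical matricization is a single reshape; a small NumPy sketch (the helper name is ours):

```python
import numpy as np

def canonical_matricization(X, n):
    """Mode-n canonical matricization X_{m,c,n}: the first n dimensions are
    merged into the rows, the remaining N - n dimensions into the columns."""
    rows = int(np.prod(X.shape[:n]))
    return X.reshape(rows, -1)

X = np.random.rand(3, 4, 5, 6)
print(canonical_matricization(X, 2).shape)   # (12, 30)
# The TT ranks of X are bounded by the ranks of these matricizations.
```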
A Survey on Tensor Techniques and Applications in Machine Learning <s> 8) THE TENSOR NETWORKS(DECOMPOSITIONS) WITH CYCLES <s> Tensor network representations of many-body quantum systems can be described in terms of quantum channels. We focus on channels associated with the multiscale entanglement renormalization ansatz tensor network that has been recently introduced to efficiently describe critical systems. Our approach allows us to compute the multiscale entanglement renormalization ansatz correspondent to the thermodynamical limit of a critical system introducing a transfer matrix formalism, and to relate the system critical exponents to the convergence rates of the associated channels. <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 8) THE TENSOR NETWORKS(DECOMPOSITIONS) WITH CYCLES <s> This article reviews recent developments in the theoretical understanding and the numerical implementation of variational renormalization group methods using matrix product states and projected entangled pair states. <s> BIB002 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 8) THE TENSOR NETWORKS(DECOMPOSITIONS) WITH CYCLES <s> We introduce a framework for characterizing Matrix Product States (MPS) and Projected Entangled Pair States (PEPS) in terms of symmetries. This allows us to understand how PEPS appear as ground states of local Hamiltonians with finitely degenerate ground states and to characterize the ground state subspace. Subsequently, we apply our framework to show how the topological properties of these ground states can be explained solely from the symmetry: We prove that ground states are locally indistinguishable and can be transformed into each other by acting on a restricted region, we explain the origin of the topological entropy, and we discuss how to renormalize these states based on their symmetries. Finally, we show how the anyonic character of excitations can be understood as a consequence of the underlying symmetries. <s> BIB003 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 8) THE TENSOR NETWORKS(DECOMPOSITIONS) WITH CYCLES <s> I present an example of how to analytically optimize a multiscale entanglement renormalization ansatz for a finite antiferromagnetic Heisenberg chain. For this purpose, a quantum-circuit representation is taken into account, and we construct the exactly entangled ground state so that a trivial IR state is modified sequentially by operating separated entangler layers (monodromy operators) at each scale. The circuit representation allows us to make a simple understanding of close relationship between the entanglement renormalization and quantum integrability. We find that the entangler should match with the $R$-matrix, not a simple unitary, and also find that the optimization leads to the mapping between the Bethe roots and the Daubechies wavelet coefficients. <s> BIB004 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 8) THE TENSOR NETWORKS(DECOMPOSITIONS) WITH CYCLES <s> Modern applications in engineering and data science are increasinglybased on multidimensional data of exceedingly high volume, variety,and structural richness. However, standard machine learning algorithmstypically scale exponentially with data volume and complexityof cross-modal couplings - the so called curse of dimensionality -which is prohibitive to the analysis of large-scale, multi-modal andmulti-relational datasets. 
Given that such data are often efficientlyrepresented as multiway arrays or tensors, it is therefore timely andvaluable for the multidisciplinary machine learning and data analyticcommunities to review low-rank tensor decompositions and tensor networksas emerging tools for dimensionality reduction and large scaleoptimization problems. Our particular emphasis is on elucidating that,by virtue of the underlying low-rank approximations, tensor networkshave the ability to alleviate the curse of dimensionality in a numberof applied areas. In Part 1 of this monograph we provide innovativesolutions to low-rank tensor network decompositions and easy to interpretgraphical representations of the mathematical operations ontensor networks. Such a conceptual insight allows for seamless migrationof ideas from the flat-view matrices to tensor network operationsand vice versa, and provides a platform for further developments, practicalapplications, and non-Euclidean extensions. It also permits theintroduction of various tensor network operations without an explicitnotion of mathematical expressions, which may be beneficial for manyresearch communities that do not directly rely on multilinear algebra.Our focus is on the Tucker and tensor train TT decompositions andtheir extensions, and on demonstrating the ability of tensor networksto provide linearly or even super-linearly e.g., logarithmically scalablesolutions, as illustrated in detail in Part 2 of this monograph. <s> BIB005
In the above sections, we briefly introduced the TT decomposition, the HT decomposition, and other tree tensor networks. Note that none of the tensor decomposition networks mentioned above contains a cycle (except TC). We also mentioned in the previous section that the TT rank usually increases with the dimension of the original data tensor to be decomposed, and for an arbitrary tree-shaped tensor network, the TT rank grows as the depth of the decomposition increases. In order to reduce the TT rank, researchers invented layered tensor networks with loops. (Verstraete et al. BIB002; Schuch et al. BIB003) proposed Projected Entangled Pair States (PEPS) and Projected Entangled Pair Operators (PEPO), respectively (see figure 28; in the figure, the blue rectangles represent core tensors, of 5th and 6th order, respectively BIB005). In these two kinds of tensor networks, the 3rd-order core tensors of the original TT decomposition are replaced with 5th- and 6th-order core tensors, respectively. They thus reduce the tensor rank at the expense of higher complexity, because the original 3rd-order core tensors rise to 5th and 6th order. Sometimes, for higher-order tensors in science and physics, the rank reduction offered by these two networks may not be enough, so researchers have proposed tensor networks with more cycles. (Giovannetti et al. BIB001; Matsueda BIB004) proposed the Honey-Comb Lattice (HCL) and the Multi-scale Entanglement Renormalization Ansatz (MERA), respectively (see figure 29), using 3rd- and 4th-order core tensors, respectively. However, as the number of cycles increases, the overall computational complexity of the network increases, i.e., we need to compute more cycles. In short, in practice the network is selected according to the needed balance between rank and complexity. Compared with the former two tensor networks with cycles, the size and dimension of the core tensors in MERA are usually smaller, so the number of unknown parameters (variables, or free parameters) is reduced, and the corresponding computational complexity is also decreased. At the same time, the MERA network with cycles can help us find the relationships and interactions between the tensor and the free parameters. In general, the main idea of these four networks is to reduce the TT rank by increasing the number of core tensors and reducing their size, usually at the cost of increased computational complexity. The advantage of small-size tensors is that they are easier to manage and reduce the number of free parameters in the network; for a single small-size tensor, the calculation is relatively simple. At the same time, due to the cycle structure, these four networks can usually describe the correlations between variables well.
A Survey on Tensor Techniques and Applications in Machine Learning <s> D. THE NATURE AND ALGORITHM OF TT DECOMPOSITION 1) BASIC OPERATIONS IN TT DECOMPOSITION <s> We study the tensor structure of two operations: the transformation of a given multidimensional vector into a multilevel Toeplitz matrix and the convolution of two given multidimensional vectors. We show that the low-rank tensor structure of the input is preserved in the output and propose efficient algorithms for these operations in the newly introduced quantized tensor train (QTT) format. Consider a $d$-dimensional $2n \times\cdots\times 2n$-vector $\boldsymbol{x}$. If it is represented elementwise, the number of parameters is $(2n)^{d}$. However, if we assume that $\boldsymbol{x}$ is given in a QTT representation with ranks bounded by $p$, the number of parameters is reduced to $\mathcal{O}\left(dp^{2} \log n\right)$. Under this assumption we show how the multilevel Toeplitz matrix generated by $\boldsymbol{x}$ can be obtained in the QTT format with ranks bounded by $2p$ in $\mathcal{O}\left(dp^{2} \log n\right)$ operations. We also describe how the convolution $\boldsymbol{x}\star\boldsymbol{y}$ of $\... <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> D. THE NATURE AND ALGORITHM OF TT DECOMPOSITION 1) BASIC OPERATIONS IN TT DECOMPOSITION <s> We discuss extended definitions of linear and multilinear operations such as Kronecker, Hadamard, and contracted products, and establish links between them for tensor calculus. Then we introduce effective low-rank tensor approximation techniques including Candecomp/Parafac, Tucker, and tensor train (TT) decompositions with a number of mathematical and graphical representations. We also provide a brief review of mathematical properties of the TT decomposition as a low-rank approximation technique. With the aim of breaking the curse-of-dimensionality in large-scale numerical analysis, we describe basic operations on large-scale vectors, matrices, and high-order tensors represented by TT decomposition. The proposed representations can be used for describing numerical methods based on TT decomposition for solving large-scale optimization problems such as systems of linear equations and symmetric eigenvalue problems. <s> BIB002
If large-size tensors are given in the form of TT decomposition, then many calculations can be performed directly on the small-size core tensors. By operating on the small-size core tensors, the number of unknown parameters can be reduced effectively and the operations can be simplified, which achieves the effect of optimizing the algorithm. Consider two Nth-order tensors given in TT form, with core tensors X_n ∈ R^{R_{n−1}×I_n×R_n} and Y_n ∈ R^{Q_{n−1}×I_n×Q_n} and TT ranks r_TT(X) = (R_1, ···, R_{N−1}) and r_TT(Y) = (Q_1, ···, Q_{N−1}), respectively. Note that the two tensors have the same size and order. Their operations have the following properties. 1. The Hadamard product of two tensors: we can use tensor slices to represent the cores of Z = X ⊛ Y, namely Z_n(:, i_n, :) = X_n(:, i_n, :) ⊗ Y_n(:, i_n, :), where Z_n ∈ R^{R_{n−1}Q_{n−1}×I_n×R_nQ_n} is the new core tensor and Z_n(:, i_n, :) is the tensor slice (obtained by fixing the second index i_n). 2. The sum of two tensors: its TT rank is r_TT(X + Y) = r_TT(X) + r_TT(Y); similar to the previous case, we can still use tensor slices to represent the cores, with the intermediate cores being block-diagonal, while the tensor slices of the first and last core tensors are the horizontal concatenation [X_1(:, i_1, :) Y_1(:, i_1, :)] and the vertical concatenation [X_N(:, i_N, :); Y_N(:, i_N, :)], respectively. 3. The quantitative (inner) product of two tensors: we calculate the final result by iteratively contracting pairs of cores; for the specific process see Algorithm 5 (The Quantitative Product of Two Tensors Expressed in the Form of TT Decomposition. Input: the two Nth-order tensors; Output: the quantitative product of the two tensors). 4. The multiplication of a large-size matrix and a vector, where both the matrix and the vector are decomposed in TT form. We give an intuitive picture to show it (see figure 30). As we can see from figure 30, A_n ∈ R^{A_{n−1}×I_n×J_n×A_n}, X_n ∈ R^{R_{n−1}×J_n×R_n}, Y_n ∈ R^{Q_{n−1}×I_n×Q_n}. Starting from the outer-product form of the TT decomposition, the multiplication of a matrix and a vector is equivalent to Y_n^{a_{n−1}r_{n−1}, a_n r_n} = A_n^{a_{n−1},a_n} x_n^{r_{n−1},r_n}, with Q_n = A_n R_n, n = 1, 2, ···, N. Similarly, we can use the tensor network of TT decomposition to represent some loss functions (see figure 31). Similar to the multiplication of matrices and vectors, TT decomposition can also be used to simplify the multiplication between large-scale matrices, and here we omit its solution. Since the outer-product calculation is relatively simple, we used the outer-product expression of the TT decomposition to simplify the matrix-vector multiplication; of course, the TT decomposition expressed in the form of Kronecker products or tensor contractions can be used as well. For more calculations on TT decomposition, please refer to (Kazeev et al. BIB001; Lee and Cichocki BIB002).
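As a concrete instance of property 1, the following sketch forms the Hadamard product directly in TT format: each slice of a new core is the Kronecker product of the corresponding slices, so the TT ranks multiply to R_n Q_n. The helper names are ours, and tt_full is only a naive reconstruction used to check the identity.

```python
import numpy as np

def tt_hadamard(cores_x, cores_y):
    """Hadamard product in TT format: Z_n(:, i, :) = X_n(:, i, :) kron Y_n(:, i, :),
    so the TT ranks of the result are the products R_n * Q_n."""
    cores_z = []
    for Gx, Gy in zip(cores_x, cores_y):
        Rm, I, Rp = Gx.shape
        Qm, _, Qp = Gy.shape
        Gz = np.empty((Rm * Qm, I, Rp * Qp))
        for i in range(I):
            Gz[:, i, :] = np.kron(Gx[:, i, :], Gy[:, i, :])
        cores_z.append(Gz)
    return cores_z

def tt_full(cores):
    """Naive reconstruction: contract the cores left to right."""
    full = cores[0]
    for G in cores[1:]:
        full = np.tensordot(full, G, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

cx = [np.random.rand(1, 3, 2), np.random.rand(2, 4, 2), np.random.rand(2, 5, 1)]
cy = [np.random.rand(1, 3, 3), np.random.rand(3, 4, 2), np.random.rand(2, 5, 1)]
cz = tt_hadamard(cx, cy)
print(np.allclose(tt_full(cz), tt_full(cx) * tt_full(cy)))   # True
```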
A Survey on Tensor Techniques and Applications in Machine Learning <s> FIGURE 33 <s> We present a classical protocol to efficiently simulate any pure-state quantum computation that involves only a restricted amount of entanglement. More generally, we show how to classically simulate pure-state quantum computations on n qubits by using computational resources that grow linearly in n and exponentially in the amount of entanglement in the quantum computer. Our results imply that a necessary condition for an exponential computational speedup (with respect to classical computations) is that the amount of entanglement increases with the size n of the computation, and provide an explicit lower bound on the required growth. <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> FIGURE 33 <s> We present a framework using the Quantized Tensor Train (QTT) decomposition to accurately and efficiently solve volume and boundary integral equations in three dimensions. We describe how the QTT decomposition can be used as a hierarchical compression and inversion scheme for matrices arising from the discretization of integral equations. For a broad range of problems, computational and storage costs of the inversion scheme are extremely modest $O(\log N)$ and once the inverse is computed, it can be applied in $O(N \log N)$. We analyze the QTT ranks for hierarchically low rank matrices and discuss its relationship to commonly used hierarchical compression techniques such as FMM and HSS. We prove that the QTT ranks are bounded for translation-invariant systems and argue that this behavior extends to non-translation invariant volume and boundary integrals. For volume integrals, the QTT decomposition provides an efficient direct solver requiring significantly less memory compared to other fast direct solvers. We present results demonstrating the remarkable performance of the QTT-based solver when applied to both translation and non-translation invariant volume integrals in 3D. For boundary integral equations, we demonstrate that using a QTT decomposition to construct preconditioners for a Krylov subspace method leads to an efficient and robust solver with a small memory footprint. We test the QTT preconditioners in the iterative solution of an exterior elliptic boundary value problem (Laplace) formulated as a boundary integral equation in complex, multiply connected geometries. <s> BIB002
Algorithm based on low-rank matrix decomposition for a 4th-order tensor X ∈ R^{I_1×I_2×I_3×I_4}. First, we perform the mode-n matricization of the tensor X (here the mode-1 matricization for convenience). Then we apply CR/MCA/LR or another low-rank matrix decomposition method, and proceed step by step according to Algorithm 7.

Note that in the above two methods, we constructed the mode-n matricization of a tensor and then performed matrix-decomposition operations. The third method, Restricted Tucker-1 decomposition (RT1D), converts the original input tensor into a 3rd-order tensor and then performs Tucker-1 and Tucker-2 decompositions (see figure 34).

Algorithm 6: SVD-Based TT Algorithm (TT-SVD) BIB001. Input: the Nth-order tensor X ∈ R^{I_1×I_2×···×I_N} and accuracy ε. Output: an approximate tensor in TT format X̃ such that ‖X − X̃‖_F ≤ ε. 4: A_n = U_n(:, 1:R_n); 5: reshape A_n in the manner described in figure 32, A_n = A_n.reshape([R_{n−1}, I_n, R_n]); 6: Z_{n+1} = S_n V_n^T.reshape([R_n I_{n+1}, I_{n+2} I_{n+3} ··· I_N]); 7: end for; 8: compute the last core A_N = Z_N.reshape([R_{N−1}, I_N, 1]); 9: return the cores A_1, A_2, ···, A_N.

Algorithm 7: Algorithm Based on Low-Rank Matrix Decomposition (LRMD), taking the CR decomposition as an example BIB002. Input: the Nth-order tensor X ∈ R^{I_1×I_2×···×I_N} and accuracy ε. Output: an approximate tensor in TT format X̃ such that ‖X − X̃‖_F ≤ ε. 1: initialize R_0 = 1, Z_1 = X_{m1}; 2: for n = 1 to N−1 do; 3: [C_n, R_n] = CR-decomposition(Z_n, ε);
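A compact NumPy sketch of Algorithm 6 (TT-SVD): sequentially reshape the remainder, apply an SVD truncated so that the discarded tail stays below δ = ε‖X‖_F/√(N−1) (the usual per-step threshold in Oseledets' scheme, which we assume here), and fold the left factor into a core.

```python
import numpy as np

def tt_svd(X, eps=1e-10):
    """TT-SVD sketch: sweep over the modes, at each step reshaping the
    remainder and splitting off one core via a truncated SVD."""
    shape = X.shape
    N = len(shape)
    delta = eps * np.linalg.norm(X) / np.sqrt(N - 1)   # per-step threshold
    cores, r_prev = [], 1
    Z = X.reshape(r_prev * shape[0], -1)
    for n in range(N - 1):
        U, S, Vt = np.linalg.svd(Z, full_matrices=False)
        # keep the smallest rank whose discarded singular-value tail <= delta
        tail = np.sqrt(np.cumsum(S[::-1] ** 2))[::-1]
        r = max(1, int(np.sum(tail > delta)))
        cores.append(U[:, :r].reshape(r_prev, shape[n], r))
        Z = (S[:r, None] * Vt[:r]).reshape(r * shape[n + 1], -1)
        r_prev = r
    cores.append(Z.reshape(r_prev, shape[-1], 1))      # last core
    return cores

X = np.random.rand(4, 5, 6, 7)
cores = tt_svd(X)
print([c.shape for c in cores])
```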
A Survey on Tensor Techniques and Applications in Machine Learning <s> 3) TT TRUNCATION <s> A simple nonrecursive form of the tensor decomposition in $d$ dimensions is presented. It does not inherently suffer from the curse of dimensionality, it has asymptotically the same number of parameters as the canonical decomposition, but it is stable and its computation is based on low-rank approximation of auxiliary unfolding matrices. The new form gives a clear and convenient way to implement all basic operations efficiently. A fast rounding procedure is presented, as well as basic linear algebra operations. Examples showing the benefits of the decomposition are given, and the efficiency is demonstrated by the computation of the smallest eigenvalue of a 19-dimensional operator. <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 3) TT TRUNCATION <s> The hierarchical Tucker format is a storage-efficient scheme to approximate and represent tensors of possibly high order. This paper presents a Matlab toolbox, along with the underlying methodology and algorithms, which provides a convenient way to work with this format. The toolbox not only allows for the efficient storage and manipulation of tensors but also offers a set of tools for the development of higher-level algorithms. Several examples for the use of the toolbox are given. <s> BIB002 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 3) TT TRUNCATION <s> Decompositions of tensors into factor matrices, which interact through a core tensor, have found numerous applications in signal processing and machine learning. A more general tensor model which represents data as an ordered network of sub-tensors of order-2 or order-3 has, so far, not been widely considered in these fields, although this so-called tensor network decomposition has been long studied in quantum physics and scientific computing. In this study, we present novel algorithms and applications of tensor network decompositions, with a particular focus on the tensor train decomposition and its variants. The novel algorithms developed for the tensor train decomposition update, in an alternating way, one or several core tensors at each iteration, and exhibit enhanced mathematical tractability and scalability to exceedingly large-scale data tensors. The proposed algorithms are tested in classic paradigms of blind source separation from a single mixture, denoising, and feature extraction, and achieve superior performance over the widely used truncated algorithms for tensor train decomposition. <s> BIB003
In the previous section, we discussed the problem of increased complexity due to increased TT rank. Therefore, if we still use the TT decomposition, we need to adopt approximate decomposition algorithms that reduce the TT rank. (Oseledets) BIB001 proposed an algorithm called TT truncation. The algorithm takes as input a tensor with a large TT rank; its goal is to find an approximate solution whose TT rank is much smaller than that of the original input tensor. For TT truncation, please refer to Algorithm 9 and figure 35. FIGURE 34. BIB003 For a 4th-order tensor X ∈ R^{I_1×I_2×I_3×I_4} and a 5th-order tensor X ∈ R^{I_1×I_2×I_3×I_4×I_5}: similar to TT-SVD and LRMD, we first convert the original tensor into a new 3rd-order tensor, next perform the Tucker-1 decomposition, and then follow Algorithm 8 step by step. On the left is a schematic diagram of the 4th-order tensor and on the right is a schematic diagram of the 5th-order tensor. Note that Algorithm 9 actually performs the mode-n canonical matricization of the core tensors and then performs low-rank matrix approximations (SVD and QR). During the computation of the low-rank matrix decompositions, the matrices become smaller and smaller because of the continuous iterative optimization, so the complexity keeps decreasing as the decomposition proceeds. By TT truncation, the TT rank can be reduced to the utmost extent and the corresponding approximate tensor can be found, which greatly reduces the computational complexity and improves the efficiency of subsequent data processing and mathematical operations. Some researchers have developed a similar method for the HT decomposition; for details, please refer to (Kressner and Tobler) BIB002.
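A sketch of the TT truncation idea along the lines of Oseledets' rounding procedure: first orthogonalize the cores right-to-left with QR factorizations, then sweep left-to-right and compress each core with a truncated SVD. The demo, which inflates the TT ranks of a rank-1 tensor with zero padding and rounds them back down, is our own construction.

```python
import numpy as np

def tt_round(cores, eps=1e-2):
    """TT truncation (rounding) sketch: right-to-left orthogonalization via QR,
    then a left-to-right sweep of truncated SVDs."""
    cores = [c.copy() for c in cores]
    N = len(cores)
    for n in range(N - 1, 0, -1):                       # orthogonalization sweep
        R, I, Rn = cores[n].shape
        Q, Rfac = np.linalg.qr(cores[n].reshape(R, I * Rn).T)
        cores[n] = Q.T.reshape(-1, I, Rn)               # right-orthogonal core
        cores[n - 1] = np.tensordot(cores[n - 1], Rfac.T, axes=([-1], [0]))
    delta = eps * np.linalg.norm(cores[0]) / np.sqrt(N - 1)
    for n in range(N - 1):                              # compression sweep
        R, I, Rn = cores[n].shape
        U, S, Vt = np.linalg.svd(cores[n].reshape(R * I, Rn), full_matrices=False)
        tail = np.sqrt(np.cumsum(S[::-1] ** 2))[::-1]
        r = max(1, int(np.sum(tail > delta)))
        cores[n] = U[:, :r].reshape(R, I, r)
        cores[n + 1] = np.tensordot(S[:r, None] * Vt[:r], cores[n + 1],
                                    axes=([1], [0]))
    return cores

# Demo: a rank-1 tensor stored with artificially inflated TT ranks
a, b, c = np.random.rand(4), np.random.rand(5), np.random.rand(6)
cores = [np.zeros((1, 4, 3)), np.zeros((3, 5, 3)), np.zeros((3, 6, 1))]
cores[0][0, :, 0] = a; cores[1][0, :, 0] = b; cores[2][0, :, 0] = c
rounded = tt_round(cores, eps=1e-12)
print([g.shape for g in rounded])   # ranks shrink back to (1, ..., 1)
```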
A Survey on Tensor Techniques and Applications in Machine Learning <s> E. BRIEF SUMMARY FOR PART ONE <s> Tensor networks have in recent years emerged as the powerful tools for solving the large-scale optimization problems. One of the most popular tensor network is tensor train (TT) decomposition that acts as the building blocks for the complicated tensor networks. However, the TT decomposition highly depends on permutations of tensor dimensions, due to its strictly sequential multilinear products over latent cores, which leads to difficulties in finding the optimal TT representation. In this paper, we introduce a fundamental tensor decomposition model to represent a large dimensional tensor by a circular multilinear products over a sequence of low dimensional cores, which can be graphically interpreted as a cyclic interconnection of 3rd-order tensors, and thus termed as tensor ring (TR) decomposition. The key advantage of TR model is the circular dimensional permutation invariance which is gained by employing the trace operation and treating the latent cores equivalently. TR model can be viewed as a linear combination of TT decompositions, thus obtaining the powerful and generalized representation abilities. For optimization of latent cores, we present four different algorithms based on the sequential SVDs, ALS scheme, and block-wise ALS techniques. Furthermore, the mathematical properties of TR model are investigated, which shows that the basic multilinear algebra can be performed efficiently by using TR representaions and the classical tensor decompositions can be conveniently transformed into the TR representation. Finally, the experiments on both synthetic signals and real-world datasets were conducted to evaluate the performance of different algorithms. <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> E. BRIEF SUMMARY FOR PART ONE <s> In this paper we focus on the problem of completion of multidimensional arrays (also referred to as tensors) from limited sampling. Our approach is based on a recently proposed tensor-Singular Value Decomposition (t-SVD) [1]. Using this factorization one can derive notion of tensor rank, referred to as the tensor tubal rank, which has optimality properties similar to that of matrix rank derived from SVD. As shown in [2] some multidimensional data, such as panning video sequences exhibit low tensor tubal rank and we look at the problem of completing such data under random sampling of the data cube. We show that by solving a convex optimization problem, which minimizes the tensor nuclear norm obtained as the convex relaxation of tensor tubal rank, one can guarantee recovery with overwhelming probability as long as samples in proportion to the degrees of freedom in t-SVD are observed. In this sense our results are order-wise optimal. The conditions under which this result holds are very similar to the incoherency conditions for the matrix completion, albeit we define incoherency under the algebraic set-up of t-SVD. We show the performance of the algorithm on some real data sets and compare it with other existing approaches based on tensor flattening and Tucker decomposition. <s> BIB002
Part one mainly introduced the basic knowledge about tensors, including the definition of a tensor, tensor operations, and the concept of tensor decomposition. Tensor decomposition can reduce the computational complexity and memory by decomposing a tensor into lower-order tensors, matrices, and vectors. At the same time, it can preserve the data structure, effectively reduce the dimension, avoid the curse-of-dimensionality problem, and extract the important parts we need from the correlations in the data. A further characteristic of tensor decomposition is that the increase of dimension leads to the non-uniqueness of the decomposition, so we usually seek an approximate solution instead of an exact one; this avoids wasting too much computation time while still giving a good approximation of the original data. Due to the limited space of this survey, some newer tensor decompositions are not covered in detail, such as the t-SVD (Zhang and Aeron) BIB002 and the tensor ring decomposition (Zhao et al.) BIB001. The decompositions introduced above are the most important ones for this survey and have important applications in part two. At the same time, some of these decomposition algorithms have their own advantages or limitations. For CP decomposition, owing to its particularity, if a certain constraint is imposed on the factor matrices or the core tensor, an accurate solution can be obtained; the constraint is mainly determined by the application environment. The advantage is that CP can extract the structured information of the data, which helps to better extract and process the required data and improves the accuracy of subsequent applications. For the Tucker decomposition, since the decomposition is very general, there are usually many solutions, so a constraint term is usually imposed, such as the orthogonality constraint we mentioned above; the Tucker decomposition then becomes the HOSVD. For the HT decomposition, the utility is relatively poor due to the need to determine a binary tree, so most of the time the TT decomposition is used instead. The biggest advantage of the TT decomposition is that only core tensors are used, and thus we just need to compute between core tensors. However, as we mentioned earlier, one of its biggest drawbacks is that if the tensor admits no low TT rank (i.e., no low-rank TT solution exists), the computational complexity will be high.
F. VARIOUS TYPES OF DECOMPOSITION APPLICATIONS
We can find that almost all tensor-based algorithms are inseparable from tensor decomposition because of the huge number of unknown parameters; tensor decomposition is therefore essential in high-dimensional problems.

Algorithm 9 TT Truncation (I. V. Oseledets, 2011) BIB001
Input: TT cores G_n ∈ R^{R_{n−1}×I_n×R_n}, n = 1, · · · , N, of a tensor X with large TT ranks, and a prescribed accuracy a;
Output: TT cores with reduced ranks R_n;
1: for n = 1 to N − 1 do
2: Y_n = (G_n)_{mc1};
3: [Q_n, T_n] = qr(Y_n); {orthogonalization sweep}
4: G_n = Q_n.reshape([R_{n−1}, I_n, R_n]) and absorb T_n into G_{n+1};
5: end for
6: for n = N down to 2 do
7: [U_n, Σ_n, V_n^T] = truncated-svd(Y^n_{mc1}, a);
8: find the smallest rank R_{n−1} such that the discarded singular values keep the error within the accuracy a;
9: keep only the leading R_{n−1} singular values and vectors;
10: G_n = V_n^T.reshape([R_{n−1}, I_n, R_n]) and absorb U_nΣ_n into G_{n−1};
11: end for
12: return G_1, · · · , G_N;

Next we will introduce some basic tensor decomposition applications. As described in part two of this survey, rank-one decomposition can be applied in tensor regression to support the tensor and solve the optimization problem with constraint terms. However, since not all tensors admit a rank-one decomposition, its application has certain limitations; some results can be seen in recent papers, such as the rank-1 feedforward neural network BIB007. For CP decomposition, the best approximate solution can usually be found even if there are no special constraints on the original tensor or the factors (such as orthogonality, independence, sparsity, etc.). Therefore, CP decomposition is used in many tensor-based algorithms; some results can be seen in recent papers, such as Tresp BIB008 and Kargas and Sidiropoulos's "Completing a joint PMF from projections: a low-rank coupled tensor factorization approach" BIB004. In practice, we tend to impose constraints on the original input tensor or the resulting core tensor, so an application of the Tucker decomposition usually translates into an application of the HOSVD. In fact, the HOSVD is a multidimensional extension of PCA; for an application of Tucker-type contractions inside neural networks, see the tensor contraction layer BIB005. Because of their intuitive tree or chain representations, the HT and TT decompositions are used in many places. However, because the tree structure is not necessarily unique, the HT decomposition admits a variety of tree structures, so researchers often restrict the HT decomposition to the fixed chain structure of the TT decomposition. For the traditional HT decomposition, please refer to Bachmayr et al.'s "Tensor networks and hierarchical tensors for the solution of high-dimensional partial differential equations" BIB003, Zhang and Barzilay's "Hierarchical low-rank tensors for multilingual transfer parsing" BIB002, and Kountchev and Kountcheva's "Truncated Hierarchical SVD for image sequences, represented as third-order tensors" BIB006. In recent years, there have been many studies on the TT decomposition, especially regarding its properties and algorithms; see, for example, Kressner and Uschmajew's "On low-rank approximability of solutions to high-dimensional operator equations and eigenvalue problems".
A. APPLICATION OF TENSOR IN REGRESSION
1) TENSOR REGRESSION
Consider a traditional linear regression model (see figure 36):

y = w^T x + b,

where x ∈ R^N is the sample feature vector, w ∈ R^N is the coefficient vector, and b is the bias. Regression models are often used for prediction, such as stock-market forecasts, weather forecasts, etc. When we expand the input x into a tensor, the model becomes tensor regression. First, consider a simple case where the input is a tensor X ∈ R^{I_1×I_2×···×I_N} and the predicted value y is a scalar. Tensor regression then usually has the following expression:

y = W • X + b,

where W ∈ R^{I_1×I_2×···×I_N} is the coefficient tensor and b is the bias. Some researchers also add a vector-valued covariate c, giving y = W • X + b + a^T c. In general, tensor regression is solved by decomposing the coefficient tensor and then solving for the factors by the alternating least squares (ALS) method, using, e.g., rank-1 decomposition, CP decomposition, Tucker decomposition, or TT decomposition. For example, (Zhou et al.) BIB002 proposed the rank-1 and CP forms, for which the model becomes

y = (Σ_{r=1}^R w_1^r ◦ w_2^r ◦ · · · ◦ w_N^r) • X + b + a^T c. BIB001

Tensor regression of the Tucker decomposition form is similar; for details, please refer to (Hoff et al. BIB003; Yu et al. BIB004). General tensor regression comes down to solving the following minimization problem:

min over W, a, b of Σ_{i=1}^N L(ŷ_i, y_i),

where ŷ_i = W • X_i + b + a^T c is the predicted value for the ith tensor sample, X_i is the ith tensor sample, and y_i is its true value. We give the following general algorithm for tensor regression (see algorithm 10).
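Before stating the general algorithm, here is a minimal numpy sketch of the simplest special case: rank-one regression with matrix inputs, y ≈ u^T X v + b, fitted by ALS. The function and variable names are our own illustrative choices, and the covariate term a^T c is omitted for brevity.

```python
import numpy as np

def rank1_matrix_regression(Xs, y, n_iter=50, seed=0):
    """Fit y_i ~ u^T X_i v + b by ALS. Xs: (N, I1, I2), y: (N,)."""
    rng = np.random.default_rng(seed)
    N, I1, I2 = Xs.shape
    u, v, b = rng.standard_normal(I1), rng.standard_normal(I2), 0.0
    for _ in range(n_iter):
        # Fix v: y_i ~ u^T (X_i v) + b is linear in (u, b).
        Zu = np.concatenate([Xs @ v, np.ones((N, 1))], axis=1)
        sol, *_ = np.linalg.lstsq(Zu, y, rcond=None)
        u, b = sol[:-1], sol[-1]
        # Fix u: y_i ~ (u^T X_i) v + b is linear in (v, b).
        Zv = np.concatenate([np.einsum('i,nij->nj', u, Xs), np.ones((N, 1))], axis=1)
        sol, *_ = np.linalg.lstsq(Zv, y, rcond=None)
        v, b = sol[:-1], sol[-1]
    return u, v, b

# Toy check: data generated from a true rank-1 model is recovered well.
rng = np.random.default_rng(1)
u0, v0 = rng.standard_normal(6), rng.standard_normal(5)
Xs = rng.standard_normal((200, 6, 5))
y = np.einsum('i,nij,j->n', u0, Xs, v0) + 0.3
u, v, b = rank1_matrix_regression(Xs, y)
pred = np.einsum('i,nij,j->n', u, Xs, v) + b
print(np.max(np.abs(pred - y)))  # small residual
```

Because each subproblem is an ordinary linear least squares, the alternating loop typically converges in a few dozen iterations on well-conditioned data.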
Algorithm 10 Tensor Regression Algorithm (Zhou et al.) BIB001
Input: N Nth-order sample data tensors X_i ∈ R^{I_1×I_2×···×I_N}, i = 1, · · · , N, their true values y_i, and a vector-valued covariate c;
Output: a, b, W;
1: initialize a and b;
2: initialize the factor matrices W_n, n = 1, · · · , N, and the core tensor Λ for CP decomposition, or initialize the factor vectors for rank-1 decomposition (other decompositions are similar);
3: while the number of iterations is not reached or there is no convergence do
4: for n = 1 to N do
5: solve W_n = argmin over W_n of L(a, b, Λ, W_1, · · · , W_{n−1}, W_{n+1}, · · · , W_N);
6: end for
7: update a and b by minimizing L with all W_n fixed; BIB002
8: evaluate the objective and check for convergence;
9: end while

2) TENSOR VARIABLE GAUSSIAN PROCESS REGRESSION

Tensor regression can also be treated in a Bayesian way, in which case it finally turns into solving for the predictive distribution of a Gaussian process whose inputs are tensors. Here we omit the complicated calculations and give the results directly. Note that the test samples are also subject to the Gaussian distribution, and by the Bayesian conditioning rule the predictive distribution of a test sample X_test is N(µ_test, σ²_test), where

µ_test = k(X_test, X)^T (K + σ_n² I)^{−1} y,
σ²_test = k(X_test, X_test) − k(X_test, X)^T (K + σ_n² I)^{−1} k(X_test, X),

with K the kernel (Gram) matrix over the training tensors, k(X_test, X) the vector of kernel values between the test sample and the training samples, and σ_n² the noise variance. Tensor variable Gaussian process regression is generally used to deal with noisy, Gaussian-distributed data. It has certain limitations, and the method is computationally expensive: without tensor decomposition, the number of parameters is very large, and thus the amount of computation will also increase exponentially.
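A minimal sketch of the predictive formulas above with tensor-valued inputs, using an RBF kernel on vectorized tensors; the kernel choice, its bandwidth gamma, and the noise level sigma_n are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # A: (M1, ...), B: (M2, ...) batches of tensors; flatten each sample.
    A2, B2 = A.reshape(A.shape[0], -1), B.reshape(B.shape[0], -1)
    d2 = ((A2[:, None, :] - B2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4, 3, 2))   # 50 third-order tensor samples
y = np.sin(X.sum(axis=(1, 2, 3)))        # toy targets
X_test = rng.standard_normal((5, 4, 3, 2))

sigma_n = 0.1
K = rbf_kernel(X, X) + sigma_n**2 * np.eye(len(X))
k_star = rbf_kernel(X_test, X)           # k(X_test, X)
mu_test = k_star @ np.linalg.solve(K, y)                       # predictive mean
var_test = rbf_kernel(X_test, X_test).diagonal() - np.einsum(
    'ij,ji->i', k_star, np.linalg.solve(K, k_star.T))          # predictive variance
print(mu_test, var_test)
```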
3) GENERALIZED TENSOR REGRESSION
Now we introduce a more general case where both the input and the output are tensors. We start with the simple second-order (matrix) case. A second-order matrix regression takes the form

Y_i = A X_i B^T + E, i = 1, · · · , N,

where X_i ∈ R^{I_1×I_2} and Y_i ∈ R^{J_1×J_2} are the N input sample matrices and the corresponding output sample matrices, A ∈ R^{J_1×I_1} and B ∈ R^{J_2×I_2} are unknown coefficient matrices, and E ∈ R^{J_1×J_2} is a noise matrix with zero mean. (Hoff) BIB001 used the residual mean squared error to measure the error between the true values and the predicted values:

L(A, B) = (1/N) Σ_{i=1}^N ||Y_i − A X_i B^T||_F².

By setting the derivative of the above formula with respect to A to zero, we finally get

A = (Σ_{i=1}^N Y_i B X_i^T)(Σ_{i=1}^N X_i B^T B X_i^T)^{−1}, (88)

and similarly we can obtain A and B alternately by alternating least squares. We further extend this to generalized tensor regression:

Y_i = X_i ×_{1m} W_1 ×_{2m} W_2 · · · ×_{Nm} W_N + E_i, (89)

where the W_n ∈ R^{J_n×I_n} are coefficient matrices (factor matrices). Note that there is a property relating the mode-n product and the Kronecker product:

Y = X ×_{1m} W_1 · · · ×_{Nm} W_N ⟺ Y_{mn} = W_n X_{mn} (W_N ⊗_R · · · ⊗_R W_{n+1} ⊗_R W_{n−1} · · · ⊗_R W_1)^T.

Therefore, we only need to apply the mode-n matricization to both sides of formula 89 to obtain

(Y_i)_{mn} = W_n (X̃_i)_{mn} + (E_i)_{mn}, BIB002

where (X̃_i)_{mn} = (X_i)_{mn}(W_N ⊗_R · · · ⊗_R W_{n+1} ⊗_R W_{n−1} · · · ⊗_R W_1)^T. Then, by the same derivation as formula 88, we finally get

W_n = (Σ_{i=1}^N (Y_i)_{mn} (X̃_i)_{mn}^T)(Σ_{i=1}^N (X̃_i)_{mn} (X̃_i)_{mn}^T)^{−1}.

Finally, we give the specific algorithm for the whole generalized tensor regression (see algorithm 11).
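The mode-n identity above is easy to verify numerically. The sketch below checks it for a third-order tensor, using Fortran-order unfoldings so that the column ordering matches the Kronecker ordering in the formula; the helper functions are our own.

```python
import numpy as np

def mode_n_unfold(T, n):
    # Mode-n matricization with the first remaining index varying fastest,
    # matching the ordering W_N (x) ... (x) W_{n+1} (x) W_{n-1} ... (x) W_1.
    return np.reshape(np.moveaxis(T, n, 0), (T.shape[n], -1), order='F')

def mode_n_product(T, W, n):
    # Mode-n product T x_n W: contract mode n of T with the columns of W.
    return np.moveaxis(np.tensordot(W, T, axes=(1, n)), 0, n)

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4, 5))
W1, W2, W3 = (rng.standard_normal((d + 1, d)) for d in (3, 4, 5))

Y = mode_n_product(mode_n_product(mode_n_product(X, W1, 0), W2, 1), W3, 2)
lhs = mode_n_unfold(Y, 1)                           # Y_(2)
rhs = W2 @ mode_n_unfold(X, 1) @ np.kron(W3, W1).T  # W2 X_(2) (W3 (x) W1)^T
print(np.allclose(lhs, rhs))                        # True
```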
B. APPLICATION OF TENSOR IN CLASSIFICATION
1) SUPPORT TENSOR MACHINE (STM) APPLICATION IN IMAGE CLASSIFICATION
a: THE SUPPORT VECTOR MACHINE (SVM)
First, we briefly review the concept of the support vector machine (SVM). The SVM was first proposed by (Cortes and Vapnik) BIB001 to find a hyperplane that distinguishes between two different categories, i.e., what we usually call a binary classifier (see figure 37). As can be seen from figure 37, the purpose of the SVM is to find a hyperplane w^T x + b = 0, x = [x_1, x_2, · · · , x_m], distinguishing the two classes, to which we give the labels +1 and −1, respectively. The distance from a point x to the hyperplane in the sample space is

r = |w^T x + b| / ||w||.

As shown in figure 37, the points closest to the hyperplane are called support vectors, and the sum of the distances of two heterogeneous support vectors to the hyperplane is

γ = 2 / ||w||.

(Figure 37: a simple schematic of a linear SVM; as shown, the input is a first-order tensor (vector) of size 2.)

This quantity is also called the margin. In order to find the hyperplane with the largest margin, the problem is converted into solving the following optimization problem:

min over w, b of (1/2)||w||², s.t. y_j(w^T x_j + b) ≥ 1, j = 1, · · · , M,

where ||w|| is the 2-norm of the vector w. In fact, the training samples are linearly inseparable in many cases, which leads to the soft margin. The general constrained formulation of the SVM is

min over w, b of (1/2)||w||² + C Σ_{j=1}^M ξ_j, (108)

where the ξ_j = l(y_j(w^T x_j + b) − 1) are called slack variables and l is a loss function. There are three commonly used loss functions:

hinge loss: l_hinge(x) = max(0, 1 − x);
exponential loss: l_exp(x) = exp(−x);
logistic loss: l_log(x) = log(1 + exp(−x)). BIB003

Later researchers (Zhao et al.) BIB002 converted the above constraints into the following formula:

min over w, b, ξ of (1/2)||w||² + (C/2)||ξ||², s.t. y_j(w^T x_j + b) = 1 − ξ_j, j = 1, · · · , M, (110)

where ξ = [ξ_1, ξ_2, · · · , ξ_M] ∈ R^M. Note that formula 110 has two major differences compared to formula 108. First, in order to facilitate the calculation, the constraint is changed from an inequality to an equality. Second, the loss function in formula 110 is the mean-square loss. The benefit of this modification is that the solution becomes easier. Generally, the problem is solved by the Lagrangian multiplier method; we do not repeat the derivation here, and for details please refer to (Cortes and Vapnik) BIB001.
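For reference, here is a minimal numpy sketch of the soft-margin objective of formula 108 with the hinge loss, trained by simple subgradient descent; the hyperparameters are illustrative.

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=1e-3, epochs=500):
    """Subgradient descent on 1/2||w||^2 + C * sum_j max(0, 1 - y_j(w^T x_j + b))."""
    N, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                 # samples with nonzero hinge loss
        grad_w = w - C * (y[active, None] * X[active]).sum(axis=0)
        grad_b = -C * y[active].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy linearly separable data with labels in {+1, -1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])
w, b = train_linear_svm(X, y)
print((np.sign(X @ w + b) == y).mean())      # training accuracy
```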
b: THE SUPPORT MATRIX MACHINE (SMM)
If we extend the input samples from vectors to second-order tensors (matrices), we get the support matrix machine (SMM). (Luo et al.) BIB002 proposed the SMM, in which the mean-square loss above is replaced by the hinge loss and a spectral penalty is imposed on the coefficient matrix. We consider matrix samples X_a ∈ R^{I×J}, a = 1, 2, · · · , m, with labels y_a ∈ {+1, −1}, and obtain the following constrained formulation:

argmin over W, b of (1/2)tr(W^T W) + λ||W||_* + C Σ_{a=1}^m max(0, 1 − y_a(tr(W^T X_a) + b)), (135)

where ||W||_* (which we usually call the nuclear norm) is the sum of all singular values of the matrix W, and C and λ are coefficients. In fact, after performing the mode-1 vectorization w = vec(W^T), we get the following property:

tr(W^T X_a) = vec(W^T)^T vec(X_a^T) = w^T x_a. (136) BIB003

Substituting formula 136 into formula 135 returns, up to the nuclear-norm term, the constrained expression of the original SVM. Note that in order to protect the data structure from being destroyed, we generally do not perform the mode-n vectorization of the matrix and convert the problem into a traditional SVM, so the optimization problem is stated directly in matrix form. Following (Goldstein et al.) BIB001, the problem is further converted into an augmented Lagrangian form; introducing an auxiliary matrix S for the nuclear-norm term, the standard construction reads

L(W, b, S, Λ) = H(W, b) + λ||S||_* + tr(Λ^T(S − W)) + (ρ/2)||S − W||_F²,

where H(W, b) denotes the hinge-loss part, so that the problem can be solved by the alternating direction method of multipliers (ADMM). Due to the complexity of the SMM solution, please refer to (Luo et al.) BIB002 for details.
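The key ingredient of such ADMM solvers is the proximal operator of the nuclear norm, i.e., singular-value thresholding; a minimal sketch:

```python
import numpy as np

def svt(W, tau):
    """Singular-value thresholding: prox of tau*||.||_*, shrinks singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))
W_shrunk = svt(W, tau=0.5)
# Small singular values are zeroed, so the result has (at most) lower rank.
print(np.linalg.matrix_rank(W), np.linalg.matrix_rank(W_shrunk))
```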
c: THE SUPPORT TENSOR MACHINE (STM)
If we further extend the matrices to tensors, we get the support tensor machine (STM). In general, the STM currently has five constrained formulations. We first give the original one:

min over W, b, ξ of (1/2)||W||² + C Σ_{j=1}^M ξ_j, s.t. y_j(W • X_j + b) ≥ 1 − ξ_j, ξ_j ≥ 0, j = 1, · · · , M. (114)

On top of this, we usually choose to decompose the coefficient tensor W, and researchers have given four such solutions in total. (Tao et al.) BIB001 proposed decomposing the coefficient tensor as a rank-one vector outer product, i.e., W = w_1 ◦ w_2 ◦ · · · ◦ w_N (see formula 28). (Kotsia et al.) BIB002 performed CP decomposition on the coefficient tensor, i.e., W = Σ_{r=1}^R λ_r w_1^r ◦ w_2^r ◦ · · · ◦ w_N^r (see formula 29). (Kotsia and Patras) performed Tucker decomposition on the coefficient tensor, i.e., W = A ×_{1m} W_1 ×_{2m} W_2 · · · ×_{Nm} W_N (see formula 108). (Wang et al.) BIB005 performed TT decomposition on the coefficient tensor, i.e., W = W_1 ×_{3,1} W_2 · · · ×_{3,1} W_N (see formula 55). Substituting these four decompositions results in the four remaining forms of the STM. In general, the STM is solved similarly to the CP decomposition: the central idea is the alternating least squares method, i.e., the other N − 1 optimization variables are fixed first, and only one variable is updated at a time. For example, if we use the rank-one decomposition for the coefficient tensor, then for the mth mode the constrained formulation becomes the following (see algorithm 12):

min over w_m, b, ξ of (1/2) α ||w_m||² + C Σ_{j=1}^M ξ_j, s.t. y_j(w_m^T x_j + b) ≥ 1 − ξ_j, ξ_j ≥ 0, (139)

where x_j denotes the vector obtained by contracting X_j with the fixed vectors w_i along every mode i ≠ m, and α = Π_{i=1,i≠m}^N ||w_i||². Then the label of a test sample X_test can be predicted as

y_test = sign(W • X_test + b).

However, the above alternating least squares iteration usually needs a lot of time and computational memory, and only obtains a locally optimal solution, so many researchers have proposed other algorithms.

Algorithm 12 Support Tensor Machine (Hao et al.) BIB003
Input: tensor sample sets X_j ∈ R^{I_1×I_2×···×I_N}, j = 1, 2, · · · , M, and labels y_j ∈ {+1, −1};
Output: w_i, i = 1, · · · , N, and b;
1: initialize w_1, w_2, · · · , w_N;
2: while the required number of iterations is not reached do
3: for m = 1 to N do
4: fix w_1, · · · , w_{m−1}, w_{m+1}, · · · , w_N and calculate w_m by solving the binary optimization problem of formula 139;
5: end for
6: end while
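A minimal sketch of the reduction behind formula 139: with all w_i (i ≠ m) fixed, each tensor sample collapses to a vector and the subproblem is an ordinary linear SVM in w_m (the SVM solver itself is omitted here); the helper names are our own.

```python
import numpy as np

def collapse_except(Xj, ws, m):
    """Contract tensor Xj with ws[i] along every mode i != m; returns a vector."""
    T = Xj
    for i in reversed(range(len(ws))):   # contract higher modes first so axes stay valid
        if i != m:
            T = np.tensordot(T, ws[i], axes=(i, 0))
    return T  # length I_m

rng = np.random.default_rng(0)
Xj = rng.standard_normal((3, 4, 5))
ws = [rng.standard_normal(d) for d in (3, 4, 5)]
m = 1
x_eff = collapse_except(Xj, ws, m)                              # sample vector for mode m
alpha = np.prod([w @ w for i, w in enumerate(ws) if i != m])    # product of squared norms
print(x_eff.shape, alpha)   # (4,) and the scaling in 0.5 * alpha * ||w_m||^2
```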
(Z. Hao et al.) BIB001 proposed transforming formula 114 into the following dual problem:

max over α of Σ_{j=1}^M α_j − (1/2) Σ_{i=1}^M Σ_{j=1}^M α_i α_j y_i y_j (X_i • X_j), s.t. Σ_{j=1}^M α_j y_j = 0, 0 ≤ α_j ≤ C, (117)

where the α_j are the Lagrange multipliers. Note that if the input tensors become vectors, formula 117 becomes the dual problem of the standard SVM. The STM has gradually entered the field of machine learning due to its ability to preserve data structures and improve performance, and STMs with different constraints have separate application scenarios. For example, the STM based on CP decomposition has been applied to pedestrian detection in thermal infrared images, in order to find and precisely locate pedestrians in a group of images (Biswas and Milanfar). The STM based on rank-one decomposition has been applied to target detection in high-resolution remote sensing images (Chen et al.) BIB002. The STM based on solving the original dual problem has been applied to fault diagnosis of the range extender of electric vehicles (Xu et al.) BIB003.
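Since the dual in formula 117 involves the samples only through the inner products X_i • X_j, one can precompute a Gram matrix over tensor samples and hand it to any standard SVM solver. A minimal sketch using scikit-learn's precomputed-kernel interface (the data here are random placeholders):

```python
import numpy as np
from sklearn.svm import SVC

def tensor_gram(A, B):
    # A: (M1, I1, ..., IN), B: (M2, I1, ..., IN); entry (i, j) = <A_i, B_j>.
    return A.reshape(A.shape[0], -1) @ B.reshape(B.shape[0], -1).T

rng = np.random.default_rng(0)
X_train = rng.standard_normal((8, 3, 4, 5))     # 8 third-order tensor samples
y_train = np.array([1, -1, 1, -1, 1, -1, 1, -1])
X_test = rng.standard_normal((2, 3, 4, 5))

clf = SVC(kernel='precomputed', C=1.0)
clf.fit(tensor_gram(X_train, X_train), y_train)
print(clf.predict(tensor_gram(X_test, X_train)))  # predicted labels
```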
2) HIGH-ORDER RESTRICTED BOLTZMANN MACHINES (HORBM) FOR CLASSIFICATION
We first review the concept of restricted Boltzmann machines. The RBM is a stochastic neural network that can be used for dimensionality reduction, classification, collaborative filtering, and other modeling tasks. An RBM contains two layers, a visible layer and a hidden layer (see figure 38). The hidden-layer variable y ∈ R^J is derived from the visible-layer variable x ∈ R^M by the following formula:

y = σ(W^T x + b),

where σ(x) = 1/(1 + e^{−x}) is the activation function and W ∈ R^{M×J} is the weight matrix. Then we carry out the back-propagation step, which recomputes the value of the visible layer from the hidden layer's value:

x_1 = σ(W y + a).

When the recomputed visible-layer value does not equal the original visible-layer value, the operation is repeated; this is the training process of the restricted Boltzmann machine. The KL divergence is usually used in an RBM to measure the distance between the distributions of these two values. The RBM is an energy-based probability distribution model with energy

E(x, y) = −a^T x − b^T y − x^T W y,

from which we derive the joint probability distribution of the hidden-layer variable y and the visible-layer variable x:

P(x, y) = (1/Z) e^{−E(x,y)}, where Z = Σ_{x,y} e^{−E(x,y)}.

In order to make the distributions of these two values as close as possible, we maximize the likelihood function of the input samples: assuming there are N input samples x_t ∈ R^M, t ∈ [1, N], in the visible layer, we maximize

L = Σ_{t=1}^N ln P(x_t).

Since the derivative of the above formula cannot be solved exactly in general, the deep-learning pioneer Hinton proposed the contrastive divergence (CD) algorithm (i.e., k steps of Gibbs sampling) to obtain an approximate solution. Here we give a very simple update rule used in practice; it usually takes only one sampling step to achieve very accurate results, so the update formulas are

W ← W + α(x y^T − x_1 y_1^T), a ← a + α(x − x_1), b ← b + α(y − y_1),

where α ∈ [0, 1] is the learning rate, x_1 is the updated value of the visible-layer variable x obtained by the first back-propagation from the hidden layer y, and y_1 is the first updated value of the hidden layer obtained by propagating x_1 forward again. If k (k > 1) steps are used, we only need to change x_1 in the above formulas to x_k (the value of the visible layer obtained by the kth back-propagation). For details, please refer to (Hinton).

If we increase the number of layers, the traditional RBM becomes higher-dimensional, which we call a high-order restricted Boltzmann machine (HORBM). For example, for three sets of variables a ∈ R^I, b ∈ R^J, c ∈ R^K, the energy function can be represented as (see figure 39)

E(a, b, c) = −Σ_{i,j,k} W_{ijk} a_i b_j c_k − d^T a − e^T b − f^T c,

where a ∈ R^I and b ∈ R^J are two input variables, which can be understood as two visible layers, c ∈ R^K is a hidden-layer variable, and d, e, f are the biases of the three variables. Note that the inputs of the visible and hidden layers of such an RBM are still vectors. If the inputs become tensors, we call it a tensor-variate restricted Boltzmann machine (TvRBM) (Nguyen et al.) BIB001. We assume that the visible-layer variable is X ∈ R^{I_1×I_2×···×I_N} and the hidden-layer variable is y ∈ R^J, so the weight tensor is W ∈ R^{I_1×I_2×···×I_N×J}. The energy function can then be expressed similarly as

E(X, y) = −A • X − b^T y − W • (X ◦ y),

where A ∈ R^{I_1×I_2×···×I_N} and b ∈ R^J are the biases of the visible and hidden layers, respectively, and X ◦ y denotes the outer product. Similarly, the hidden-layer variable y = [y_1, · · · , y_J]^T can be expressed as

y_j = σ(b_j + W_{:,···,:,j} • X), j = 1, · · · , J.

A major problem is that as the input tensor dimension increases, the number of weight-tensor elements multiplies. We usually use a low-rank tensor decomposition to solve this problem.
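Before turning to the decomposed tensor case, here is a minimal numpy sketch of one CD-1 update for the plain binary RBM, matching the update formulas above; layer sizes and the learning rate are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
M, J, alpha = 6, 4, 0.1              # visible size, hidden size, learning rate
W = 0.01 * rng.standard_normal((M, J))
a, b = np.zeros(M), np.zeros(J)      # visible and hidden biases

x = rng.integers(0, 2, M).astype(float)    # one binary training sample

# Positive phase: hidden probabilities from the data, then a binary sample.
y_prob = sigmoid(W.T @ x + b)
y = (rng.random(J) < y_prob).astype(float)

# Negative phase: reconstruct the visible layer once, then the hidden layer.
x1 = sigmoid(W @ y + a)              # "first back-propagation" value x_1
y1 = sigmoid(W.T @ x1 + b)           # its forward pass y_1

# CD-1 parameter updates (probabilities used for the statistics).
W += alpha * (np.outer(x, y_prob) - np.outer(x1, y1))
a += alpha * (x - x1)
b += alpha * (y_prob - y1)
```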
For example, if we perform a CP decomposition of the weight tensor, $\mathcal{W} = \Lambda \times_1 W_1 \times_2 W_2 \cdots \times_{N+1} W_{N+1}$, where $W_n \in \mathbb{R}^{I_n \times R}$, $n = 1, \cdots, N$, and $W_{N+1} \in \mathbb{R}^{J \times R}$ are factor matrices and $\Lambda$ is the diagonal core tensor, then the number of elements is reduced from the original $J\prod_{n=1}^{N} I_n$ to $R(J + \sum_{n=1}^{N} I_n + 1)$. More simply, if the weight tensor can be expressed as a rank-one outer product of vectors, $\mathcal{W} = w_1 \circ w_2 \circ \cdots \circ w_{N+1}$, where $w_n \in \mathbb{R}^{I_n}$, $n = 1, \cdots, N$, and $w_{N+1} \in \mathbb{R}^J$, then the number of elements is reduced from the original $J\prod_{n=1}^{N} I_n$ to $J + \sum_{n=1}^{N} I_n$. Finally, we introduce the latent conditional high-order Boltzmann machine (CHBM). (Huang et al.) BIB002 proposed the latent conditional high-order Boltzmann machine for classification. The algorithm is similar to the high-order Boltzmann machine over three sets of variables that we just mentioned. However, in a CHBM the input data are $N$ pairs of sample features $x_i \in \mathbb{R}^I$, $y_i \in \mathbb{R}^J$, $i = 1, \cdots, N$, and $z$ is the relationship label of $(x_i, y_i)$, where $z = [z_1, z_2]$. For each sample, if $x$ and $y$ are matched, $z = [1, 0]$; otherwise $z = [0, 1]$ (''one-hot'' encoding). The authors then add another set of binary-valued latent variables to the hidden layer. The entire structure is shown in figure 40, where $h$ denotes the intrinsic relationship between $x$ and $y$; $h$ and $z$ are connected by a weight matrix $U$. Its energy function is as follows: $E(x, y, h, z) = -\sum_{i,j,k} W_{ijk}x_iy_jh_k - h^{\top}Uz - a^{\top}x - b^{\top}y - c^{\top}h - d^{\top}z$, where $a, b, c, d$ are the biases of $x, y, h, z$, respectively. The value of $z_t$, $t \in \{1, 2\}$ (also known as the activation conditional probability) is then given by a softmax over the two label units: $p(z_t = 1 \mid h) = \frac{\exp(U_{:,t}^{\top}h + d_t)}{\sum_{s=1}^{2}\exp(U_{:,s}^{\top}h + d_s)}$. In fact, the model is a two-layer RBM: the first layer is a ternary RBM $(x, y, h)$, and the second layer is a traditional binary RBM $(h, z)$. For the 3rd-order tensor $\mathcal{W}$ of the first layer, we can again use the CP decomposition.
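The parameter savings claimed for the CP and rank-one forms are easy to sanity-check numerically; the mode sizes, hidden width, and rank below are arbitrary assumptions.

```python
import numpy as np

I = [20, 20, 20]   # visible mode sizes I_1..I_N (assumed)
J, R = 50, 10      # hidden units and CP rank (assumed)

full  = J * np.prod(I)          # dense weight tensor: J * prod(I_n)
cp    = R * (J + sum(I) + 1)    # CP factors W_n (I_n x R), W_{N+1} (J x R), diagonal core
rank1 = J + sum(I)              # rank-one outer product of vectors

print(full, cp, rank1)          # 400000 vs 1110 vs 110
```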
A Survey on Tensor Techniques and Applications in Machine Learning <s> Example 2: <s> Matrix factorizations and their extensions to tensor factorizations and decompositions have become prominent techniques for linear and multilinear blind source separation (BSS), especially multiway Independent Component Analysis (ICA), Nonnegative Matrix and Tensor Factorization (NMF/NTF), Smooth Component Analysis (SmoCA) and Sparse Component Analysis (SCA). Moreover, tensor decompositions have many other potential applications beyond multilinear BSS, especially feature extraction, classification, dimensionality reduction and multiway clustering. In this paper, we briefly overview new and emerging models and approaches for tensor decompositions in applications to group and linked multiway BSS/ICA, feature extraction, classification and Multiway Partial Least Squares (MPLS) regression problems. Keywords: Multilinear BSS, linked multiway BSS/ICA, tensor factorizations and decompositions, constrained Tucker and CP models, Penalized Tensor Decompositions (PTD), feature extraction, classification, multiway PLS and CCA. <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> Example 2: <s> Modern applications in engineering and data science are increasingly based on multidimensional data of exceedingly high volume, variety, and structural richness. However, standard machine learning algorithms typically scale exponentially with data volume and complexity of cross-modal couplings - the so called curse of dimensionality - which is prohibitive to the analysis of large-scale, multi-modal and multi-relational datasets. Given that such data are often efficiently represented as multiway arrays or tensors, it is therefore timely and valuable for the multidisciplinary machine learning and data analytic communities to review low-rank tensor decompositions and tensor networks as emerging tools for dimensionality reduction and large scale optimization problems. Our particular emphasis is on elucidating that, by virtue of the underlying low-rank approximations, tensor networks have the ability to alleviate the curse of dimensionality in a number of applied areas. In Part 1 of this monograph we provide innovative solutions to low-rank tensor network decompositions and easy to interpret graphical representations of the mathematical operations on tensor networks. Such a conceptual insight allows for seamless migration of ideas from the flat-view matrices to tensor network operations and vice versa, and provides a platform for further developments, practical applications, and non-Euclidean extensions. It also permits the introduction of various tensor network operations without an explicit notion of mathematical expressions, which may be beneficial for many research communities that do not directly rely on multilinear algebra. Our focus is on the Tucker and tensor train (TT) decompositions and their extensions, and on demonstrating the ability of tensor networks to provide linearly or even super-linearly (e.g., logarithmically) scalable solutions, as illustrated in detail in Part 2 of this monograph. <s> BIB002 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> Example 2: <s> Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and a classification tool in the computer vision and machine learning community. Current CSC methods can only reconstruct single-feature 2D images independently.
However, learning multidimensional dictionaries and sparse codes for the reconstruction of multi-dimensional data is very important, as it examines correlations among all the data jointly. This provides more capacity for the learned dictionaries to better reconstruct data. In this paper, we propose a generic and novel formulation for the CSC problem that can handle an arbitrary order tensor of data. Backed with experimental results, our proposed formulation can not only tackle applications that are not possible with standard CSC solvers, including colored video reconstruction (5D tensors), but it also performs favorably in reconstruction with much fewer parameters as compared to naive extensions of standard CSC to multiple features/channels. <s> BIB003 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> Example 2: <s> In pattern classification, polynomial classifiers are well-studied methods as they are capable of generating complex decision surfaces. Unfortunately, the use of multivariate polynomials is limited to kernels as in support-vector machines, because polynomials quickly become impractical for high-dimensional problems. In this paper, we effectively overcome the curse of dimensionality by employing the tensor train (TT) format to represent a polynomial classifier. Based on the structure of TTs, two learning algorithms are proposed, which involve solving different optimization problems of low computational complexity. Furthermore, we show how both regularization to prevent overfitting and parallelization, which enables the use of large training sets, are incorporated into these methods. The efficiency and efficacy of our tensor-based polynomial classifier are then demonstrated on the two popular data sets U.S. Postal Service and Modified NIST. <s> BIB004
For the polynomial f in example 1, since n = (1, 3, 2), the Vandermonde vectors are $v(x_1) \in \mathbb{R}^2$, $v(x_2) \in \mathbb{R}^4$ and $v(x_3) \in \mathbb{R}^3$. The nonzero elements of the coefficient tensor $\mathcal{A} \in \mathbb{R}^{2\times 4\times 3}$ are $a_{111} = 1$, $a_{211} = 1$, $a_{141} = 3$, $a_{112} = 2$, $a_{113} = 4$, $a_{123} = -2$, $a_{222} = -5$. We combine the indices of the three Vandermonde vectors to get the indices of $\mathcal{A}$; for example, the term $-5x_1x_2x_3$ comes from the product of the second entries of $v(x_1)$, $v(x_2)$ and $v(x_3)$, giving $a_{222} = -5$. Given a set of $N$ training samples $(x_i, y_i)$, $i = 1, 2, \cdots, N$, $x_i \in \mathbb{R}^m$, after feature extraction each feature is mapped to a high-dimensional space by the mapping T. Therefore, formula 139 is further equivalent to:

Example 3: Here we consider the example of a binary (two-variable) polynomial for the sake of simplicity. Assuming $f = 2 + 3x_1 - \cdots$, then according to formulas 9 and 17, both $T(x)$ and $\mathcal{A}$ are 2nd-order tensors (matrices). Similar to the idea of the SVM, polynomial classification looks for a hyperplane to distinguish between these two types of examples. Its ultimate goal is to find the coefficient tensor $\mathcal{A}$ so that:

Considering the TT decomposition of the coefficient tensor, $\mathcal{A} = \mathcal{A}_1 \times_{3,1} \mathcal{A}_2 \cdots \times_{3,1} \mathcal{A}_m$, the above polynomial equation (formula 134) has the following further property, where $\mathrm{vec}(\mathcal{A}_i)_2$ means the mode-2 vectorization of the tensor (see formula 15).

Example 4: For the polynomial f in example 3, according to formula 137, letting $i = 2$, we get $T(x) \bullet \mathcal{A} = (q_2(x)^{\top} \otimes_L v(x_2)^{\top} \otimes_L p_2(x))\,\mathrm{vec}(\mathcal{A}_2)_2$.

(Chen et al.) BIB004 proposed two loss functions, the least squares loss and the logistic loss, where the first formula is the least squares loss function and the second is the logistic loss function. According to formula 137, the least squares loss function of formula 139 can be further transformed into a form in which $C_j[i]$ means the $i$th entry of the vector $C_j$ and $y = [y_1, y_2, \cdots, y_N]^{\top}$. If we further add a regularization term, we obtain the final loss function. Finally, the problem is transformed into minimizing this loss over the tensor $\mathcal{A}$ in TT format. In fact, the idea of the optimization is still similar to alternating least squares, so we call it the improved alternating least squares method. The central idea is to update only one core $\mathcal{A}_n$ in each iteration while keeping the other cores unchanged. In general, we first update from $\mathcal{A}_1$ to $\mathcal{A}_n$, so that the left half is updated, which we call the forward half-sweep. Then we update from $\mathcal{A}_n$ back to $\mathcal{A}_1$, so that the right half is updated, which we call the backward half-sweep. When both the forward and backward half-sweeps are completed, one iteration is complete (see figure 41 and algorithm 13).
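To check the index bookkeeping in example 2, the following NumPy sketch builds the coefficient tensor $\mathcal{A}$ from its listed nonzero entries and evaluates f by contracting it with the three Vandermonde vectors; the evaluation point is arbitrary.

```python
import numpy as np

# Coefficient tensor A in R^{2x4x3} from Example 2 (1-based indices shifted to 0-based).
A = np.zeros((2, 4, 3))
A[0, 0, 0] = 1    # a_111: constant term 1
A[1, 0, 0] = 1    # a_211: x1
A[0, 3, 0] = 3    # a_141: 3*x2^3
A[0, 0, 1] = 2    # a_112: 2*x3
A[0, 0, 2] = 4    # a_113: 4*x3^2
A[0, 1, 2] = -2   # a_123: -2*x2*x3^2
A[1, 1, 1] = -5   # a_222: -5*x1*x2*x3

def v(x, d):
    """Vandermonde vector [1, x, ..., x^d]."""
    return x ** np.arange(d + 1)

def f(x1, x2, x3):
    # Contract A with the three Vandermonde vectors; degrees follow n = (1, 3, 2).
    return np.einsum('ijk,i,j,k->', A, v(x1, 1), v(x2, 3), v(x3, 2))

x1, x2, x3 = 0.5, -1.2, 2.0
direct = 1 + x1 + 3*x2**3 + 2*x3 + 4*x3**2 - 2*x2*x3**2 - 5*x1*x2*x3
assert np.isclose(f(x1, x2, x3), direct)
```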
A Survey on Tensor Techniques and Applications in Machine Learning <s> 5) TENSOR-BASED FEATURE FUSION FOR FACE RECOGNITION <s> Deep learning has achieved great success in face recognition, however deep-learned features still have limited invariance to strong intra-personal variations such as large pose changes. It is observed that some facial attributes (e.g. eyebrow thickness, gender) are robust to such variations. We present the first work to systematically explore how the fusion of face recognition features (FRF) and facial attribute features (FAF) can enhance face recognition performance in various challenging scenarios. Despite the promise of FAF, we find that in practice existing fusion methods fail to leverage FAF to boost face recognition performance in some challenging scenarios. Thus, we develop a powerful tensor-based framework which formulates feature fusion as a tensor optimisation problem. It is nontrivial to directly optimise this tensor due to the large number of parameters to optimise. To solve this problem, we establish a theoretical equivalence between low-rank tensor optimisation and a two-stream gated neural network. This equivalence allows tractable learning using standard neural network optimisation tools, leading to accurate and stable optimisation. Experimental results show the fused feature works better than individual features, thus proving for the first time that facial attributes aid face recognition. We achieve state-of-the-art performance on three popular databases: MultiPIE (cross pose, lighting and expression), CASIA NIR-VIS2.0 (cross-modality environment) and LFW (uncontrolled environment). <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 5) TENSOR-BASED FEATURE FUSION FOR FACE RECOGNITION <s> In pattern classification, polynomial classifiers are well-studied methods as they are capable of generating complex decision surfaces. Unfortunately, the use of multivariate polynomials is limited to kernels as in support-vector machines, because polynomials quickly become impractical for high-dimensional problems. In this paper, we effectively overcome the curse of dimensionality by employing the tensor train (TT) format to represent a polynomial classifier. Based on the structure of TTs, two learning algorithms are proposed, which involve solving different optimization problems of low computational complexity. Furthermore, we show how both regularization to prevent overfitting and parallelization, which enables the use of large training sets, are incorporated into these methods. The efficiency and efficacy of our tensor-based polynomial classifier are then demonstrated on the two popular data sets U.S. Postal Service and Modified NIST. <s> BIB002 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 5) TENSOR-BASED FEATURE FUSION FOR FACE RECOGNITION <s> Detecting layout hotspots is a key step in the physical verification flow. Although machine learning solutions show benefits over lithography simulation and pattern matching-based methods, it is still hard to select a proper model for large scale problems and inevitably, performance degradation occurs. To overcome these issues, in this paper we develop a deep learning framework for high performance and large scale hotspot detection. 
First, we use feature tensor generation to extract representative layout features that fit well with convolutional neural networks while keeping the spatial relationship of the original layout pattern with minimal information loss. Second, we propose a biased learning algorithm to train the convolutional neural network to further improve detection accuracy with small false alarm penalties. In addition, to simplify the training procedure and seek a better trade-off between accuracy and false alarms, we extend the original biased learning to a batch biased learning algorithm. Experimental results show that our framework outperforms previous machine learning-based hotspot detectors in both ICCAD 2012 Contest benchmarks and large scale industrial benchmarks. Source code and trained models are available at https://github.com/phdyang007/dlhsd. <s> BIB003
In general, traditional face recognition has only a single input x, and the output expression is as follows:

But (Hu et al.) BIB001 proposed to combine the face attribute feature and the face recognition feature, which simply adds an input z. Their output model then becomes a form in which the bias is omitted, $i = 1, \cdots, N$ indexes the samples, and the weight tensor is $\mathcal{W} \in \mathbb{R}^{A\times C\times B}$. The goal is still to optimize the loss function between the predicted and true values. The author used the Tucker decomposition. Then formula 144 becomes as follows: using formula 169 and some related properties, it can be converted, and according to the nature of the Kronecker product it can be further transformed; the resulting vector is called the fused feature. The entire classification process is shown in figure 43. Finally, the entire training process is actually the process of solving for the factor matrices and the core tensor. This way of decomposing reduces the number of parameters, thus reducing the computation time, so the efficiency of large-scale data processing can be improved.

FIGURE 41. The improved alternating least squares method for TT decomposition BIB002 . First the forward half-sweep, then the backward half-sweep; when the forward and backward half-sweeps are both completed, one iterative update is complete. After each update of a core tensor, the green matrix R generated by QR decomposition of the core is absorbed into the adjacent core, and the update then continues with that adjacent core.

Algorithm 14 Feature Tensor Generation (Yang et al.) BIB003
Input: The original image $X \in \mathbb{R}^{N\times N}$;
Output: Feature tensor, which we call $F_t$;
1: First, the original input image X is divided into $n\times n$ block regions, and then multi-level perception is performed to find the feature representation of each block region; we call the block regions $Y_{a,b}$ $(a, b = 1, 2, \cdots, n)$;
2: Perform a DCT transform on each block to obtain the coefficients $D_{a,b}$;
3: Flatten the coefficients of each block into a vector $C_{a,b} = [D_{a,b}(1, 1), \cdots, D_{a,b}(K, K)]$;
4: Pick the first k elements of each $C_{a,b}$: $C_{ab} = C_{a,b}[0:k]$;
5: Finally, all these element groups together become the feature tensor $F_t$.

FIGURE 42. Feature tensor generation example (n = 8). We assume that the original image (800 × 800) is divided into 8 × 8 blocks, and each block is 100 × 100. Then we perform the DCT transform. Finally it is encoded into a feature tensor of size 8 × 8 × 100.
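Below is a rough NumPy/SciPy sketch of the feature tensor generation in Algorithm 14. It assumes a plain 2-D DCT per block and simple row-major flattening (the zig-zag ordering and the multi-level perception step are simplified away); the sizes follow the example in figure 42.

```python
import numpy as np
from scipy.fft import dctn   # 2-D type-II DCT

def feature_tensor(X, n=8, k=100):
    """Split image X into n x n blocks, DCT each block,
    keep the first k coefficients per block -> tensor (n, n, k)."""
    N = X.shape[0]
    s = N // n                               # block side length
    F = np.zeros((n, n, k))
    for a in range(n):
        for b in range(n):
            block = X[a*s:(a+1)*s, b*s:(b+1)*s]
            D = dctn(block, norm='ortho')    # DCT coefficients D_{a,b}
            C = D.flatten()                  # flatten (row-major here, zig-zag in the paper)
            F[a, b, :] = C[:k]               # keep the first k elements
    return F

X = np.random.rand(800, 800)                 # stand-in for the 800 x 800 image
print(feature_tensor(X).shape)               # (8, 8, 100)
```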
A Survey on Tensor Techniques and Applications in Machine Learning <s> 6) TENSOR-BASED GRAPH EMBEDDING ALGORITHM <s> Previous work has demonstrated that the image variations of many objects (human faces in particular) under variable lighting can be effectively modeled by low dimensional linear spaces. The typical linear sub-space learning algorithms include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Locality Preserving Projection (LPP). All of these methods consider an $n_1 \times n_2$ image as a high dimensional vector in $\mathbb{R}^{n_1 \times n_2}$, while an image represented in the plane is intrinsically a matrix. In this paper, we propose a new algorithm called Tensor Subspace Analysis (TSA). TSA considers an image as the second order tensor in $\mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2}$, where $\mathbb{R}^{n_1}$ and $\mathbb{R}^{n_2}$ are two vector spaces. The relationship between the column vectors of the image matrix and that between the row vectors can be naturally characterized by TSA. TSA detects the intrinsic local geometrical structure of the tensor space by learning a lower dimensional tensor subspace. We compare our proposed approach with PCA, LDA and LPP methods on two standard databases. Experimental results demonstrate that TSA achieves better recognition rate, while being much more efficient. <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 6) TENSOR-BASED GRAPH EMBEDDING ALGORITHM <s> An appearance model adaptable to changes in object appearance is critical in visual object tracking. In this paper, we treat an image patch as a two-order tensor which preserves the original image structure. We design two graphs for characterizing the intrinsic local geometrical structure of the tensor samples of the object and the background. Graph embedding is used to reduce the dimensions of the tensors while preserving the structure of the graphs. Then, a discriminant embedding space is constructed. We prove two propositions for finding the transformation matrices which are used to map the original tensor samples to the tensor-based graph embedding space. In order to encode more discriminant information in the embedding space, we propose a transfer-learning-based semi-supervised strategy to iteratively adjust the embedding space into which discriminative information obtained from earlier times is transferred. We apply the proposed semi-supervised tensor-based graph embedding learning algorithm to visual tracking. The new tracking algorithm captures an object's appearance characteristics during tracking and uses a particle filter to estimate the optimal object state. Experimental results on the CVPR 2013 benchmark dataset demonstrate the effectiveness of the proposed tracking algorithm. <s> BIB002
The graph embedding algorithm is generally used to better classify data by reducing the dimensionality of the data while preserving the graph structure of the data. In order to accurately classify and identify the target object in an image, (Hu et al.) BIB002 used second-order tensor-based graph embedding to learn a discriminant subspace (discriminative embedding space), and distinguished target-object image blocks from background image blocks in that subspace. First, they assume that the input training samples are Nth-order tensors, $X_a \in \mathbb{R}^{R_1\times R_2\cdots\times R_n}$, $a = 1, 2, \cdots, N$. They construct an intrinsic graph $G_i$ to characterize the correlation between the target samples and the background samples. In addition, they construct a penalty graph $G_p$ to characterize the difference between the target samples and the background samples, so as to separate them in the image. These two graphs represent the geometric and discriminant structure of the input samples. Define the weight matrices of the two graphs separately as $W^i$ and $W^p$: the element $W^i_{ab}$ of $W^i$ represents the degree of similarity between the vertices $X_a$ and $X_b$, and the element $W^p_{ab}$ of $W^p$ represents the degree of difference between $X_a$ and $X_b$. Tensor-based graph embedding aims to find the best low-dimensional tensor representation for each vertex in a graph G, such that the low-dimensional tensors describe the similarity between vertices well. The optimal tensor representation of the vertices is obtained by solving the following optimization problem, where $B_n \in \mathbb{R}^{I_n\times R_n}$ are called transfer matrices and d is a constant chosen as needed. In fact, we can see that this is similar to the optimal solution of a Tucker (HOSVD) decomposition with constraints: the $B_n$ are in fact factor matrices, and $\mathcal{X}$ plays the role of the core tensor in the Tucker decomposition, but note that here $I_n \leq R_n$. Moreover, depending on the image itself, an image can be seen as a matrix. (He et al.) BIB001 proposed a solution to the above problem. First, the mode-n matricization of the tensor is used to convert the above optimization problem into an equivalent form. Note that since the inputs $X_a \in \mathbb{R}^{R_1\times R_2}$, $a = 1, 2, \cdots, N$, are second-order tensors, according to the definition of the mode-n matricization, the following formula is established: According to figure 20, the mode-n product of a tensor and a matrix can be converted into a matrix product. Then $Y_a = X_a \times_{1m} B_1 \times_{2m} B_2$ in formula 148 becomes as follows: The optimization problem of formula 148 is thus converted into: (He et al.) BIB001 then simplified the operation further. Note that since images are mostly two-dimensional, the above authors used the second-order (matrix) form. In fact, if the input is a higher-order tensor (n > 3), we can still use alternating least squares to convert the above multivariate optimization problem into a sequence of single-variable optimization problems.
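As a loose illustration of the two graph objectives (not the authors' actual solver), the sketch below evaluates the graph-preserving cost $\sum_{a,b} W_{ab}\,\|B_1^{\top}X_aB_2 - B_1^{\top}X_bB_2\|_F^2$ for candidate transfer matrices; the sample counts, sizes, and the two weight matrices are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, R1, R2, I1, I2 = 30, 16, 16, 4, 4          # sample count, input and embedding sizes (assumed)

X = rng.standard_normal((N, R1, R2))          # second-order tensor samples X_a
Wi = rng.random((N, N)); Wi = (Wi + Wi.T) / 2  # intrinsic-graph weights (similarity), placeholder
Wp = rng.random((N, N)); Wp = (Wp + Wp.T) / 2  # penalty-graph weights (difference), placeholder

def objective(B1, B2, W):
    # sum_{a,b} W_ab * || B1^T X_a B2 - B1^T X_b B2 ||_F^2
    Y = np.einsum('ri,ars,sj->aij', B1, X, B2)     # embedded samples Y_a = B1^T X_a B2
    diff = Y[:, None] - Y[None, :]                 # pairwise differences, shape (N, N, I1, I2)
    return np.einsum('ab,abij,abij->', W, diff, diff)

# Random orthonormal transfer matrices as a starting point.
B1 = np.linalg.qr(rng.standard_normal((R1, I1)))[0]
B2 = np.linalg.qr(rng.standard_normal((R2, I2)))[0]

# A good embedding keeps the intrinsic objective small and the penalty objective large:
print(objective(B1, B2, Wi), objective(B1, B2, Wp))
```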
A Survey on Tensor Techniques and Applications in Machine Learning <s> C. APPLICATION OF TENSOR IN DATA PREPROCESSING 1) TENSOR DICTIONARY LEARNING <s> In recent years, a class of dictionaries have been proposed for multidimensional (tensor) data representation that exploit the structure of tensor data by imposing a Kronecker structure on the dictionary underlying the data. In this work, a novel algorithm called “STARK” is provided to learn Kronecker structured dictionaries that can represent tensors of any order. By establishing that the Kronecker product of any number of matrices can be rearranged to form a rank-1 tensor, we show that Kronecker structure can be enforced on the dictionary by solving a rank-1 tensor recovery problem. Because rank-1 tensor recovery is a challenging nonconvex problem, we resort to solving a convex relaxation of this problem. Empirical experiments on synthetic and real data show promising results for our proposed algorithm. <s> BIB001
Dictionary learning refers to finding a sparse representation of the original data while preserving the structure of the data without distortion, thereby achieving data compression and ultimately reducing computational complexity (see figure 44). General dictionary learning boils down to the following optimization problem, where $A \in \mathbb{R}^{J\times I}$ is the dictionary matrix, $y_i \in \mathbb{R}^J$, $i = 1, \cdots, N$, are the N raw data points, and $x_i \in \mathbb{R}^I$ are the vectors of their sparse representations. We now extend vectors to tensors: when the input raw data $\mathcal{Y} \in \mathbb{R}^{I_1\times I_2\times\cdots\times I_N}$ is an Nth-order tensor, we obtain tensor dictionary learning. Tensor decomposition methods are usually used to solve the tensor dictionary learning problem. (Ghassemi et al.) BIB001 used the Kronecker product representation of the Tucker decomposition to express the above optimization problem. According to the expression of the Tucker decomposition in formula 108, we combine the N samples as the N column vectors of a new matrix Y and obtain the matrix expression as follows: At this point we call the factor matrices Kronecker structured (KS) matrices and let $D = B_N \otimes_L B_{N-1} \cdots \otimes_L B_1$. However, a more general dictionary matrix has a low separation rank structure, i.e., it is a sum of KS matrices, as follows: Consider another property: let $D = B_2 \otimes_R B_1$; the elements of D can be rearranged into the form of a vector outer product, $\mathcal{D}_r = \mathrm{vec}(B_1) \circ \mathrm{vec}(B_2)$, which is a rank-one tensor. Then we can convert equation 157 into the following equivalent form: So we can use this structure as a regularization term. Finally, we get the optimization expression for tensor dictionary learning, where $\|\mathcal{D}_{r,(n)}\|_*$ is the nuclear norm of the matrix obtained by the mode-n matricization of the tensor $\mathcal{D}_r$. The problem is generally solved by the method of Lagrange multipliers. Since the solution process is rather involved, it is omitted here; for details, please refer to (Ghassemi et al.) BIB001 .
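The rearrangement behind the KS dictionary can be verified in a few lines: with column-major vectorization, $\mathrm{vec}(X \times_1 B_1 \times_2 B_2) = (B_2 \otimes B_1)\,\mathrm{vec}(X)$. The sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
I1, I2, J1, J2 = 3, 4, 5, 6
B1 = rng.standard_normal((J1, I1))    # factor dictionary for mode 1
B2 = rng.standard_normal((J2, I2))    # factor dictionary for mode 2
X  = rng.standard_normal((I1, I2))    # coefficient tensor (dense here, just for the check)

# Multilinear transform: Y = X x_1 B1 x_2 B2 = B1 X B2^T
Y = B1 @ X @ B2.T

# Equivalent matrix-vector form with the Kronecker-structured dictionary D = B2 (x) B1,
# using column-major (Fortran-order) vectorization.
D = np.kron(B2, B1)
assert np.allclose(Y.flatten(order='F'), D @ X.flatten(order='F'))
```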
A Survey on Tensor Techniques and Applications in Machine Learning <s> 2) TENSOR COMPLETION FOR DATA PROCESSING <s> Images, often stored in multidimensional arrays, are fast becoming ubiquitous in medical and public health research. Analyzing populations of images is a statistical problem that raises a host of daunting challenges. The most significant challenge is the massive size of the datasets incorporating images recorded for hundreds or thousands of subjects at multiple visits. We introduce the population value decomposition (PVD), a general method for simultaneous dimensionality reduction of large populations of massive images. We show how PVD can be seamlessly incorporated into statistical modeling, leading to a new, transparent, and rapid inferential framework. Our PVD methodology was motivated by and applied to the Sleep Heart Health Study, the largest community-based cohort study of sleep containing more than 85 billion observations on thousands of subjects at two visits. This article has supplementary material online. <s> BIB001 </s> A Survey on Tensor Techniques and Applications in Machine Learning <s> 2) TENSOR COMPLETION FOR DATA PROCESSING <s> Deep neural networks have been widely applied in many areas, such as computer vision, natural language processing and information retrieval. However, due to the high computation and memory demands, deep learning applications have not been adopted in edge learning. In this paper, we exploit the sparsity in tensors to reduce the computation overheads and memory demands. Unlike other approaches which rely on hardware accelerator designs or sacrifice model accuracy for the performance by pruning parameters, we adaptively partition and deploy the workload to heterogeneous devices to reduce computation and memory requirements and increase computing efficiency. We had implemented our partitioning algorithms in Google's TensorFlow and evaluated on an AMD Kaveri system, which is an HSA-based heterogeneous computing system. Our method has effectively reduced the computation time, cache accesses, and cache miss rates, without impacting the accuracy of the learning models. Our approach achieves 66% and 88% speedup for the lenet-5 model and the lenet-1024-1024 model, respectively. For reducing memory traffic, our approach reduces 71% instruction cache references, 32% data cache references. Our system has also improved cache miss rate from 1.6% to 0.5% during the training of the lenet-1024-1024 model. <s> BIB002
In data processing, the data sometimes contain missing values. There are many ways to complete missing data; popular ones are matrix estimation and matrix completion. If the input data is a tensor, we accordingly speak of tensor estimation and tensor completion. Tensor estimation and tensor completion are similar: both require solving a corresponding constrained minimization problem. However, tensor estimation mainly minimizes the mean square error between the estimated values and the original values. Here we mainly introduce tensor completion. General tensor completion seeks the optimal solution of the following expression, where $\mathcal{X} \in \mathbb{R}^{I_1\times I_2\times\cdots\times I_N}$ is a tensor with missing values, $\mathcal{Y} \in \mathbb{R}^{I_1\times I_2\times\cdots\times I_N}$ is the reconstruction tensor, $\circledast$ denotes the element-wise product, and $\mathcal{I} \in \mathbb{R}^{I_1\times I_2\times\cdots\times I_N}$ represents the indexes of the missing values in $\mathcal{X}$. The entries of $\mathcal{I}$ are as follows: The first step in such problems is usually to find a low-rank approximation of the original tensor. Conventional methods use one of the tensor decompositions introduced in part one, such as the CP, HOSVD, or TT decomposition. (Peng et al.) BIB002 used the HOSVD decomposition, but they did not use the traditional truncated-SVD algorithm (see algorithm 2), since traditional algorithms need to initialize the approximate rank of the given tensor and the factor matrices first, which requires a lot of pre-computation. They therefore proposed an adaptive algorithm to obtain the low-rank approximation of the tensor. First, they set an error parameter $\alpha \in [0, 1]$. Then, similar to the truncated SVD, an SVD is performed on each mode-k matricization of the core tensor, $k = 1, 2, \cdots, N$, giving $U_kS_kV_k^{\top}$, where $S_k$ is a diagonal matrix with nonzero entries $s_{jj}$, $j = 1, \cdots, K$, and $K$ is the rank of the matricization. The optimal rank can then be obtained by the following rule, where $R_0$ is the predefined lower bound of the rank, which prevents the rank from being too small. The detailed process is shown in algorithm 16.
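A minimal sketch of this style of completion, assuming a simple impute-truncate-repeat loop and a singular-value-energy rank rule in the spirit of the adaptive scheme described above (the exact rule of algorithm 16 may differ):

```python
import numpy as np

def adaptive_rank(s, alpha, R0=1):
    """Smallest rank keeping a (1 - alpha) fraction of singular-value energy."""
    energy = np.cumsum(s**2) / np.sum(s**2)
    return max(R0, int(np.searchsorted(energy, 1.0 - alpha) + 1))

def hosvd_complete(X, mask, alpha=0.05, iters=50):
    """Fill missing entries of X (mask = True where observed)."""
    Y = np.where(mask, X, np.mean(X[mask]))         # initialize missing entries with the mean
    for _ in range(iters):
        Z = Y
        for k in range(Y.ndim):                     # truncate each mode-k matricization
            Zk = np.moveaxis(Z, k, 0).reshape(Z.shape[k], -1)
            U, s, Vt = np.linalg.svd(Zk, full_matrices=False)
            r = adaptive_rank(s, alpha)
            Zk = (U[:, :r] * s[:r]) @ Vt[:r]
            Z = np.moveaxis(Zk.reshape((Z.shape[k],) + tuple(np.delete(Z.shape, k))), 0, k)
        Y = np.where(mask, X, Z)                    # re-impose the observed entries
    return Y

# Toy usage: recover a rank-1 tensor from 30% of its entries.
T = np.einsum('i,j,k->ijk', *[np.random.rand(d) for d in (8, 9, 10)])
mask = np.random.rand(*T.shape) < 0.3
print(np.abs(hosvd_complete(T, mask) - T).max())
```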
A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Introduction <s> A quantitative measure of "information" is developed which is based on physical as contrasted with psychological considerations. How the rate of transmission of this information over a system is limited by the distortion resulting from storage of energy is discussed from the transient viewpoint. The relation between the transient and steady state viewpoints is reviewed. It is shown that when the storage of energy is used to restrict the steady state transmission to a limited range of frequencies the amount of information that can be transmitted is proportional to the product of the width of the frequency-range by the time it is available. Several illustrations of the application of this principle to practical systems are included. In the case of picture transmission and television the spacial variation of intensity is analyzed by a steady state method analogous to that commonly used for variations with time. <s> BIB001 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Introduction <s> A method is presented for quantitative evaluation of observer detection performance data based on elementary principles of information theory. The resulting index of detectability, average information content per observation, is compared with previously proposed measures of observer performance both on theoretical grounds and for the practical problem of evaluating radiographic screen-film systems. <s> BIB002 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Introduction <s> Abstract The value of a diagnostic test lies in its ability to detect patients with disease (its sensitivity) and to exclude patients without disease (its specificity). For tests with binary outcomes, these measures are fixed. For tests with a continuous scale of values, various cutoff points can be selected to adjust the sensitivity and specificity of the test to conform with the physician's goals. Principles of statistical decision theory and information theory suggest technics for objectively determining these cutoff points, depending upon whether the physician is concerned with health costs, with financial costs, or with the information content of the test. (N Engl J Med 293:211–215, 1975) <s> BIB003 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Introduction <s> The inherent imperfection of clinical diagnostic tests introduces uncertainty into their interpretation. The magnitude of diagnostic uncertainty after any test may be quantified by information theory. The information content of the electrocardiographic ST-segment response to exercise, relative to the diagnosis of angiographic coronary artery disease, was determined using literature-based pooled estimates of the true- and false-positive rates for various magnitudes of ST depression from less than 0.5 mm to greater than or equal to 2.5 mm. This analysis allows three conclusions of clinical relevance. First, the diagnostic information content of exercise-induced ST-segment depression, interpreted by the standard 1.0-mm criterion, averages only 15% of that of coronary angiography. Second, there is a 41% increase in information content when the specific magnitude of ST-segment depression is analyzed, as opposed to the single, categorical 1-mm criterion.
Third, the information obtained from ECG stress testing is markedly influenced by the prevalence of disease in the population tested, being low in the asymptomatic and typical angina groups and substantially greater in groups with nonanginal chest pain and atypical angina. The quantitation of information has broad relevance to selection and use of diagnostic tests, because one can analyze objectively the value of different interpretation criteria, compare one test with another and evaluate the cost-effectiveness of both a single test and potential testing combination. <s> BIB004 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Introduction <s> Repressors, polymerases, ribosomes and other macromolecules bind to specific nucleic acid sequences. They can find a binding site only if the sequence has a recognizable pattern. We define a measure of the information (R sequence) in the sequence patterns at binding sites. It allows one to investigate how information is distributed across the sites and to compare one site to another. One can also calculate the amount of information (R frequency) that would be required to locate the sites, given that they occur with some frequency in the genome. Several Escherichia coli binding sites were analyzed using these two independent empirical measurements. The two amounts of information are similar for most of the sites we analyzed. In contrast, bacteriophage T7 RNA polymerase binding sites contain about twice as much information as is necessary for recognition by the T7 polymerase, suggesting that a second protein may bind at T7 promoters. The extra information can be accounted for by a strong symmetry element found at the T7 promoters. This element may be an operator. If this model is correct, these promoters and operators do not share much information. The comparisons between R sequence and R frequency suggest that the information at binding sites is just sufficient for the sites to be distinguished from the rest of the genome. <s> BIB005 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Introduction <s> Relative entropy is a concept within information theory that provides a measure of the distance between two probability distributions. The author proposes that the amount of information gained by performing a diagnostic test can be quantified by calculating the relative entropy between the posttest and pretest probability distributions. This statistic, in essence, quantifies the degree to which the results of a diagnostic test are likely to reduce our surprise upon ultimately learning a patient's diagnosis. A previously proposed measure of diagnostic information that is also based on information theory (pretest entropy minus posttest entropy) has been criticized as failing, in some cases, to agree with our intuitive concept of diagnostic information. The proposed formula passes the tests used to challenge this previous measure. <s> BIB006 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Introduction <s> Abstract The problem of assessing the quality of an operational forecasting system that produces probabilistic forecasts is addressed using information theory. A measure of the quality of the forecasting scheme, based on the amount of a data compression it allows, is outlined. 
A measure of the quality of the forecasting scheme, based on the amount of a data compression it allows, is outlined. This measure, called ignorance, is a logarithmic scoring rule that is a modified version of relative entropy and can be calculated for real forecasts and realizations. It is equivalent to the expected returns that would be obtained by placing bets proportional to the forecast probabilities. Like the cost–loss score, ignorance is not equivalent to the Brier score, but, unlike cost–loss scores, ignorance easily generalizes beyond binary decision scenarios. The use of the skill score is illustrated by evaluating the ECMWF ensemble forecasts for temperature at London's Heathrow airport. <s> BIB007 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Introduction <s> Objectives: This paper demonstrates that diagnostic test performance can be quantified as the average amount of information the test result (R) provides about the disease state (D). Methods: A fundamental concept of information theory, mutual information, is directly applicable to this problem. This statistic quantifies the amount of information that one random variable contains about another random variable. Prior to performing a diagnostic test, R and D are random variables. Hence, their mutual information, I(D;R), is the amount of information that R provides about D. Results: I(D;R) is a function of both 1) the pretest probabilities of the disease state and 2) the set of conditional probabilities relating each possible test result to each possible disease state. The area under the receiver operating characteristic curve (AUC) is a popular measure of diagnostic test performance which, in contrast to I(D;R), is independent of the pretest probabilities; it is a function of only the set of conditional probabilities. The AUC is not a measure of diagnostic information. Conclusions: Because I(D;R) is dependent upon pretest probabilities, knowledge of the setting in which a diagnostic test is employed is a necessary condition for quantifying the amount of information it provides. Advantages of I(D;R) over the AUC are that it can be calculated without invoking an arbitrary curve fitting routine, it is applicable to situations in which multiple diagnoses are under consideration, and it quantifies test performance in meaningful units (bits of information). <s> BIB008 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Introduction <s> Binary predictors are used in a wide range of crop protection decision making applications. Such predictors provide a simple analytical apparatus for the formulation of evidence related to risk factors, for use in the process of Bayesian updating of probabilities of crop disease. For diagrammatic interpretation of diagnostic probabilities, the receiver operating characteristic is available. Here, we view binary predictors from the perspective of diagnostic information. After a brief introduction to the basic information theoretic concepts of entropy and expected mutual information, we use an example data set to provide diagrammatic interpretations of expected mutual information, relative entropy, information inaccuracy, information updating and specific information. Our information graphs also illustrate correspondences between diagnostic information and diagnostic probabilities. <s> BIB009
Information theory was developed during the first half of the twentieth century to quantify aspects of communication. The pioneering work of Ralph Hartley and, subsequently, Claude Shannon was primarily motivated by problems associated with electronic communication systems BIB001 . Information theory was probably first used to quantify clinical diagnostic information by . Subsequent papers helped to clarify the ability of information theory to quantify diagnostic uncertainty, diagnostic information, and diagnostic test performance, e.g., BIB002 BIB003 BIB004 BIB006 . Although applications of information theory can be highly technical, fundamental concepts of information theory are not difficult to understand. Moreover, they are profound in the sense that they apply to situations in which "communication" is broadly defined. Fundamental information theory functions are defined on random variables. The ubiquity of random processes accounts for the wide range of applications of the theory. Examples of areas of application include meteorology BIB007 , molecular biology BIB005 , quantum mechanics , psychology , plant pathology BIB009 , and music . The random variables of interest to the present discussion are an individual's disease state (D) and diagnostic test result (R). We require that the possible disease states be mutually exclusive and that, for each diagnostic test performed, one result is obtained. Hence, it is meaningful to talk about the probability that an individual randomly selected from a population is in a certain disease state and has a certain test result. The primary purpose of this review is to understand the answers that information theory gives to the following three questions: (1) How do we quantify our uncertainty about the disease state of a given individual? (2) After a diagnostic test is performed and a specific test result is obtained, how do we quantify the information we have received about the tested individual's disease state? (3) Prior to performing a diagnostic test, how do we quantify the amount of information that we expect to receive about the disease state of the tested individual? The answers that information theory gives to these questions are calculated using pretest and posttest probabilities. Whenever the pretest and posttest probabilities differ, the test has provided diagnostic information [16] . The functions are applicable to situations in which any number of disease states are under consideration and in which the diagnostic test can yield any number of results (or continuous results) BIB008 . Moreover, a given test result can alter the probabilities of multiple possible disease states. Since information theory functions depend only upon the probabilities of states, the information content of an observation does not take into consideration the meaning or value of the states (p. 8). For example, the statement that a patient died who had been given a 50-50 chance of survival contains the same amount of information, from an information theory perspective, as the statement that a tossed coin turned up heads. More than one diagnostic test is often required to help clarify a patient's disease state. Hence, an additional goal of this review is to answer questions 2 and 3, above, for the case in which two or more diagnostic tests are performed.
We find that it is possible to quantify both the information that we have received from each of two or more diagnostic tests as well as the information that we expect to receive by performing two or more diagnostic tests. The foundational theorem of information theory is the statement proved by Shannon that the entropy function, discussed below, is the only function that satisfies certain criteria that we require of a measure of the uncertainty about the outcome of a random variable . As an alternative to this axiomatic approach to deriving information theory functions, we employ the concept of the surprisal, with the goal of achieving a more intuitive understanding of these functions. The surprisal function is explained in the following section. It is then used in Section 3 to answer the above three questions and, in doing so, derive expressions for three fundamental information theory functions: the entropy function (Section 3.1), the relative entropy function (Section 3.2), and the mutual information function (Section 3.3). The application of information theory functions to situations in which more than one diagnostic test is performed is considered in Section 4. Section 5 provides a brief review of the history of the application of information theory to clinical diagnostic testing. Examples which offer insight into what information theory can teach us about clinical diagnostic testing are presented in Section 6. The paper concludes by briefly summarizing and clarifying important concepts.
A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> The Surprisal Function <s> A method is presented for quantitative evaluation of observer detection performance data based on elementary principles of information theory. The resulting index of detectability, average information content per observation, is compared with previously proposed measures of observer performance both on theoretical grounds and for the practical problem of evaluating radiographic screen-film systems. <s> BIB001 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> The Surprisal Function <s> Abstract The value of a diagnostic test lies in its ability to detect patients with disease (its sensitivity) and to exclude patients without disease (its specificity). For tests with binary outcomes, these measures are fixed. For tests with a continuous scale of values, various cutoff points can be selected to adjust the sensitivity and specificity of the test to conform with the physician's goals. Principles of statistical decision theory and information theory suggest technics for objectively determining these cutoff points, depending upon whether the physician is concerned with health costs, with financial costs, or with the information content of the test. (N Engl J Med 293:211–215, 1975) <s> BIB002 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> The Surprisal Function <s> Objectives: Mutual information is a fundamental concept of information theory that quantifies the expected value of the amount of information that diagnostic testing provides about a patient's disease state. The purpose of this report is to provide both intuitive and axiomatic descriptions of mutual information and, thereby, promote the use of this statistic as a measure of diagnostic test performance. Methods: We derive the mathematical expression for mutual information from the intuitive assumption that diagnostic information is the average amount that diagnostic testing reduces our surprise upon ultimately learning a patient's diagnosis. This concept is formalized by defining "surprise" as the surprisal, a function that quantifies the unlikelihood of an event. Mutual information is also shown to be the only function that conforms to a set of axioms which are reasonable requirements of a measure of diagnostic information. These axioms are related to the axioms of information theory used to derive the expression for entropy. Results: Both approaches to defining mutual information lead to the known relationship that mutual information is equal to the pretest uncertainty of the disease state minus the expected value of the posttest uncertainty of the disease state. Mutual information also has the property of being additive when a test provides information about independent health problems. Conclusion: Mutual information is the best single measure of the ability of a diagnostic test to discriminate among the possible disease states. <s> BIB003 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> The Surprisal Function <s> Peter Harremoës, Centrum Wiskunde & Informatica, Kruislaan 413, 1090 GB Amsterdam, Noord-Holland, The Netherlands. Received: 27 December 2008 / Published: 28 December 2009. I became the Editor-in-Chief of Entropy in July 2008. As the new Editor-in-Chief, I would like to present my opinion on some changes regarding requirements for publishing in this journal. Terminology and Entropy Concepts: Authors that publish articles in Entropy come from various fields. It is therefore obvious that they describe their problems using slightly different terminology. On the other hand, Entropy should bridge the different fields, and for this reason a more universal language is preferable. Often the authors in a special field simply do not know the terminology used in other fields. This should not be accepted in an interdisciplinary journal. For this reason we have decided to be more strict about the entropy related quantities that the authors use. For instance there is a one-to-one correspondence between Tsallis entropy and Rényi entropy, but that does not mean that it makes no difference whether one uses one or the other. For some problems Tsallis entropy may be more natural and for some problems Rényi entropy may be more useful. In all articles we publish, we will therefore require that the most suitable entropy concept is used – even if that means that the authors have to make major revisions in order to accomplish this requirement. <s> BIB004
The surprisal function, µ, quantifies the unlikelihood of an event [19, BIB003 ]. It is a function of the probability (p) of the event. As its name suggests, it can be thought of as a measure of the amount we are surprised when an event occurs. Hence, this function assigns larger values to less likely events. Another reasonable requirement of the surprisal function is that, for independent events $a_1$ and $a_2$, the surprisal associated with the occurrence of both events should equal the sum of the surprisals associated with each event. Since $a_1$ and $a_2$ are independent, $p(a_1, a_2) = p(a_1)p(a_2)$. We therefore require that $\mu[p(a_1)p(a_2)] = \mu[p(a_1)] + \mu[p(a_2)]$. The only non-negative function that meets these requirements is of the form $\mu(p) = -\log(p)$ BIB001 BIB002 . The choice of the base of the logarithm is arbitrary in the sense that conversion from one base to another is accomplished by multiplication by a constant. Two is often selected as the base of the logarithm, giving measurements in units of bits (binary digits). Some authors use the natural logarithm (giving measurements in units of nats) or log base 10 (giving measurements in units of hartleys) BIB004 . Using log base two, the surprise when a fair coin turns up heads is quantified as one bit, since $-\log_2(1/2) = 1$. Figure 1 plots the surprisal function (in units of bits) over the range of probabilities. Observe that the surprisal associated with the occurrence of an event that is certain to occur is zero, and that there is no number large enough to quantify the surprise associated with the occurrence of an impossible event.
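The surprisal and its additivity over independent events can be checked directly; base-2 logarithms give values in bits.

```python
import numpy as np

def surprisal(p, base=2):
    """mu(p) = -log(p); base 2 gives bits."""
    return -np.log(p) / np.log(base)

print(surprisal(0.5))          # fair coin turning up heads: 1.0 bit
print(surprisal(1.0))          # an event certain to occur: 0 bits

# Additivity for independent events: mu(p1 * p2) = mu(p1) + mu(p2)
p1, p2 = 0.3, 0.6
assert np.isclose(surprisal(p1 * p2), surprisal(p1) + surprisal(p2))
```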
A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Mutual Information Applied to the Case of Multiple Diagnostic Tests <s> The inherent imperfection of clinical diagnostic tests introduces uncertainty into their interpretation. The magnitude of diagnostic uncertainty after any test may be quantified by information theory. The information content of the electrocardiographic ST-segment response to exercise, relative to the diagnosis of angiographic coronary artery disease, was determined using literature-based pooled estimates of the true- and false-positive rates for various magnitudes of ST depression from less than 0.5 mm to greater than or equal to 2.5 mm. This analysis allows three conclusions of clinical relevance. First, the diagnostic information content of exercise-induced ST-segment depression, interpreted by the standard 1.0-mm criterion, averages only 15% of that of coronary angiography. Second, there is a 41% increase in information content when the specific magnitude of ST-segment depression is analyzed, as opposed to the single, categorical 1-mm criterion. Third, the information obtained from ECG stress testing is markedly influenced by the prevalence of disease in the population tested, being low in the asymptomatic and typical angina groups and substantially greater in groups with nonanginal chest pain and atypical angina. The quantitation of information has broad relevance to selection and use of diagnostic tests, because one can analyze objectively the value of different interpretation criteria, compare one test with another and evaluate the cost-effectiveness of both a single test and potential testing combination. <s> BIB001 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Mutual Information Applied to the Case of Multiple Diagnostic Tests <s> Repressors, polymerases, ribosomes and other macromolecules bind to specific nucleic acid sequences. They can find a binding site only if the sequence has a recognizable pattern. We define a measure of the information (R sequence) in the sequence patterns at binding sites. It allows one to investigate how information is distributed across the sites and to compare one site to another. One can also calculate the amount of information (R frequency) that would be required to locate the sites, given that they occur with some frequency in the genome. Several Escherichia coli binding sites were analyzed using these two independent empirical measurements. The two amounts of information are similar for most of the sites we analyzed. In contrast, bacteriophage T7 RNA polymerase binding sites contain about twice as much information as is necessary for recognition by the T7 polymerase, suggesting that a second protein may bind at T7 promoters. The extra information can be accounted for by a strong symmetry element found at the T7 promoters. This element may be an operator. If this model is correct, these promoters and operators do not share much information. The comparisons between R sequence and R frequency suggest that the information at binding sites is just sufficient for the sites to be distinguished from the rest of the genome. <s> BIB002 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Mutual Information Applied to the Case of Multiple Diagnostic Tests <s> Abstract The problem of assessing the quality of an operational forecasting system that produces probabilistic forecasts is addressed using information theory.
A measure of the quality of the forecasting scheme, based on the amount of a data compression it allows, is outlined. This measure, called ignorance, is a logarithmic scoring rule that is a modified version of relative entropy and can be calculated for real forecasts and realizations. It is equivalent to the expected returns that would be obtained by placing bets proportional to the forecast probabilities. Like the cost–loss score, ignorance is not equivalent to the Brier score, but, unlike cost–loss scores, ignorance easily generalizes beyond binary decision scenarios. The use of the skill score is illustrated by evaluating the ECMWF ensemble forecasts for temperature at London's Heathrow airport. <s> BIB003
The mutual information common to random variables X, Y and Z is defined as $I(X; Y; Z) = I(X; Y) - I(X; Y|Z)$, where $I(X; Y|Z)$ is the mutual information between X and Y conditional upon Z [23] (p. 45). Hence, from Equations (5), BIB001 , and (9): $I(X; Y; Z) = H(X) + H(Y) + H(Z) - H(X, Y) - H(X, Z) - H(Y, Z) + H(X, Y, Z)$. Although the mutual information between two random variables is always nonnegative, the mutual information among three random variables can be positive, negative, or zero [23] (p. 45). The expected value of the amount of information that two diagnostic tests, A and B, will provide about the disease state is $I(D; (R_A, R_B))$. This can be expressed in terms of entropies (per Equation (5)) as $I(D; (R_A, R_B)) = H(D) + H(R_A, R_B) - H(D, R_A, R_B)$, and it can be partitioned as Equation (12): $I(D; (R_A, R_B)) = I(D; R_A) + I(D; R_B) - I(D; R_A; R_B)$. Equation (12) can be proved by using Equations (5), BIB003 , and (11) to replace the four mutual information terms with their entropy equivalents. Hence, the expected value of the information that tests A and B provide about disease state D is equal to the sum of the expected values of the information provided by each test minus $I(D; R_A; R_B)$, a term that quantifies the interaction among D, $R_A$, and $R_B$. Since $I(D; R_A; R_B)$ can be positive, negative, or zero, $I(D; (R_A, R_B))$ can be less than, greater than, or equal to the sum of $I(D; R_A)$ and $I(D; R_B)$, respectively. Alternatively, we can use Equations (9) and (12) to partition $I(D; (R_A, R_B))$ as Equation (13): $I(D; (R_A, R_B)) = I(D; R_A) + I(D; R_B|R_A)$, where $I(D; R_B|R_A)$ can be expressed in terms of entropies using Equations (5) and (6): $I(D; R_B|R_A) = H(D, R_A) + H(R_A, R_B) - H(R_A) - H(D, R_A, R_B)$. Although the expressions become more complicated as the number of diagnostic tests increases, the mutual information between the disease state and the results of multiple diagnostic tests can be partitioned in fashions analogous to Equations (12) and (13). For the case in which there are three diagnostic tests, the analogue of Equation (13) is $I(D; (R_A, R_B, R_C)) = I(D; R_A) + I(D; R_B|R_A) + I(D; R_C|(R_A, R_B))$. These equations can be proven, once again, by replacing the mutual information terms with their entropy equivalents, recognizing that joint results such as $(R_A, R_B)$ can themselves be treated as single random variables. The entropy function, expressed as Equation (2), is not defined for continuous random variables. Nevertheless, the mutual information between or among continuous random variables, which is defined, can be approximated numerically as sums and differences of entropies using Equations (5), (10), BIB002 , and (14) [23] (pp. 231-232).
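Both partitions of $I(D; (R_A, R_B))$ can be verified numerically from any joint distribution $p(d, r_A, r_B)$; the joint table below is random, and every quantity is computed from marginal entropies of that table.

```python
import numpy as np

rng = np.random.default_rng(3)
P = rng.random((2, 2, 2))
P /= P.sum()                      # arbitrary joint distribution p(d, ra, rb)

def H(keep):
    """Joint entropy (bits) of the variables on the listed axes (0=D, 1=RA, 2=RB)."""
    drop = tuple(i for i in range(3) if i not in keep)
    p = P.sum(axis=drop).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

I_D_RA   = H([0]) + H([1]) - H([0, 1])                        # I(D; R_A)
I_D_RB   = H([0]) + H([2]) - H([0, 2])                        # I(D; R_B)
I_D_RARB = H([0]) + H([1, 2]) - H([0, 1, 2])                  # I(D; (R_A, R_B))
I_D_RB_g_RA = H([0, 1]) + H([1, 2]) - H([1]) - H([0, 1, 2])   # I(D; R_B | R_A)
interaction = I_D_RB - I_D_RB_g_RA                            # I(D; R_A; R_B)

assert np.isclose(I_D_RARB, I_D_RA + I_D_RB - interaction)    # partition as in Equation (12)
assert np.isclose(I_D_RARB, I_D_RA + I_D_RB_g_RA)             # partition as in Equation (13)
```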
A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Historical Background <s> A quantitative measure of “information” is developed which is based on physical as contrasted with psychological considerations. How the rate of transmission of this information over a system is limited by the distortion resulting from storage of energy is discussed from the transient viewpoint. The relation between the transient and steady state viewpoints is reviewed. It is shown that when the storage of energy is used to restrict the steady state transmission to a limited range of frequencies the amount of information that can be transmitted is proportional to the product of the width of the frequency-range by the time it is available. Several illustrations of the application of this principle to practical systems are included. In the case of picture transmission and television the spacial variation of intensity is analyzed by a steady state method analogous to that commonly used for variations with time. <s> BIB001 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Historical Background <s> A method is presented for quantitative evaluation of observer detection performance data based on elementary principles of information theory. The resulting index of detectability, average information content per observation, is compared with previously proposed measures of observer performance both on theoretical grounds and for the practical problem of evaluating radiographic screen-film systems. <s> BIB002 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Historical Background <s> Abstract The value of a diagnostic test lies in its ability to detect patients with disease (its sensitivity) and to exclude patients without disease (its specificity). For tests with binary outcomes, these measures are fixed. For tests with a continuous scale of values, various cutoff points can be selected to adjust the sensitivity and specificity of the test to conform with the physician's goals. Principles of statistical decision theory and information theory suggest technics for objectively determining these cutoff points, depending upon whether the physician is concerned with health costs, with financial costs, or with the information content of the test. (N Engl J Med 293:211–215, 1975) <s> BIB003 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Historical Background <s> The inherent imperfection of clinical diagnostic tests introduces uncertainty into their interpretation. The magnitude of diagnostic uncertainty after any test may be quantified by information theory. The information content of the electrocardiographic ST-segment response to exercise, relative to the diagnosis of angiographic coronary artery disease, was determined using literature-based pooled estimates of the true- and false-positive rates for various magnitudes of ST depression from less than 0.5 mm to greater than or equal to 2.5 mm. This analysis allows three conclusions of clinical relevance. First, the diagnostic information content of exercise-induced ST-segment depression, interpreted by the standard 1.0-mm criterion, averages only 15% of that of coronary angiography. Second, there is a 41% increase in information content when the specific magnitude of ST-segment depression is analyzed, as opposed to the single, categorical 1-mm criterion.
Third, the information obtained from ECG stress testing is markedly influenced by the prevalence of disease in the population tested, being low in the asymptomatic and typical angina groups and substantially greater in groups with nonanginal chest pain and atypical angina. The quantitation of information has broad relevance to selection and use of diagnostic tests, because one can analyze objectively the value of different interpretation criteria, compare one test with another and evaluate the cost-effectiveness of both a single test and potential testing combination. <s> BIB004 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Historical Background <s> A representation and interpretation of the area under a receiver operating characteristic (ROC) curve obtained by the "rating" method, or by mathematical predictions based on patient characteristics, is presented. It is shown that in such a setting the area represents the probability that a randomly chosen diseased subject is (correctly) rated or ranked with greater suspicion than a randomly chosen non-diseased subject. Moreover, this probability of a correct ranking is the same quantity that is estimated by the already well-studied nonparametric Wilcoxon statistic. These two relationships are exploited to (a) provide rapid closed-form expressions for the approximate magnitude of the sampling variability, i.e., standard error that one uses to accompany the area under a smoothed ROC curve, (b) guide in determining the size of the sample required to provide a sufficiently reliable estimate of this area, and (c) determine how large sample sizes should be to ensure that one can statistically detect difference... <s> BIB005 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Historical Background <s> Abstract We describe a mathematical technique and an associated computer program for comparing, evaluating and optimizing diagnostic tests. The technique combines receiver operating characteristic (ROC) analysis with information theory and cost-benefit analysis to accomplish this. The program is menu driven and highly interactive; it generates 13 possible user-determined ASCII disk files which can be easily converted to graphs. These graphs allow the user to make detailed comparisons among various diagnostic tests for all values of disorder prevalence, and also provide guidelines for cut-off selection in order to optimize tests. These techniques are applied to three published studies of the enzyme screening assay for diagnosis of infection with the HIV virus. We show how graphs produced by this program can be used to compare and optimize these diagnostic tests. The program is written for an IBM-compatible microcomputer running on a DOS operating system. <s> BIB006 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Historical Background <s> Background To select a proper diagnostic test, it is recommended that the most specific test be used to confirm (rule in) a diagnosis, and the most sensitive test be used to establish that a disease is unlikely (rule out). These rule-in and rule-out concepts can also be characterized by the likelihood ratio (LR). However, previous papers discussed only the case of binary tests and assumed test results already known. Methods The author proposes using the ‘Kullback-Leibler distance’ as a new measure of rule-in/out potential. 
The Kullback-Leibler distance is an abstract concept arising from statistics and information theory. The author shows that it integrates in a proper way two sources of information—the distribution of test outcomes and the LR function. The index predicts the fate of an average subject before testing. Results Analysis of real and hypothetical data demonstrates its applications beyond binary tests. It works even when the conventional methods of dichotomization and ROC curve analysis fail. Conclusions The Kullback-Leibler distance nicely characterizes the before-test rule-in/out potentials. It offers a new perspective from which to evaluate a diagnostic test. <s> BIB007 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Historical Background <s> Relative entropy is a concept within information theory that provides a measure of the distance between two probability distributions. The author proposes that the amount of information gained by performing a diagnostic test can be quantified by calculating the relative entropy between the posttest and pretest probability distributions. This statistic, in essence, quantifies the degree to which the results of a diagnostic test are likely to reduce our surprise upon ultimately learning a patient's diagnosis. A previously proposed measure of diagnostic information that is also based on information theory (pretest entropy minus posttest entropy) has been criticized as failing, in some cases, to agree with our intuitive concept of diagnostic information. The proposed formula passes the tests used to challenge this previous measure. <s> BIB008 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Historical Background <s> We apply the information theory concept of "channel capacity" to diagnostic test performance and derive an expression for channel capacity in terms of test sensitivity and test specificity. The expected value of the amount of information a diagnostic test will provide is equal to the "mutual information" between the test result and the disease state. For the case in which only two test results and two disease states are considered, mutual information, I(D;R), is a function of sensitivity, specificity, and the pretest probability of disease. The channel capacity of the test is the maximal value of I(D;R) for a given sensitivity and specificity. After deriving an expression for I(D;R) in terms of sensitivity, specificity, and pretest probability, we solve for the value of pretest probability that maximizes I(D;R). Channel capacity is obtained by using this value of pretest probability to calculate I(D;R). Channel capacity provides a convenient and meaningful single parameter measure of diagnostic test performance. It quantifies the upper limit of the amount of information a diagnostic test can be expected to provide about a patient's disease state. <s> BIB009
With this understanding of basic information theory functions, we can briefly consider the development of information theory and the evolution of its application to clinical diagnostic testing. The concept of entropy is probably most familiar within the context of thermodynamics, where it is a measure of the "degree of randomness" of a physical system (p. 12). Although an understanding of the basic principles of thermodynamics preceded the development of information theory, the entropy of thermodynamics can be understood to be an application of the concept of entropy stated by Equation (2). The difference between the two functions is that in thermodynamics Equation (2) is multiplied by the Boltzmann constant to provide the appropriate physical dimensions (joules per kelvin) (p. 30). As mentioned in Section 1, Hartley and Shannon were early developers of information theory. Hartley published a paper in 1928 concerning the relationship between the quantity of information transmitted over a system and the width of the frequency range of the transmission BIB001. He defined entropy (which he called information) for situations in which the possible states are equally likely. The more general concept of entropy, stated by Equation (2), was defined in 1948 by Shannon in "A Mathematical Theory of Communication". This foundational paper also defined mutual information and channel capacity. Relative entropy was introduced by Kullback and Leibler in 1951. The applicability of information theory to clinical diagnostic testing was not immediately recognized, has been slow in its development, and remains an area of research. As noted in the introduction, Good and Card probably published the first paper on the subject in 1971. Their contribution was not recognized by many subsequent authors interested in this subject. To a large extent, the history of the application of information theory to clinical diagnostic testing is the history of the discovery of concepts previously understood by Good and Card. They recognized that mutual information (what they called mean information transfer) quantifies the expected value of the amount of information provided by a test and that this function can be used regardless of the number of disease states and test results. Implicit in their report is the use of relative entropy and, what we have called, modified relative entropy (in their language, dinegentropy and trientropy, respectively) to quantify the information provided by specific test results. They also quantified the information gained by sequential testing. The "weight of evidence in favor of a hypothesis" is a central concept in the Good and Card paper. The concept was developed independently by C.S. Peirce and A.M. Turing (possibly in collaboration with I.J. Good). The weight of evidence in favor of disease state d_i given result r_j, as opposed to the other disease states, d̄_i, can be expressed as

W(d_i : r_j) = log [p(r_j | d_i) / p(r_j | d̄_i)].

This is equal to

[log (1/p(d_i)) − log (1/p(d_i | r_j))] − [log (1/p(d̄_i)) − log (1/p(d̄_i | r_j))].

As pointed out by Good and Card, we find by looking at each of the above two expressions in brackets (which are reductions in surprisals) that weight of evidence can be interpreted in terms of quantities of information; in this case, as the amount of information that r_j provides about d_i minus the amount of information that r_j provides about d̄_i. A second important observation about the weight of evidence is that it is equal to the logarithm of a likelihood ratio. This point has been used to advantage by Van den Ende et al.
to provide clinicians with an accessible approach to interpreting diagnostic tests, including the fact that the logarithm of the pretest odds plus the weight of evidence equals the logarithm of the posttest odds. Since weight of evidence can be interpreted in terms of quantities of information, the logarithm of a likelihood ratio is an information quantity and so has information units. When working in log base 10 (as in the work of Van den Ende et al.), the appropriate unit is the hartley. To convert from hartleys to bits, divide by log_10(2) ≈ 0.301. Most papers on the application of information theory to clinical diagnostic testing are founded upon a report published by Metz, Goodenough, and Rossmann in 1973 BIB002. They derived the expression for the information content (mutual information) of a diagnostic test as a function of the pretest probability of disease and the test's true positive rate (probability of a positive result given disease) and false positive rate (probability of a positive result given no disease), i.e., they used these parameters to calculate the posttest probability distribution and then used the pretest and posttest distributions to calculate mutual information. They applied the theory to the evaluation of radiographic systems and noted that this statistic can be used to compare points on the same or different receiver operating characteristic (ROC) curves (defined below in Section 6.1) [31]. The area under the ROC curve (AUC) is a popular measure of diagnostic test performance BIB005. Relationships between the AUC and mutual information are discussed in the example presented in Section 6.1. Metz et al. also suggested that the performance of a diagnostic test be quantified as the maximum of the set of information contents associated with the points on a test's ROC curve (I_max). Subsequent authors suggested that I_max can be used in the selection of the point that partitions test results into normal results and abnormal results BIB003 BIB006, i.e., the diagnostic cutoff. The use of a diagnostic cutoff, however, can result in some loss of diagnostic information BIB004. This is illustrated by examples presented in Sections 6.2 and 6.3. Diamond and colleagues applied information theory in 1981 to the quantification of the performance of the exercise electrocardiogram (ECG) in the diagnosis of coronary heart disease (CHD) BIB004. This paper is discussed in Section 6.2. The primary theoretical contribution of their paper is the recognition that it is not necessary to select a single diagnostic cutoff in order to calculate the information content (mutual information) provided by a diagnostic test. This concept is implicit in the work of Good and Card. The relative entropy function was applied to clinical diagnostic testing in 1999 by Lee BIB007 and, independently, by Benish BIB008. Lee used the relative entropy between the distributions of test results for diseased subjects and disease-free subjects to characterize the potential of a diagnostic test to rule in (confirm) and rule out (exclude) disease. A different approach to characterizing the potential of a diagnostic test to rule in or rule out disease states is illustrated by examples presented in Sections 6.2 and 6.4. Benish recognized that the relative entropy function allows for calculation of the information provided by a specific test result. Once again, this observation is implicit in the paper by Good and Card.
Use of the relative entropy function for this purpose is discussed above in Sections 3.2 and 4.1 and is demonstrated in Sections 6.2, 6.4 and 6.5. Benish also discussed the channel capacity of a medical diagnostic test BIB009. Hughes, writing from the perspective of a plant disease epidemiologist, published the only book on the application of information theory to diagnostic testing in 2012. Section 4, above, develops concepts found in the work by Good and Card regarding the quantification of information provided by multiple diagnostic tests. These functions are illustrated in the examples presented in Sections 6.4 and 6.5.
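Because the weight of evidence is the logarithm of a likelihood ratio, Bayesian updating is additive on the log-odds scale. The short Python sketch below (our own illustration, not code from any of the cited papers) makes this concrete in bits; the example numbers are the dichotomized exercise-ECG operating characteristics quoted in Section 6.2 (true positive rate 0.649, false positive rate 0.148):

```python
import math

def weight_of_evidence(tpr, fpr, positive=True):
    # log2 likelihood ratio of the observed result, i.e., the weight of
    # evidence in favor of disease, expressed in bits
    lr = tpr / fpr if positive else (1 - tpr) / (1 - fpr)
    return math.log2(lr)

def posttest_probability(pretest, tpr, fpr, positive=True):
    # log(posttest odds) = log(pretest odds) + weight of evidence
    log_odds = math.log2(pretest / (1 - pretest)) \
               + weight_of_evidence(tpr, fpr, positive)
    odds = 2.0 ** log_odds
    return odds / (1.0 + odds)

print(posttest_probability(0.20, 0.649, 0.148))  # ~0.52 after a positive test
```

Working in log base 10 instead would give the same weights in hartleys; dividing a hartley value by log_10(2) ≈ 0.301 converts it to bits.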
A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> The Relationship between I(D;R) and the AUC <s> A representation and interpretation of the area under a receiver operating characteristic (ROC) curve obtained by the "rating" method, or by mathematical predictions based on patient characteristics, is presented. It is shown that in such a setting the area represents the probability that a randomly chosen diseased subject is (correctly) rated or ranked with greater suspicion than a randomly chosen non-diseased subject. Moreover, this probability of a correct ranking is the same quantity that is estimated by the already well-studied nonparametric Wilcoxon statistic. These two relationships are exploited to (a) provide rapid closed-form expressions for the approximate magnitude of the sampling variability, i.e., standard error that one uses to accompany the area under a smoothed ROC curve, (b) guide in determining the size of the sample required to provide a sufficiently reliable estimate of this area, and (c) determine how large sample sizes should be to ensure that one can statistically detect difference... <s> BIB001
ROC curves are often used to describe the performance of a diagnostic test when the test results lie on a continuum or are otherwise ordered [31]. This methodology is applicable when two disease states are under consideration, e.g., disease present and disease absent. A ROC curve plots the tradeoff between the true positive rate (test sensitivity) and the false positive rate (1 − test specificity) as the cutoff point for defining normal and abnormal test results is moved along the ordered set of results. As noted above, the AUC is a popular measure of diagnostic test performance BIB001. Both the AUC and I(D; R) are single-parameter measures of diagnostic test performance. It is helpful to understand some of their differences. A classic approach to explaining ROC curves is to assume that test results are normally distributed for both healthy (d−) and diseased (d+) individuals. This is illustrated by the Figure 2 insert. The ROC curve is then constructed, as noted above, by plotting test sensitivity as a function of 1 − test specificity for all possible diagnostic cutoffs. As the distance between the means of the two distributions increases, the ROC curve shifts upward and to the left, increasing the AUC from a value of 0.5 toward its maximal value of one. This is illustrated in Figure 2, which includes a plot of the AUC as a function of the separation between the means, for the case in which the standard deviations of both distributions are one. I(D; R), but not the AUC, is a function of the pretest probability of disease. This is illustrated in the figure by plots of I(D; R) as a function of the distance between the means of the same two distributions for three pretest probabilities of disease: 0.1, 0.2, and 0.5. The figure also plots a transformation of the AUC, AUC*, which is equal to 2(AUC) − 1. This transformation of the AUC changes its range from [0.5,1] to [0,1]. Collectively, these plots demonstrate that the AUC and I(D; R) are qualitatively different statistics.
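These curves are easy to reproduce numerically. Under the two-Gaussian model of Figure 2 (unit variances, means separated by delta), the AUC has the closed form Φ(delta/√2), whereas I(D; R) must be integrated over the mixture density p(r). A minimal sketch, assuming NumPy and SciPy are available (the function names are ours):

```python
import numpy as np
from scipy.stats import norm

def auc(delta):
    # AUC for R|d- ~ N(0,1) vs R|d+ ~ N(delta,1): P(R+ > R-) = Phi(delta/sqrt(2))
    return norm.cdf(delta / np.sqrt(2))

def mutual_information(delta, pretest, n_grid=20001):
    # I(D;R) in bits, integrated numerically over the result axis
    r = np.linspace(-8.0, delta + 8.0, n_grid)
    f_pos, f_neg = norm.pdf(r, loc=delta), norm.pdf(r)  # p(r|d+), p(r|d-)
    f_mix = pretest * f_pos + (1 - pretest) * f_neg     # p(r)
    integrand = (pretest * f_pos * np.log2(f_pos / f_mix)
                 + (1 - pretest) * f_neg * np.log2(f_neg / f_mix))
    return np.trapz(integrand, r)

for pre in (0.1, 0.2, 0.5):
    print(pre, auc(2.0), mutual_information(2.0, pre))
```

As in the figure, the AUC is unchanged across the three pretest probabilities while I(D; R) is not.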
A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Diagnostic Information from the Exercise Electrocardiogram (ECG) <s> The inherent imperfection of clinical diagnostic tests introduces uncertainty into their interpretation. The magnitude of diagnostic uncertainty after any test may be quantified by information theory. The information content of the electrocardiographic ST-segment response to exercise, relative to the diagnosis of angiographic coronary artery disease, was determined using literature-based pooled estimates of the true- and false-positive rates for various magnitudes of ST depression from less than 0.5 mm to greater than or equal to 2.5 mm. This analysis allows three conclusions of clinical relevance. First, the diagnostic information content of exercise-induced ST-segment depression, interpreted by the standard 1.0-mm criterion, averages only 15% of that of coronary angiography. Second, there is a 41% increase in information content when the specific magnitude of ST-segment depression is analyzed, as opposed to the single, categorical 1-mm criterion. Third, the information obtained from ECG stress testing is markedly influenced by the prevalence of disease in the population tested, being low in the asymptomatic and typical angina groups and substantially greater in groups with nonanginal chest pain and atypical angina. The quantitation of information has broad relevance to selection and use of diagnostic tests, because one can analyze objectively the value of different interpretation criteria, compare one test with another, and evaluate the cost-effectiveness of both a single test and potential testing combinations. <s> BIB001
As noted in the preceding section, Diamond et al. used information theory to evaluate the performance of the exercise ECG in the diagnosis of CHD BIB001. Depression of the ST segment (a portion of the ECG tracing) during exercise is an indicator of coronary artery disease. The data in Table 2 shows their estimates of the probability of ST segment depression falling into six different categories as a function of whether the patient has significant CHD. They first analyzed the data by selecting a criterion to dichotomize the results into positive and negative categories. For example, if a positive test is defined as ST depression ≥ 1 mm, then, as seen from the table, p(r+ | d+) becomes 0.233 + 0.088 + 0.133 + 0.195 = 0.649 and p(r+ | d−) becomes 0.110 + 0.021 + 0.012 + 0.005 = 0.148. Recognizing that p(d_i, r_j) = p(r_j | d_i) p(d_i) and p(r_j) = p(d+, r_j) + p(d−, r_j), Equation (4) can then be used to calculate the information content (mutual information) of the test for this cutoff as a function of the pretest probability of disease.

Table 2. Data from Diamond et al. BIB001 showing the probabilities of various categories of ST segment depression (the result, r) during an exercise electrocardiogram as a function of the presence (d+) and absence (d−) of significant CHD.

They contrast this with a calculation of the information content (mutual information) if the results are not dichotomized, but rather left partitioned into six categories. If the ST segment is depressed by 2.2 mm, for example, it makes sense to calculate the posttest probability using the more accurate test operating characteristics that apply to the narrower interval of [2, 2.5) than the operating characteristics that apply to the larger interval of [1, ∞). Equation (4) is again used to make the calculation, but in this case, there are six possible test results rather than two. Figure 3 (reconstructed from their report with permission) compares mutual information as a function of pretest probability of significant CHD for the dichotomized and non-dichotomized approaches. The curve labeled IDEAL is the pretest diagnostic uncertainty as a function of pretest probability. It indicates the average amount of information that an ideal test would provide, i.e., the average amount of information needed to reduce the diagnostic uncertainty to zero (by yielding a posttest probability of either zero or one). We observe that, for most of the range of pre-test probabilities, approximately one third of the diagnostic information is lost by dichotomizing the results with a diagnostic cutoff of 1 mm. The issue of information lost as a consequence of dichotomizing test results is considered again in the following subsection. Although, on average, the exercise ECG does not provide much information about whether a patient has significant CHD, the possibility remains that specific test results are informative. To illustrate this, we consider the two results that lie on opposite ends of the test result spectrum: ST depression < 0.5 mm and ST depression ≥ 2.5 mm. Recall that relative entropy (Equation (3)) quantifies the amount of diagnostic information provided by a given test result. Figure 4 plots relative entropy as a function of the pretest probability of significant CHD for these two test results.
For comparison, the figure includes relative entropy plots for a theoretical ideal test when significant CHD is present (d+) and when significant CHD is absent (d−). Inspecting these curves, we conclude that an ST depression of <0.5 mm is not helpful in ruling out significant CHD. On the other hand, when significant CHD is present and as the pre-test probability increases, the information provided by an ST depression of ≥2.5 mm approaches the information provided by the ideal test.

Figure 3. Mutual information as a function of pretest probability of significant coronary heart disease (CHD) for the exercise electrocardiogram. The plot compares the performance of a theoretical ideal test with the actual performance when the results are either (1) dichotomized using the criterion of ST segment depression of ≥ 1 mm or (2) not dichotomized. This plot has been reconstructed with permission from the paper by Diamond et al. BIB001.
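The dichotomized curve in Figure 3 can be reproduced directly from the pooled operating characteristics quoted above. A minimal sketch (our own; it builds the joint distribution from the quoted true and false positive rates and uses the entropy form of mutual information, Equation (5), which is equivalent to Equation (4)):

```python
import numpy as np

def entropy(p):
    # Shannon entropy in bits; ignores zero-probability cells
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def info_dichotomized(pretest, tpr=0.649, fpr=0.148):
    # I(D;R) for the >= 1 mm cutoff, via I = H(D) + H(R) - H(D,R)
    joint = np.array([[pretest * tpr,       pretest * (1 - tpr)],
                      [(1 - pretest) * fpr, (1 - pretest) * (1 - fpr)]])
    return entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0)) - entropy(joint)

for pre in (0.1, 0.2, 0.5):
    print(pre, info_dichotomized(pre))
```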
A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Diagnostic Information Provided by Two Tests with Discrete Results <s> Summary. Suspected deep vein thrombosis (DVT) is a common problem facing emergency physicians. Timely diagnostic testing must be performed to accurately identify patients with DVT. The purpose of this study was to evaluate the safety and effectiveness of a management strategy that combined consideration of clinical pretest probability and a d-dimer test to evaluate patients presenting to the emergency department with suspected deep vein thrombosis (DVT). A prospective cohort study was performed in the emergency departments of four tertiary care institutions involving 1075 patients with suspected DVT. An emergency physician determined the pretest probability for DVT to be low, moderate, or high using an explicit clinical model. A blood sample was taken for d-dimer testing. Subsequent investigations (compression ultrasound, venography) were performed based upon the pretest probability and the d-dimer result. Patients considered at low pretest probability with negative d-dimer had no further diagnostic testing performed. All patients in whom the diagnosis of DVT was excluded by the algorithm did not receive anticoagulant therapy and were followed up for 90 days for the development of proximal DVT or pulmonary embolism. Overall, 195 (18.1%; 95% CI 15.9% to 20.6%) of 1075 patients were confirmed to have proximal DVT. Of the 882 patients who had proximal DVT excluded during the initial evaluation period using the algorithms, four (0.5%; 95% CI 0.1% to 1.2%) were subsequently diagnosed with proximal DVT in the follow-up period, including three patients in the low pretest probability group (1.0%; 95% CI 0.2% to 2.1%) who had normal d-dimer and no additional diagnostic testing performed. None of the 882 patients (0%: 95% CI 0% to 0.5%) developed pulmonary embolism in the follow-up period. A diagnostic strategy for the evaluation of patients with suspected DVT based on pretest probability and d-dimer is safe and feasible in the emergency department setting. <s> BIB001
A study that investigated the value of combining two diagnostic tests in the diagnosis of deep vein thrombosis (DVT) BIB001 provides a convenient data set to illustrate information theory functions that apply when more than one test is used (see Section 4). A DVT is a blood clot of the deep veins, typically in the lower extremities, that can be fatal if it detaches and travels to the lungs. One of the tests is a clinical index, based on the patient's medical history and physical exam findings, that classified the patient as being at low, moderate, or high risk for a DVT. The other test is a blood test that detects a protein, the d-dimer, that is often elevated in the presence of a DVT. The d-dimer was reported as positive or negative. The number of patients found to be in each of the 3 × 2 test result categories as a function of whether they were ultimately diagnosed with a DVT is shown in Table 3.

Table 3. Data from Anderson et al. BIB001. The number of patients with and without a DVT as a function of the clinical index and the d-dimer test.

The study included 1057 patients, 190 of whom were diagnosed as having a DVT. Therefore, the probability of being diagnosed with a DVT in this population is 190/1057 = 0.180. The uncertainty about whether a patient randomly selected from this population was diagnosed with a DVT is calculated using the entropy function (Equation (2)). We find that H(D) = 0.680 bits. Given that only two disease states are under consideration, the range of possible entropy values is 0-1 bits.
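The quoted value is easy to verify with Equation (2); a two-line check (the helper name is ours):

```python
import math

def entropy_bits(probs):
    # Shannon entropy (Equation (2)) in bits
    return -sum(p * math.log2(p) for p in probs if p > 0)

p_dvt = 190 / 1057                       # = 0.180
print(entropy_bits([p_dvt, 1 - p_dvt]))  # ~0.680 bits
```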
A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Conclusions <s> ABSTRACTPrecision medicine is a term used to describe individualized treatment that encompasses the use of new diagnostics and therapeutics, targeted to the needs of a patient based on his/her own genetic, biomarker, phenotypic, or psychosocial characteristics. In particular, advances such as cell s <s> BIB001 </s> A Review of the Application of Information Theory to Clinical Diagnostic Testing <s> Conclusions <s> The 'precision medicine (systems medicine)' concept promises to achieve a shift to future healthcare systems with a more proactive and predictive approach to medicine, where the emphasis is on disease prevention rather than the treatment of symptoms. The individualization of treatment for each patient will be at the centre of this approach, with all of a patient's medical data being computationally integrated and accessible. Precision medicine is being rapidly embraced by biomedical researchers, pioneering clinicians and scientific funding programmes in both the European Union (EU) and USA. Precision medicine is a key component of both Horizon 2020 (the EU Framework Programme for Research and Innovation) and the White House's Precision Medicine Initiative. Precision medicine promises to revolutionize patient care and treatment decisions. However, the participants in precision medicine are faced with a considerable central challenge. Greater volumes of data from a wider variety of sources are being generated and analysed than ever before; yet, this heterogeneous information must be integrated and incorporated into personalized predictive models, the output of which must be intelligible to non-computationally trained clinicians. Drawing primarily from the field of 'oncology', this article will introduce key concepts and challenges of precision medicine and some of the approaches currently being implemented to overcome these challenges. Finally, this article also covers the criticisms of precision medicine overpromising on its potential to transform patient care. <s> BIB002
Information statistics have a useful role to play in the evaluation and comparison of diagnostic tests. In some cases, information measures may complement useful concepts such as test sensitivity, test specificity, and predictive values. In other situations, information measures may replace more limited statistics. Mutual information, for example, may be better suited as a single-parameter index of diagnostic test performance than alternative statistics. Furthermore, information theory has the potential to help us learn about and teach about the diagnostic process. Examples include concepts illustrated above, including the importance of pretest probability as a determinant of diagnostic information, the amount of information lost by dichotomizing test results, the limited potential of some diagnostic tests to reduce diagnostic uncertainty, and the ways in which diagnostic tests can interact to provide diagnostic information. These are concepts that can all be effectively communicated graphically. It is hoped that this review will help to motivate new applications of information theory to clinical diagnostic testing, especially as data from newer diagnostic technologies becomes available. The challenge will be to develop systems that accurately diagnose and treat patients by integrating increasingly large amounts of personalized data BIB001 BIB002. A potential role for information theory functions in this process is suggested by their applicability to multidimensional data.
Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> The mathematical problem of approximating one matrix by another of lower rank is closely related to the fundamental postulate of factor-theory. When formulated as a least-squares problem, the normal equations cannot be immediately written down, since the elements of the approximate matrix are not independent of one another. The solution of the problem is simplified by first expressing the matrices in a canonic form. It is found that the problem always has a solution which is usually unique. Several conclusions can be drawn from the form of this solution. <s> BIB001 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> SUMMARY The detection of atypical observations from multivariate data sets can be enhanced by examining probability plots of Mahalanobis squared distances using robust M-estimates of means and of covariances, rather than the usual maximum likelihood estimates. The weights associated with the robust estimation can also be used to indicate atypical observations. For uncontaminated data, the robust estimates are similar to the usual estimates. A procedure for robust principal component analysis is given; it also indicates atypical observations and provides an analysis relatively little influenced by such observations. <s> BIB002 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> This paper applies statistical physics to the problem of robust principal component analysis (PCA). The commonly used PCA learning rules are first related to energy functions. These functions are generalized by adding a binary decision field with a given prior distribution so that outliers in the data are dealt with explicitly in order to make PCA robust. Each of the generalized energy functions is then used to define a Gibbs distribution from which a marginal distribution is obtained by summing over the binary decision field. The marginal distribution defines an effective energy function, from which self-organizing rules have been developed for robust PCA. Under the presence of outliers, both the standard PCA methods and the existing self-organizing PCA rules studied in the literature of neural networks perform quite poorly. By contrast, the robust rules proposed here resist outliers well and perform excellently for fulfilling various PCA-like tasks such as obtaining the first principal component vector, the first k principal component vectors, and directly finding the subspace spanned by the first k vector principal component vectors without solving for each vector individually. Comparative experiments have been made, and the results show that the authors' robust rules improve the performances of the existing PCA algorithms significantly when outliers are present. <s> BIB003 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> One of the aims of a principal component analysis (PCA) is to reduce the dimensionality of a collection of observations. If we plot the first two principal components of the observations, it is often the case that one can already detect the main structure of the data. Another aim is to detect atypical observations in a graphical way, by looking at outlying observations on the principal axes. <s> BIB004 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I.
INTRODUCTION <s> When faced with high-dimensional data, one often uses principal component analysis (PCA) for dimension reduction. Classical PCA constructs a set of uncorrelated variables, which correspond to eigenvectors of the sample covariance matrix. However, it is well-known that this covariance matrix is strongly affected by anomalous observations. It is therefore necessary to apply robust methods that are resistant to possible outliers. Li and Chen [J. Am. Stat. Assoc. 80 (1985) 759] proposed a solution based on projection pursuit (PP). The idea is to search for the direction in which the projected observations have the largest robust scale. In subsequent steps, each new direction is constrained to be orthogonal to all previous directions. This method is very well suited for high-dimensional data, even when the number of variables p is higher than the number of observations n. However, the algorithm of Li and Chen has a high computational cost. In the references [C. Croux, A. Ruiz-Gazen, in COMPSTAT: Proceedings in Computational Statistics 1996, Physica-Verlag, Heidelberg, 1996, pp. 211–217; C. Croux and A. Ruiz-Gazen, High Breakdown Estimators for Principal Components: the Projection-Pursuit Approach Revisited, 2000, submitted for publication.], a computationally much more attractive method is presented, but in high dimensions (large p) it has a numerical accuracy problem and still consumes much computation time. In this paper, we construct a faster two-step algorithm that is more stable numerically. The new algorithm is illustrated on a data set with four dimensions and on two chemometrical data sets with 1200 and 600 dimensions. <s> BIB005 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> This paper studies the problem of recovering a sparse signal x ∈ ℝ^n from highly corrupted linear measurements y = Ax + e ∈ ℝ^m, where e is an unknown error vector whose nonzero entries may be unbounded. Motivated by an observation from face recognition in computer vision, this paper proves that for highly correlated (and possibly overcomplete) dictionaries A, any sufficiently sparse signal x can be recovered by solving an l1-minimization problem min ||x||_1 + ||e||_1 subject to y = Ax + e. More precisely, if the fraction of the support of the error e is bounded away from one and the support of x is a very small fraction of the dimension m, then as m becomes large the above l1-minimization succeeds for all signals x and almost all sign-and-support patterns of e. This result suggests that accurate recovery of sparse signals is possible and computationally feasible even with nearly 100% of the observations corrupted. The proof relies on a careful characterization of the faces of a convex polytope spanned together by the standard crosspolytope and a set of independent identically distributed (i.i.d.) Gaussian vectors with nonzero mean and small variance, dubbed the "cross-and-bouquet" (CAB) model. Simulations and experiments corroborate the findings, and suggest extensions to the result. <s> BIB006 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> This paper has been withdrawn due to a critical error near equation (71). This error causes the entire argument of the paper to collapse. Emmanuel Candes of Stanford discovered the error, and has suggested a correct analysis, which will be reported in a separate publication.
<s> BIB007 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> In the recent work of Candes et al., the problem of recovering a low-rank matrix corrupted by i.i.d. sparse outliers is studied and a very elegant solution, principal component pursuit, is proposed. It is motivated as a tool for video surveillance applications with the background image sequence forming the low-rank part and the moving objects/persons/abnormalities forming the sparse part. Each image frame is treated as a column vector of the data matrix made up of a low-rank matrix and a sparse corruption matrix. Principal component pursuit solves the problem under the assumptions that the singular vectors of the low-rank matrix are spread out and the sparsity pattern of the sparse matrix is uniformly random. However, in practice, usually the sparsity pattern and the signal values of the sparse part (moving persons/objects) change in a correlated fashion over time, e.g., the object moves slowly and/or with roughly constant velocity. This will often result in a low-rank sparse matrix. For video surveillance applications, it would be much more useful to have a real-time solution. In this work, we study the online version of the above problem and propose a solution that automatically handles correlated sparse outliers. The key idea of this work is as follows. Given an initial estimate of the principal directions of the low-rank part, we causally keep estimating the sparse part at each time by solving a noisy compressive sensing type problem. The principal directions of the low-rank part are updated every-so-often. In between two update times, if new Principal Components' directions appear, the "noise" seen by the Compressive Sensing step may increase. This problem is solved, in part, by utilizing the time correlation model of the low-rank part. We call the proposed solution "Real-time Robust Principal Components' Pursuit". <s> BIB008 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the L1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces. <s> BIB009 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> We study the basic problem of robust subspace recovery.
That is, we assume a data set in which some of the points are sampled around a fixed subspace and the rest are spread in the whole ambient space, and we aim to recover the fixed underlying subspace. We first estimate a "robust inverse sample covariance" by solving a convex minimization procedure; we then recover the subspace by the bottom eigenvectors of this matrix (their number corresponds to the number of eigenvalues close to 0). We guarantee exact subspace recovery under some conditions on the underlying data. Furthermore, we propose a fast iterative algorithm, which linearly converges to the matrix minimizing the convex problem. We also quantify the effect of noise and regularization and discuss many other practical and theoretical issues for improving the subspace recovery in various settings. When replacing the sum of terms in the convex energy function (that we minimize) with the sum of squares of terms, we obtain that the new minimizer is a scaled version of the inverse sample covariance (when it exists). We thus interpret our minimizer and its subspace (spanned by its bottom eigenvectors) as robust versions of the empirical inverse covariance and the PCA subspace, respectively. We compare our method with many other algorithms for robust PCA on synthetic and real data sets and demonstrate state-of-the-art speed and accuracy. <s> BIB010 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> This work studies the recursive robust principal components' analysis (PCA) problem. Here, "robust" refers to robustness to both independent and correlated sparse outliers. If the outlier is the signal-of-interest, this problem can be interpreted as one of recursively recovering a time sequence of sparse vectors, S_t, in the presence of large but structured noise, L_t. The structure that we assume on L_t is that L_t is dense and lies in a low dimensional subspace that is either fixed or changes "slowly enough". A key application where this problem occurs is in video surveillance where the goal is to separate a slowly changing background (L_t) from moving foreground objects (S_t) on-the-fly. To solve the above problem, we introduce a novel solution called Recursive Projected CS (ReProCS). Under mild assumptions, we show that, with high probability (w.h.p.), ReProCS can exactly recover the support set of S_t at all times; and the reconstruction errors of both S_t and L_t are upper bounded by a time-invariant and small value at all times. <s> BIB011 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> In the backbone of large-scale networks, origin-to-destination (OD) traffic flows experience abrupt unusual changes known as traffic volume anomalies, which can result in congestion and limit the extent to which end-user quality of service requirements are met. As a means of maintaining seamless end-user experience in dynamic environments, as well as for ensuring network security, this paper deals with a crucial network monitoring task termed dynamic anomalography. Given link traffic measurements (noisy superpositions of unobserved OD flows) periodically acquired by backbone routers, the goal is to construct an estimated map of anomalies in real time, and thus summarize the network 'health state' along both the flow and time dimensions.
Leveraging the low intrinsic-dimensionality of OD flows and the sparse nature of anomalies, a novel online estimator is proposed based on an exponentially-weighted least-squares criterion regularized with the sparsity-promoting l1-norm of the anomalies, and the nuclear norm of the nominal traffic matrix. After recasting the non-separable nuclear norm into a form amenable to online optimization, a real-time algorithm for dynamic anomalography is developed and its convergence established under simplifying technical assumptions. For operational conditions where computational complexity reductions are at a premium, a lightweight stochastic gradient algorithm based on Nesterov's acceleration technique is developed as well. Comprehensive numerical tests with both synthetic and real network data corroborate the effectiveness of the proposed online algorithms and their tracking capabilities, and demonstrate that they outperform state-of-the-art approaches developed to diagnose traffic anomalies. <s> BIB012 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m >= C n^{1.2} r log n for some positive numerical constant C, then with very high probability, most n by n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information. <s> BIB013 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> This paper designs and extensively evaluates an online algorithm, called practical recursive projected compressive sensing (Prac-ReProCS), for recovering a time sequence of sparse vectors S_t and a time sequence of dense vectors L_t from their sum, M_t := S_t + L_t, when the L_t's lie in a slowly changing low-dimensional subspace of the full space. A key application where this problem occurs is in real-time video layering where the goal is to separate a video sequence into a slowly changing background sequence and a sparse foreground sequence that consists of one or more moving regions/objects on-the-fly. Prac-ReProCS is a practical modification of its theoretical counterpart which was analyzed in our recent work. Extension to the undersampled case is also developed. Extensive experimental comparisons demonstrating the advantage of the approach for both simulated and real videos, over existing batch and recursive methods, are shown. <s> BIB014 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I.
INTRODUCTION <s> Purpose: To apply the low-rank plus sparse (L+S) matrix decomposition model to reconstruct undersampled dynamic MRI as a superposition of background and dynamic components in various problems of clinical interest. <s> BIB015 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> In this work, we study the online robust principal components' analysis (RPCA) problem. In recent work, RPCA has been defined as a problem of separating a low-rank matrix (true data), $L$, and a sparse matrix (outliers), $S$, from their sum, $M:=L + S$. A more general version of this problem is to recover $L$ and $S$ from $M:=L + S + W$ where $W$ is the matrix of unstructured small noise/corruptions. An important application where this problem occurs is in video analytics in trying to separate sparse foregrounds (e.g., moving objects) from slowly changing backgrounds. While there has been a large amount of recent work on solutions and guarantees for the batch RPCA problem, the online problem is largely open. "Online" RPCA is the problem of doing the above on-the-fly with the extra assumptions that the initial subspace is accurately known and that the subspace from which $l_t$ is generated changes slowly over time. We develop and study a novel "online" RPCA algorithm based on the recently introduced Recursive Projected Compressive Sensing (ReProCS) framework. Our algorithm improves upon the original ReProCS algorithm and it also returns even more accurate offline estimates. The key contribution of this work is a correctness result (complete performance guarantee) for this algorithm under reasonably mild assumptions. By using extra assumptions -- accurate initial subspace knowledge, slow subspace change, and clustered eigenvalues -- we are able to remove one important limitation of batch RPCA results and two key limitations of a recent result for ReProCS for online RPCA. To our knowledge, this work is among the first few correctness results for online RPCA. Most earlier results were only partial results, i.e., they required an assumption on intermediate algorithm estimates. <s> BIB016 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> Video denoising refers to the problem of removing "noise" from a video sequence. Here the term "noise" is used in a broad sense to refer to any corruption or outlier or interference that is not the quantity of interest. In this work, we develop a novel approach to video denoising that is based on the idea that most noisy or corrupted videos can be split into two parts — the approximate "low-rank" layer and the "sparse layer". We first split the given video into these two layers, and then apply an existing state-of-the-art denoising algorithm on each layer. We show, using extensive experiments, that our denoising approach outperforms the state-of-the-art denoising algorithms. <s> BIB017 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> Recent years have seen a rapid growth in computational methods for a better understanding of functional connectivity brain networks constructed from neuroimaging data. Most of the current work has been limited to static functional connectivity networks (FCNs), where the relationships between different brain regions is assumed to be stationary.
Recent work indicates that functional connectivity is a dynamic process over multiple time scales and the dynamic formation and dissolution of connections plays a key role in cognition, memory, and learning. In the proposed work, we introduce a tensor-based approach for tracking dynamic functional connectivity networks. The proposed framework introduces a robust low-rank+sparse structure learning algorithm for tensors to separate the low-rank community structure of connectivity networks from sparse outliers. The proposed framework is used to both identify change points, where the low-rank community structure of the FCN changes significantly, and summarize this community structure within each time interval. The proposed framework is applied to the study of cognitive control from electroencephalogram data during a Flanker task. <s> BIB018 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> I. INTRODUCTION <s> In this work, we study the robust subspace tracking (RST) problem and obtain one of the first two provable guarantees for it. The goal of RST is to track sequentially arriving data vectors that lie in a slowly changing low-dimensional subspace, while being robust to corruption by additive sparse outliers. It can also be interpreted as a dynamic (time-varying) extension of robust PCA (RPCA), with the minor difference that RST also requires a short tracking delay. We develop a recursive projected compressive sensing algorithm that we call Nearly Optimal RST via ReProCS (ReProCS-NORST) because its tracking delay is nearly optimal. We prove that NORST solves both the RST and the dynamic RPCA problems under weakened standard RPCA assumptions, two simple extra assumptions (slow subspace change and most outlier magnitudes lower bounded), and a few minor assumptions. Our guarantee shows that NORST enjoys a near optimal tracking delay of $O(r \log n \log(1/\epsilon))$. Its required delay between subspace change times is the same, and its memory complexity is $n$ times this value. Thus both these are also nearly optimal. Here $n$ is the ambient space dimension, $r$ is the subspaces' dimension, and $\epsilon$ is the tracking accuracy. NORST also has the best outlier tolerance compared with all previous RPCA or RST methods, both theoretically and empirically (including for real videos), without requiring any model on how the outlier support is generated. This is possible because of the extra assumptions it uses. <s> BIB019
Principal Components Analysis (PCA) is one of the most widely used dimension reduction techniques. It is often the preprocessing step in a variety of scientific and data analytics' applications. Some modern examples include data classification, face recognition, video analytics, recommendation system design and understanding social network dynamics. PCA finds a small number of orthogonal basis vectors along which most of the variability of the dataset lies. Given an $n \times d$ data matrix $M$, $r$-PCA finds an $n \times r$ matrix with orthonormal columns, $\hat{P}_M$, that solves $\arg\min_{P: P'P = I} \|M - PP'M\|_F^2$. For dimension reduction, one projects $M$ onto $\mathrm{span}(\hat{P}_M)$. By the Eckart-Young theorem BIB001 , PCA is easily solved via the singular value decomposition (SVD), i.e., $\hat{P}_M$ is given by the left singular vectors of $M$ corresponding to the largest $r$ singular values (henceforth referred to as the "top $r$ singular vectors"). Here and below, $'$ denotes matrix transpose and $I$ denotes the identity matrix. The observed data matrix $M$ is usually a noisy version of an unknown true data matrix, which we will denote by $L$. The real goal is usually to find the principal subspace of $L$. $L$ is assumed to be either exactly or approximately low-rank. Suppose it is exactly low-rank and let $r_L$ denote its rank. If $M$ is a relatively clean version of $L$, $\hat{P}_M$ is also a good approximation of the principal subspace of $L$, denoted $P$. However, if $M$ is a highly noisy version of $L$ or is corrupted by even a few outliers, $\hat{P}_M$ is a bad approximation of $P$. Here, "good approximation" means that $\mathrm{span}(\hat{P}_M)$ is close to $\mathrm{span}(P)$. Since many modern datasets are acquired using a large number of inexpensive sensors, outliers are becoming even more common in modern datasets. They occur due to various reasons such as node or sensor failures, foreground occlusion of video sequences, or anomalies on certain nodes of a network. This harder problem of PCA for outlier corrupted data is called robust PCA BIB002 , BIB004 , BIB005 , BIB003 . In recent years, there have been multiple attempts to qualify the term "outlier", leading to various formulations for robust PCA (RPCA). The most popular among these is the idea of modeling outliers as additive sparse corruptions, which was popularized in the work of Wright and Ma BIB006 , BIB007 . This models the fact that outliers occur infrequently and only on a few indices of a data vector, but allows them to have any magnitude. Using this, the recent work of Candès, Wright, Li, and Ma BIB007 , BIB009 defined RPCA as the problem of decomposing a given data matrix, $M$, into the sum of a low rank matrix, $L$, and a sparse matrix (outliers' matrix), $S$. The column space of $L$ then gives the PCA solution. While RPCA was formally defined this way first in BIB007 , BIB009 , an earlier solution approach that implicitly used this idea was . Often, for long data sequences, e.g., long surveillance videos, or long dynamic social network connectivity data sequences, if one tries to use a single lower dimensional subspace to represent the entire data sequence, the required subspace dimension may end up being quite large. This is problematic because (i) it means that PCA does not provide sufficient dimension reduction, (ii) the resulting data matrix may not be sufficiently low-rank, and this, in turn, reduces the outlier tolerance of static RPCA solutions, and (iii) it implies increased computational and memory complexity. In this case, a better model is to assume that the data lies in a low-dimensional subspace that can change over time, albeit gradually.
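To make the $r$-PCA definition concrete, the following is a minimal numerical sketch (not code from any of the reviewed papers) that computes $\hat{P}_M$ via the SVD exactly as described above; the dimensions, the rank, and the random test matrix are arbitrary illustrative choices.

```python
import numpy as np

def r_pca(M, r):
    """Return the n x r matrix of top-r left singular vectors of M (the r-PCA solution)."""
    # Economy SVD; numpy returns singular values in decreasing order.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r]

# Illustration: a 100 x 500 data matrix that is exactly rank 5.
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 500))
P_hat = r_pca(M, r=5)
M_reduced = P_hat.T @ M                       # r x d matrix of PCA scores
print(np.linalg.norm(M - P_hat @ M_reduced))  # ~0, since M is exactly rank 5
```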
The problem of tracking data lying in a (slowly) changing subspace while being robust to additive sparse outliers is referred to as "robust subspace tracking" or "dynamic RPCA" BIB008 , BIB014 , BIB011 , BIB016 , BIB019 . In older work, it was also incorrectly called "recursive robust PCA" or "online robust PCA" . The current article provides a detailed review of the literature on both RPCA and dynamic RPCA (robust subspace tracking) that relies on the above sparse+low-rank matrix decomposition (S+LR) definition. The emphasis is on simple and provably correct approaches. A brief overview of the low-rank matrix completion (MC) literature and of dynamic MC, or, equivalently, subspace tracking (ST) with missing data, is also provided. MC refers to the problem of completing a low-rank matrix when only a subset of its entries can be observed. We discuss it here because it can be interpreted as a simpler special case of RPCA in which the indices of the outlier corrupted entries are known. A detailed review of MC and the more general problem of low rank matrix recovery from linear measurements is provided in . A detailed discussion of ST (including ST with missing data) is given in . Another way to define the word "outlier" is to assume that either an entire data vector is an outlier or it is an inlier. In modern literature, this is referred to as "robust subspace recovery" BIB010 . This is reviewed in . A magazine-level overview of the entire field of RPCA including robust subspace recovery is provided in . A key motivating application for RPCA and robust subspace tracking (RST) is video layering (decompose a given video into a "background" video and a "foreground" video) , BIB009 . We show an example in Fig. 1a . While this is an easy problem for videos with nearly static backgrounds, the same is not true for dynamically changing backgrounds. For such videos, a good video layering solution can simplify many downstream computer vision and video analytics' tasks. For example, the foreground layer directly provides a video surveillance and an object tracking solution; the background layer and its subspace estimate are directly useful in video background-editing or animation applications; video layering can also enable or improve low-bandwidth video chats (transmit only the layer of interest), layer-wise denoising BIB017 , or foreground activity recognition. RPCA is a good video layering solution when the background changes are gradual (typically valid for static camera videos) and dense (not sparse), while the foreground consists of one or more moving persons/objects that are not too large. An example is background variation due to lights being turned on and off, shown in Fig. 3 . With this, the background video (with each image arranged as a column) is well modeled as the dense low rank matrix, while the foreground video is well modeled as a sparse matrix with high enough rank. Thus the background corresponds to L, while the difference between foreground and background videos on the foreground support forms the sparse outliers S. Other applications include region of interest detection and tracking from fully-sampled or undersampled dynamic MRI sequences BIB015 ; detection of anomalous behavior in dynamic social networks BIB018 , or in computer networks BIB012 ; recommendation system design and survey data analysis when there are outliers due to lazy users and typographical errors BIB009 .
A motivating video analytics application for matrix completion is the above problem when the foreground occlusions are easily detectable (e.g., by simple thresholding). Related problems include dense low-rank image or video recovery when some pixels are missing, e.g., due to data transmission errors/erasures (erasures are transmission errors that are reliably detectable so that the missing pixel indices are known); and image/video inpainting when the underlying image or video (with images vectorized as its columns) is well modeled as being dense and low-rank. Another motivating application is recommendation system design (without lazy users) BIB013 . This assumes that user preferences for a class of items, say movies, are governed by a much smaller number of factors than either the total number of users or the total number of movies. The movie ratings' matrix is incomplete since a given user does not rate all movies.
Static and Dynamic Robust PCA and Matrix Completion: A Review <s> D. Robust Subspace Tracking (RST) or dynamic RPCA <s> This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the L1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces. <s> BIB001 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> D. Robust Subspace Tracking (RST) or dynamic RPCA <s> This work studies the recursive robust principal components' analysis(PCA) problem. Here, "robust" refers to robustness to both independent and correlated sparse outliers. If the outlier is the signal-of-interest, this problem can be interpreted as one of recursively recovering a time sequence of sparse vectors, St, in the presence of large but structured noise, Lt. The structure that we assume on Lt is that Lt is dense and lies in a low dimensional subspace that is either fixed or changes "slowly enough". A key application where this problem occurs is in video surveillance where the goal is to separate a slowly changing background (Lt) from moving foreground objects (St) on-the-fly. To solve the above problem, we introduce a novel solution called Recursive Projected CS (ReProCS). Under mild assumptions, we show that, with high probability (w.h.p.), ReProCS can exactly recover the support set of St at all times; and the reconstruction errors of both St and Lt are upper bounded by a time-invariant and small value at all times. <s> BIB002 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> D. Robust Subspace Tracking (RST) or dynamic RPCA <s> We propose a new method for robust PCA -- the task of recovering a low-rank matrix from sparse corruptions that are of unknown value and support. Our method involves alternating between projecting appropriate residuals onto the set of low-rank matrices, and the set of sparse matrices; each projection is {\em non-convex} but easy to compute. In spite of this non-convexity, we establish exact recovery of the low-rank matrix, under the same conditions that are required by existing methods (which are based on convex optimization). For an $m \times n$ input matrix ($m \leq n)$, our method has a running time of $O(r^2mn)$ per iteration, and needs $O(\log(1/\epsilon))$ iterations to reach an accuracy of $\epsilon$. This is close to the running time of simple PCA via the power method, which requires $O(rmn)$ per iteration, and $O(\log(1/\epsilon))$ iterations. 
In contrast, existing methods for robust PCA, which are based on convex optimization, have $O(m^2n)$ complexity per iteration, and take $O(1/\epsilon)$ iterations, i.e., exponentially more iterations for the same accuracy. ::: Experiments on both synthetic and real data establishes the improved speed and accuracy of our method over existing convex implementations. <s> BIB003 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> D. Robust Subspace Tracking (RST) or dynamic RPCA <s> Purpose ::: To apply the low-rank plus sparse (L+S) matrix decomposition model to reconstruct undersampled dynamic MRI as a superposition of background and dynamic components in various problems of clinical interest. <s> BIB004 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> D. Robust Subspace Tracking (RST) or dynamic RPCA <s> In this work, we study the robust subspace tracking (RST) problem and obtain one of the first two provable guarantees for it. The goal of RST is to track sequentially arriving data vectors that lie in a slowly changing low-dimensional subspace, while being robust to corruption by additive sparse outliers. It can also be interpreted as a dynamic (time-varying) extension of robust PCA (RPCA), with the minor difference that RST also requires a short tracking delay. We develop a recursive projected compressive sensing algorithm that we call Nearly Optimal RST via ReProCS (ReProCS-NORST) because its tracking delay is nearly optimal. We prove that NORST solves both the RST and the dynamic RPCA problems under weakened standard RPCA assumptions, two simple extra assumptions (slow subspace change and most outlier magnitudes lower bounded), and a few minor assumptions. ::: Our guarantee shows that NORST enjoys a near optimal tracking delay of $O(r \log n \log(1/\epsilon))$. Its required delay between subspace change times is the same, and its memory complexity is $n$ times this value. Thus both these are also nearly optimal. Here $n$ is the ambient space dimension, $r$ is the subspaces' dimension, and $\epsilon$ is the tracking accuracy. NORST also has the best outlier tolerance compared with all previous RPCA or RST methods, both theoretically and empirically (including for real videos), without requiring any model on how the outlier support is generated. This is possible because of the extra assumptions it uses. <s> BIB005 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> D. Robust Subspace Tracking (RST) or dynamic RPCA <s> Dynamic robust PCA refers to the dynamic (time-varying) extension of the robust PCA (RPCA) problem. It assumes that the true (uncorrupted) data lies in a low-dimensional subspace that can change with time, albeit slowly. The goal is to track this changing subspace over time in the presence of sparse outliers. This work provides the first guarantee for dynamic RPCA that holds under weakened standard RPCA assumptions, slow subspace change and two mild assumptions. We analyze a simple algorithm based on the Recursive Projected Compressive Sensing (ReProCS) framework. Our result is significant because (i) it removes the strong assumptions needed by the two previous complete guarantees for ReProCS-based algorithms; (ii) it shows that it is possible to achieve significantly improved outlier tolerance than all existing provable RPCA methods by exploiting slow subspace change and a lower bound on outlier magnitudes; and (iii) it proves that the proposed algorithm is online, fast, and memory-efficient. <s> BIB006
At each time $t$, we get a data vector $m_t \in \mathbb{R}^n$ that satisfies $m_t = \ell_t + s_t + w_t$, where $w_t$ is small unstructured noise, $s_t$ is the sparse outlier vector, and $\ell_t$ is the true data vector that lies in a fixed or slowly changing low-dimensional subspace of $\mathbb{R}^n$, i.e., $\ell_t = P_{(t)} a_t$, where $P_{(t)}$ is an $n \times r$ basis matrix (matrix with mutually orthonormal columns) with $r \ll n$ and with $\|(I - P_{(t-1)} P_{(t-1)}{}') P_{(t)}\|$ small compared to $\|P_{(t)}\| = 1$. We use $\mathcal{T}_t$ to denote the support set of $s_t$. Given an initial subspace estimate, $\hat{P}_0$, the goal is to track $\mathrm{span}(P_{(t)})$ and $\ell_t$ either immediately or within a short delay BIB002 , BIB006 , BIB005 . A by-product is that $\ell_t$, $s_t$, and $\mathcal{T}_t$ can also be tracked on-the-fly. The initial subspace estimate, $\hat{P}_0$, can be computed by using only a few iterations of any of the static RPCA solutions (described below), e.g., PCP BIB001 or AltProj BIB003 , applied to the first $t_{train}$ data samples $M_{[1, t_{train}]}$. Typically, $t_{train} = Cr$ suffices. Alternatively, in some applications, e.g., video surveillance, it is valid to assume that outlier-free data is available. In these situations, simple SVD can be used too. Technically, dynamic RPCA refers to the offline version of the RST problem. Define matrices $L, S, W, M$ with $L = [\ell_1, \ell_2, \dots, \ell_d]$ and $M, S, W$ similarly defined. The goal is to recover the matrix $L$ and its column space with $\epsilon$ error.

[Fig. 1 caption. (a) A good video layering solution is the first step to simplify many computer vision and video analytics' tasks. Three frames of a video are shown in the first column. The background images for these frames are shown in the second column; notice that they all look very similar and hence are well modeled as forming a low rank matrix. The foreground support is shown in the third column; this clearly indicates that the foreground is sparse and changes faster than the background. Result taken from BIB005 , code at https://github.com/praneethmurthy/NORST. (b) Low-rank and sparse matrix decomposition for accelerated dynamic MRI BIB004 . The first column shows three frames of abdomen cine data. The second column shows the slowly changing background part of this sequence, while the third column shows the fast changing sparse region of interest (ROI), also called the "dynamic component". These are the reconstructed columns obtained from 8-fold undersampled data; they were reconstructed using under-sampled stable PCP BIB004 .]
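The data model above is easy to simulate. The following toy generator (an illustrative sketch, not code from the cited works) produces a sequence obeying $m_t = \ell_t + s_t + w_t$ with a fixed subspace; a slowly changing subspace could be mimicked by applying a small rotation to $P$ every few hundred frames. All dimensions, sparsity levels, and noise levels are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, d = 100, 5, 2000                            # ambient dim, subspace dim, length
P, _ = np.linalg.qr(rng.standard_normal((n, r)))  # n x r basis matrix (orthonormal columns)

M = np.zeros((n, d))
for t in range(d):
    ell_t = P @ rng.standard_normal(r)            # true data vector, lies in span(P)
    s_t = np.zeros(n)
    T_t = rng.choice(n, size=5, replace=False)    # sparse outlier support T_t
    s_t[T_t] = 10 * rng.standard_normal(5)        # large-magnitude outliers
    w_t = 0.01 * rng.standard_normal(n)           # small unstructured noise
    M[:, t] = ell_t + s_t + w_t                   # m_t = ell_t + s_t + w_t
```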
Static and Dynamic Robust PCA and Matrix Completion: A Review <s> E. Identifiability and other assumptions <s> This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the L1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces. <s> BIB001 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> E. Identifiability and other assumptions <s> This work studies the recursive robust principal components' analysis(PCA) problem. Here, "robust" refers to robustness to both independent and correlated sparse outliers. If the outlier is the signal-of-interest, this problem can be interpreted as one of recursively recovering a time sequence of sparse vectors, St, in the presence of large but structured noise, Lt. The structure that we assume on Lt is that Lt is dense and lies in a low dimensional subspace that is either fixed or changes "slowly enough". A key application where this problem occurs is in video surveillance where the goal is to separate a slowly changing background (Lt) from moving foreground objects (St) on-the-fly. To solve the above problem, we introduce a novel solution called Recursive Projected CS (ReProCS). Under mild assumptions, we show that, with high probability (w.h.p.), ReProCS can exactly recover the support set of St at all times; and the reconstruction errors of both St and Lt are upper bounded by a time-invariant and small value at all times. <s> BIB002 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> E. Identifiability and other assumptions <s> In this work, we study the online robust principal components' analysis (RPCA) problem. In recent work, RPCA has been defined as a problem of separating a low-rank matrix (true data), $L$, and a sparse matrix (outliers), $S$, from their sum, $M:=L + S$. A more general version of this problem is to recover $L$ and $S$ from $M:=L + S + W$ where $W$ is the matrix of unstructured small noise/corruptions. An important application where this problem occurs is in video analytics in trying to separate sparse foregrounds (e.g., moving objects) from slowly changing backgrounds. While there has been a large amount of recent work on solutions and guarantees for the batch RPCA problem, the online problem is largely open."Online" RPCA is the problem of doing the above on-the-fly with the extra assumptions that the initial subspace is accurately known and that the subspace from which $l_t$ is generated changes slowly over time. 
We develop and study a novel "online" RPCA algorithm based on the recently introduced Recursive Projected Compressive Sensing (ReProCS) framework. Our algorithm improves upon the original ReProCS algorithm and it also returns even more accurate offline estimates. The key contribution of this work is a correctness result (complete performance guarantee) for this algorithm under reasonably mild assumptions. By using extra assumptions -- accurate initial subspace knowledge, slow subspace change, and clustered eigenvalues -- we are able to remove one important limitation of batch RPCA results and two key limitations of a recent result for ReProCS for online RPCA. To our knowledge, this work is among the first few correctness results for online RPCA. Most earlier results were only partial results, i.e., they required an assumption on intermediate algorithm estimates. <s> BIB003 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> E. Identifiability and other assumptions <s> In this work, we study the robust subspace tracking (RST) problem and obtain one of the first two provable guarantees for it. The goal of RST is to track sequentially arriving data vectors that lie in a slowly changing low-dimensional subspace, while being robust to corruption by additive sparse outliers. It can also be interpreted as a dynamic (time-varying) extension of robust PCA (RPCA), with the minor difference that RST also requires a short tracking delay. We develop a recursive projected compressive sensing algorithm that we call Nearly Optimal RST via ReProCS (ReProCS-NORST) because its tracking delay is nearly optimal. We prove that NORST solves both the RST and the dynamic RPCA problems under weakened standard RPCA assumptions, two simple extra assumptions (slow subspace change and most outlier magnitudes lower bounded), and a few minor assumptions. ::: Our guarantee shows that NORST enjoys a near optimal tracking delay of $O(r \log n \log(1/\epsilon))$. Its required delay between subspace change times is the same, and its memory complexity is $n$ times this value. Thus both these are also nearly optimal. Here $n$ is the ambient space dimension, $r$ is the subspaces' dimension, and $\epsilon$ is the tracking accuracy. NORST also has the best outlier tolerance compared with all previous RPCA or RST methods, both theoretically and empirically (including for real videos), without requiring any model on how the outlier support is generated. This is possible because of the extra assumptions it uses. <s> BIB004
The above problem definitions do not ensure identifiability since either of $L$ or $S$ can be both low-rank and sparse. One way to ensure that $L$ is not sparse is by requiring that its left and right singular vectors are dense or "incoherent" w.r.t. a sparse vector BIB001 . We define this below.

Definition 1.1 (µ-Incoherence/Denseness). We say that an $n \times r$ basis matrix (matrix with mutually orthonormal columns) $P$ is µ-incoherent if $\max_{i} \|P^{(i)}\|_2^2 \le \mu \frac{r}{n}$, where $\mu \ge 1$ is called the (in)coherence parameter that quantifies the non-denseness of $P$. Here $P^{(i)}$ denotes the $i$-th row of $P$.

We can ensure that $S$ is not low-rank in one of two ways. The first is to impose upper bounds on max-outlier-frac-row and max-outlier-frac-col (the maximum fraction of nonzero entries, i.e., outliers, in any row and in any column of $S$, respectively). The second is to assume that the support set of $S$ is generated uniformly at random (or according to the independent identically distributed (iid) Bernoulli model) and then to just bound the total number of its nonzero entries. The uniform random or iid Bernoulli models ensure roughly equal numbers of nonzero entries in each row/column. Consider the Robust Subspace Tracking problem. The most general nonstationary model, which allows the subspace to change at each time, is not even identifiable, since at least $r$ data points are needed to compute an $r$-dimensional subspace even in the noise-free full data setting. One way BIB002 , BIB003 , BIB004 to ensure identifiability of the changing subspaces is to assume that they are piecewise constant, i.e., that $P_{(t)} = P_j$ for all $t \in [t_j, t_{j+1})$, $j = 1, 2, \dots, J$, with $t_0 = 1$ and $t_{J+1} = d$. With the above model, in general, $r_L = rJ$ (except if subspace directions are repeated more than once, or if only a few subspace directions change at some change times).
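The incoherence parameter of a given basis matrix is directly computable from Definition 1.1. The short sketch below (illustrative only; all sizes are arbitrary) returns the smallest µ for which the definition holds, and contrasts a dense random basis with a maximally coherent one.

```python
import numpy as np

def coherence(P):
    """Smallest mu with max_i ||P^(i)||_2^2 <= mu * r / n, for an n x r basis matrix P."""
    n, r = P.shape
    row_norms_sq = (P ** 2).sum(axis=1)   # ||P^(i)||_2^2 for each row i
    return row_norms_sq.max() * n / r

rng = np.random.default_rng(2)
P_dense, _ = np.linalg.qr(rng.standard_normal((1000, 10)))
print(coherence(P_dense))          # small (here around 3): rows are spread out / dense

P_coherent = np.zeros((1000, 10))
P_coherent[:10, :10] = np.eye(10)  # columns are standard basis vectors (sparse)
print(coherence(P_coherent))       # n/r = 100: maximally coherent, i.e., non-dense
```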
Static and Dynamic Robust PCA and Matrix Completion: A Review <s> F. Matrix Completion <s> On the heels of compressed sensing, a remarkable new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries, and comes up in many areas of science and engineering including collaborative filtering, machine learning, control, remote sensing, and computer vision to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown n x n matrix of low rank r from just about nr log^2 n noisy samples with an error which is proportional to the noise level. We present numerical results which complement our quantitative analysis and show that, in practice, nuclear norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout. <s> BIB001 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> F. Matrix Completion <s> We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? ::: We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m >= C n^{1.2} r log n for some positive numerical constant C, then with very high probability, most n by n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information. <s> BIB002 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> F. Matrix Completion <s> In this paper, we consider the problem of Robust Matrix Completion (RMC) where the goal is to recover a low-rank matrix by observing a small number of its entries out of which a few can be arbitrarily corrupted. We propose a simple projected gradient descent method to estimate the low-rank matrix that alternately performs a projected gradient descent step and cleans up a few of the corrupted entries using hard-thresholding. Our algorithm solves RMC using nearly optimal number of observations as well as nearly optimal number of corruptions. 
Our result also implies significant improvement over the existing time complexity bounds for the low-rank matrix completion problem. Finally, an application of our result to the robust PCA problem (low-rank+sparse matrix separation) leads to a nearly linear time (in matrix dimensions) algorithm for the same; existing state-of-the-art methods require quadratic time. Our empirical results corroborate our theoretical results and show that even for moderate sized problems, our method for robust PCA is an order of magnitude faster than the existing methods. <s> BIB003
(Low Rank) Matrix Completion (MC) refers to the problem of completing a rank $r$ matrix $L$ from a subset of its entries. We use $\Omega$ to refer to the set of indices of the observed entries of $L$ and we use the notation $\mathcal{P}_\Omega(M)$ to refer to the matrix formed by setting the unobserved entries to zero. Thus, given $M := \mathcal{P}_\Omega(L)$, the goal of MC is to recover $L$ from $M$. The set $\Omega$ is known. To interpret this as a special case of RPCA, notice that one can write $M = \mathcal{P}_\Omega(L) = L - \mathcal{P}_{\Omega^c}(L)$, where $\Omega^c$ refers to the complement of the set $\Omega$. By letting $S = -\mathcal{P}_{\Omega^c}(L)$, this becomes a special case of RPCA. Identifiability. Like RPCA, this problem is also not identifiable in general. For example, if $L$ is low-rank and sparse and if one of its nonzero entries is missing, there is no way to "interpolate" the missing entry from the observed entries without extra assumptions. This issue can be resolved by assuming that the left and right singular vectors of $L$ are µ-incoherent as defined above. In fact, incoherence was first introduced for the MC problem in BIB002 , and later used for RPCA. Similarly, it is also problematic if the set $\Omega$ contains all entries corresponding to just one or two columns (or rows) of $L$; then, even with the incoherence assumption, it is not possible to correctly "interpolate" all the columns (rows) of $L$. This problem can be resolved by assuming that $\Omega$ is generated uniformly at random (or according to the iid Bernoulli model) with a lower bound on its size. For a detailed discussion of this issue, see BIB002 , BIB001 . "Robust MC" (RMC) or "Robust PCA with Missing Data" , BIB003 is an extension of both RPCA and MC. It involves recovering $L$ from $M$ when $M = \mathcal{P}_\Omega(L + S)$. Thus the entries are corrupted and not all of them are even observed. In this case, there is, of course, no way to recover all of $S$. Also, the only problematic outliers are the ones that correspond to the observed entries, since $M = \mathcal{P}_\Omega(L) + \mathcal{P}_\Omega(S)$. Dynamic MC is the same as the problem of subspace tracking with missing data (ST-missing). This can be defined in a fashion analogous to the RST problem described above. Similarly for dynamic RMC.
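The operator $\mathcal{P}_\Omega$ and the reduction of MC to RPCA can be written in a few lines. The sketch below (illustrative, with arbitrary sizes and an iid Bernoulli $\Omega$) verifies the identity $M = \mathcal{P}_\Omega(L) = L + S$ with $S = -\mathcal{P}_{\Omega^c}(L)$.

```python
import numpy as np

def P_Omega(M, mask):
    """Set entries outside Omega (mask == False) to zero."""
    return np.where(mask, M, 0.0)

rng = np.random.default_rng(3)
L = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 80))  # rank-5 matrix
mask = rng.random(L.shape) < 0.4     # iid Bernoulli(0.4) observed-entry model
M = P_Omega(L, mask)                 # observed data for matrix completion
S = -P_Omega(L, ~mask)               # the equivalent RPCA outlier matrix
assert np.allclose(M, L + S)         # MC as a special case of RPCA
```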
Static and Dynamic Robust PCA and Matrix Completion: A Review <s> G. Other Extensions <s> In this work, we focus on the problem of recursively recovering a time sequence of sparse signals, with time-varying sparsity patterns, from highly undersampled measurements corrupted by very large but correlated noise. It is assumed that the noise is correlated enough to have an approximately low rank covariance matrix that is either constant, or changes slowly, with time. We show how our recently introduced Recursive Projected CS (ReProCS) and modified-ReProCS ideas can be used to solve this problem very effectively. To the best of our knowledge, except for the recent work of dense error correction via l 1 minimization, which can handle another kind of large but “structured” noise (the noise needs to be sparse), none of the other works in sparse recovery have studied the case of any other kind of large noise. <s> BIB001 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> G. Other Extensions <s> We consider the problem of recovering a matrix M that is the sum of a low-rank matrix L and a sparse matrix S from a small set of linear measurements of the form y = A(M)= A(L + S). This model subsumes three important classes of signal recovery problems: compressive sensing, affine rank minimization, and robust principal component analysis. We propose a natural optimization problem for signal recovery under this model and develop a new greedy algorithm called SpaRCS to solve it. Empirically, SpaRCS inherits a number of desirable properties from the state-of-the-art CoSaMP and ADMiRA algorithms, including exponential convergence and efficient implementation. Simulation results with video compressive sensing, hyperspectral imaging, and robust matrix completion data sets demonstrate both the accuracy and efficacy of the algorithm. <s> BIB002 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> G. Other Extensions <s> This work studies the recursive robust principal components' analysis(PCA) problem. Here, "robust" refers to robustness to both independent and correlated sparse outliers. If the outlier is the signal-of-interest, this problem can be interpreted as one of recursively recovering a time sequence of sparse vectors, St, in the presence of large but structured noise, Lt. The structure that we assume on Lt is that Lt is dense and lies in a low dimensional subspace that is either fixed or changes "slowly enough". A key application where this problem occurs is in video surveillance where the goal is to separate a slowly changing background (Lt) from moving foreground objects (St) on-the-fly. To solve the above problem, we introduce a novel solution called Recursive Projected CS (ReProCS). Under mild assumptions, we show that, with high probability (w.h.p.), ReProCS can exactly recover the support set of St at all times; and the reconstruction errors of both St and Lt are upper bounded by a time-invariant and small value at all times. <s> BIB003 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> G. Other Extensions <s> We consider the problem of recovering a target matrix that is a superposition of low-rank and sparse components, from a small set of linear measurements. This problem arises in compressed sensing of structured high-dimensional signals such as videos and hyperspectral images, as well as in the analysis of transformation invariant low-rank recovery. 
We analyze the performance of the natural convex heuristic for solving this problem, under the assumption that measurements are chosen uniformly at random. We prove that this heuristic exactly recovers low-rank and sparse terms, provided the number of observations exceeds the number of intrinsic degrees of freedom of the component signals by a polylogarithmic factor. Our analysis introduces several ideas that may be of independent interest for the more general problem of compressive sensing of superpositions of structured signals.1 <s> BIB004 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> G. Other Extensions <s> Given the noiseless superposition of a low-rank matrix plus the product of a known fat compression matrix times a sparse matrix, the goal of this paper is to establish deterministic conditions under which exact recovery of the low-rank and sparse components becomes possible. This fundamental identifiability issue arises with traffic anomaly detection in backbone networks, and subsumes compressed sensing as well as the timely low-rank plus sparse matrix recovery tasks encountered in matrix decomposition problems. Leveraging the ability of l1 and nuclear norms to recover sparse and low-rank matrices, a convex program is formulated to estimate the unknowns. Analysis and simulations confirm that the said convex program can recover the unknowns for sufficiently low-rank and sparse enough components, along with a compression matrix possessing an isometry property when restricted to operate on sparse vectors. When the low-rank, sparse, and compression matrices are drawn from certain random ensembles, it is established that exact recovery is possible with high probability. First-order algorithms are developed to solve the nonsmooth convex optimization problem with provable iteration complexity guarantees. Insightful tests with synthetic and real network data corroborate the effectiveness of the novel approach in unveiling traffic anomalies across flows and time, and its ability to outperform existing alternatives. <s> BIB005 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> G. Other Extensions <s> This paper designs and extensively evaluates an online algorithm, called practical recursive projected compressive sensing (Prac-ReProCS), for recovering a time sequence of sparse vectors St and a time sequence of dense vectors L ::: t ::: from their sum, M ::: t ::: : = S ::: t ::: + L ::: t ::: , when the L ::: t ::: 's lie in a slowly changing low-dimensional subspace of the full space. A key application where this problem occurs is in real-time video layering where the goal is to separate a video sequence into a slowly changing background sequence and a sparse foreground sequence that consists of one or more moving regions/objects on-the-fly. Prac-ReProCS is a practical modification of its theoretical counterpart which was analyzed in our recent work. Extension to the undersampled case is also developed. Extensive experimental comparisons demonstrating the advantage of the approach for both simulated and real videos, over existing batch and recursive methods, are shown. <s> BIB006 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> G. Other Extensions <s> Purpose ::: To apply the low-rank plus sparse (L+S) matrix decomposition model to reconstruct undersampled dynamic MRI as a superposition of background and dynamic components in various problems of clinical interest. <s> BIB007
In many of the applications of RPCA, the practical goal is often to find the outliers or the outlier locations (outlier support). For example, this is often the case in the video analytics application. This is also the case in the anomaly detection application. In these situations, robust PCA should really be called "robust sparse recovery", or "sparse recovery in large but structured noise", with "structure" meaning that the noise lies in a fixed or slowly changing low-dimensional subspace BIB003 . Another useful extension is undersampled or compressive RPCA, or robust Compressive Sensing (CS) BIB001 , BIB002 , BIB004 , BIB005 , BIB007 . Instead of observing the matrix $M$, one only has access to a set of $m < n$ random linear projections of each column of $M$, i.e., to $Z = AM$ where $A$ is a fat matrix. An important application of this setting is in dynamic MRI imaging when the image sequence is modeled as sparse + low-rank BIB007 . An alternative formulation is Robust CS, where one observes $Z := AS + L$ BIB001 , BIB006 , BIB002 , BIB005 and the goal is to recover $S$ while being robust to $L$. This would be the dynamic MRI problem if the low rank corruption $L$ were due to measurement noise.
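The compressive measurement model is equally simple to write down. Below is an illustrative sketch (all sizes and distributions are arbitrary choices, not taken from the cited papers) that forms $Z = AM$ with a fat iid Gaussian $A$, a standard generic choice in compressive sensing.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, m = 100, 200, 40                        # m < n: undersampled by a factor n/m
L = rng.standard_normal((n, 5)) @ rng.standard_normal((5, d))  # low-rank part
S = np.zeros((n, d))
idx = rng.random((n, d)) < 0.02               # sparse outlier support
S[idx] = 10 * rng.standard_normal(idx.sum())
A = rng.standard_normal((m, n)) / np.sqrt(m)  # fat random Gaussian measurement matrix
Z = A @ (L + S)                               # m random linear projections per column
```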
Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Principal Component Pursuit (PCP): a convex programming solution <s> One of the aims of a principal component analysis (PCA) is to reduce the dimensionality of a collection of observations. If we plot the first two principal components of the observations, it is often the case that one can already detect the main structure of the data. Another aim is to detect atypical observations in a graphical way, by looking at outlying observations on the principal axes. <s> BIB001 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Principal Component Pursuit (PCP): a convex programming solution <s> When faced with high-dimensional data, one often uses principal component analysis (PCA) for dimension reduction. Classical PCA constructs a set of uncorrelated variables, which correspond to eigenvectors of the sample covariance matrix. However, it is well-known that this covariance matrix is strongly affected by anomalous observations. It is therefore necessary to apply robust methods that are resistant to possible outliers. ::: ::: Li and Chen [J. Am. Stat. Assoc. 80 (1985) 759] proposed a solution based on projection pursuit (PP). The idea is to search for the direction in which the projected observations have the largest robust scale. In subsequent steps, each new direction is constrained to be orthogonal to all previous directions. This method is very well suited for high-dimensional data, even when the number of variables p is higher than the number of observations n. However, the algorithm of Li and Chen has a high computational cost. In the references [C. Croux, A. Ruiz-Gazen, in COMPSTAT: Proceedings in Computational Statistics 1996, Physica-Verlag, Heidelberg, 1996, pp. 211–217; C. Croux and A. Ruiz-Gazen, High Breakdown Estimators for Principal Components: the Projection-Pursuit Approach Revisited, 2000, submitted for publication.], a computationally much more attractive method is presented, but in high dimensions (large p) it has a numerical accuracy problem and still consumes much computation time. ::: ::: In this paper, we construct a faster two-step algorithm that is more stable numerically. The new algorithm is illustrated on a data set with four dimensions and on two chemometrical data sets with 1200 and 600 dimensions. <s> BIB002 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Principal Component Pursuit (PCP): a convex programming solution <s> Abstract It is now well-known that one can reconstruct sparse or compressible signals accurately from a very limited number of measurements, possibly contaminated with noise. This technique known as “compressed sensing” or “compressive sampling” relies on properties of the sensing matrix such as the restricted isometry property . In this Note, we establish new results about the accuracy of the reconstruction from undersampled measurements which improve on earlier estimates, and have the advantage of being more elegant. To cite this article: E.J. Candes, C. R. Acad. Sci. Paris, Ser. I 346 (2008). <s> BIB003 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Principal Component Pursuit (PCP): a convex programming solution <s> This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? 
We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the L1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces. <s> BIB004 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Principal Component Pursuit (PCP): a convex programming solution <s> Suppose a given observation matrix can be decomposed as the sum of a low-rank matrix and a sparse matrix, and the goal is to recover these individual components from the observed sum. Such additive decompositions have applications in a variety of numerical problems including system identification, latent variable graphical modeling, and principal components analysis. We study conditions under which recovering such a decomposition is possible via a combination of l1 norm and trace norm minimization. We are specifically interested in the question of how many sparse corruptions are allowed so that convex programming can still achieve accurate recovery, and we obtain stronger recovery guarantees than previous studies. Moreover, we do not assume that the spatial pattern of corruptions is random, which stands in contrast to related analyses under such assumptions via matrix completion. <s> BIB005 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Principal Component Pursuit (PCP): a convex programming solution <s> We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? ::: We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m >= C n^{1.2} r log n for some positive numerical constant C, then with very high probability, most n by n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information. <s> BIB006
The first provably correct solution to robust PCA via S+LR was introduced in parallel works by Candès, Wright, Li, and Ma BIB004 (where they called it a solution to robust PCA) and by Chandrasekharan et al. . Both proposed to solve the following convex program, which was referred to as Principal Component Pursuit (PCP) in BIB004 : $\min_{\tilde{L}, \tilde{S}} \|\tilde{L}\|_* + \lambda \|\tilde{S}\|_{vec(1)}$ subject to $\tilde{L} + \tilde{S} = M$. Here $\|A\|_{vec(1)}$ denotes the vector $\ell_1$ norm of the matrix $A$ (sum of absolute values of all its entries) and $\|A\|_*$ denotes the nuclear norm (sum of its singular values). PCP is the first known polynomial time solution to RPCA that is also provably correct. The two parallel papers BIB004 , used different approaches to arrive at a correctness result for it. The result of BIB005 improved that of . Suppose that PCP can be solved exactly. Denote its solutions by $\hat{L}$, $\hat{S}$. The result of BIB004 says the following.

Theorem 2.2. If 1) the left and right singular vectors of $L$, denoted $U$ and $V$, are $\mu$-incoherent, 2) $U$ and $V$ satisfy the strong incoherence condition $\max_{i,j} |(UV')_{i,j}| \le \sqrt{\mu r_L/(nd)}$, 3) the support of $S$ is generated uniformly at random, and 4) the support size of $S$, denoted $m$, and the rank of $L$, $r_L$, satisfy $m \le c\,nd$ and $r_L \le c \min(n,d)/(\mu \log^2 \max(n,d))$, then, with probability at least $1 - cn^{-10}$, the PCP convex program with $\lambda = 1/\sqrt{\max(n,d)}$ returns $\hat{L} = L$ and $\hat{S} = S$.

The second condition (strong incoherence) requires that the inner product between a row of $U$ and a row of $V$ be upper bounded. Observe that the required bound is $1/\sqrt{r_L}$ times what left and right incoherence would imply (by using the Cauchy-Schwartz inequality). This is why it is a stronger requirement. The guarantee of BIB005 , which improved the result of , says the following. We give below a simpler special case of [BIB005 , Theorem 2] applied with $\rho = n/d$ and for the exact (noise-free) PCP program given above.

Theorem 2.3. If $L$ is $\mu$-incoherent, max(max-outlier-frac-row, max-outlier-frac-col) $\le c/(\mu r_L)$, and the parameter $\lambda$ lies in a certain range (the bounds on $\lambda$ depend on max(max-outlier-frac-row, max-outlier-frac-col)), then $\hat{L} = L$ and $\hat{S} = S$.

Theorem 2.3 does not assume a model on outlier support, but, because of that, it needs a much tighter bound of $O(1/r_L)$ on outlier fractions. Theorem 2.2 assumes uniform random outlier support, along with the support size $m$ bounded by $c\,nd$. For large $n, d$, this is approximately equivalent to allowing max(max-outlier-frac-row, max-outlier-frac-col) $\le c$. This is true because for large $n, d$, with high probability (w.h.p.), (i) uniform random support with size $m$ is nearly equivalent BIB002 to Bernoulli support with the probability of an index being part of the support being $\rho = m/(nd)$ [10, Appendix 7.1] (recovery under one model implies recovery under the other model with the same order of probability); and (ii) with the Bernoulli model, max(max-outlier-frac-row, max-outlier-frac-col) is close to $\rho$ (follows using the Hoeffding inequality, for example). Why PCP works. It is well known from the compressive sensing literature (and earlier) that the vector $\ell_1$ norm serves as a convex surrogate for the support size of a vector, or of a vectorized matrix BIB003 . In a similar fashion, the nuclear norm serves as a convex surrogate for the rank of a matrix , BIB006 . Thus, while the program that tries to minimize the rank of $\tilde{L}$ and the sparsity of $\tilde{S}$ involves an impractical combinatorial search, PCP is convex and solvable in polynomial time BIB004 .
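In practice, PCP is usually solved with an augmented Lagrangian / ADMM-style scheme that alternates two proximal steps: singular value thresholding (the prox of the nuclear norm) and entrywise soft thresholding (the prox of the vector $\ell_1$ norm). The following is a minimal sketch of such an inexact-ALM solver, not the exact implementation of BIB004 ; the default $\lambda$ follows Theorem 2.2, while the penalty $\mu$ and the stopping rule are common heuristic choices.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft thresholding: prox of the vector l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def pcp(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Minimal inexact-ALM sketch for: min ||L||_* + lam ||S||_vec(1) s.t. L + S = M."""
    n, d = M.shape
    lam = 1.0 / np.sqrt(max(n, d)) if lam is None else lam
    mu = 0.25 * n * d / np.abs(M).sum() if mu is None else mu  # heuristic penalty
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                    # dual variable
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)   # L-update
        S = soft(M - L + Y / mu, lam / mu)  # S-update
        R = M - L - S                       # constraint residual
        Y = Y + mu * R                      # dual ascent
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```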
Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Alternating Minimization (AltProj): a non-convex solution <s> One of the aims of a principal component analysis (PCA) is to reduce the dimensionality of a collection of observations. If we plot the first two principal components of the observations, it is often the case that one can already detect the main structure of the data. Another aim is to detect atypical observations in a graphical way, by looking at outlying observations on the principal axes. <s> BIB001 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Alternating Minimization (AltProj): a non-convex solution <s> In the recent work of Candes et al, the problem of recovering low rank matrix corrupted by i.i.d. sparse outliers is studied and a very elegant solution, principal component pursuit, is proposed. It is motivated as a tool for video surveillance applications with the background image sequence forming the low rank part and the moving objects/persons/abnormalities forming the sparse part. Each image frame is treated as a column vector of the data matrix made up of a low rank matrix and a sparse corruption matrix. Principal component pursuit solves the problem under the assumptions that the singular vectors of the low rank matrix are spread out and the sparsity pattern of the sparse matrix is uniformly random. However, in practice, usually the sparsity pattern and the signal values of the sparse part (moving persons/objects) change in a correlated fashion over time, for e.g., the object moves slowly and/or with roughly constant velocity. This will often result in a low rank sparse matrix. ::: For video surveillance applications, it would be much more useful to have a real-time solution. In this work, we study the online version of the above problem and propose a solution that automatically handles correlated sparse outliers. The key idea of this work is as follows. Given an initial estimate of the principal directions of the low rank part, we causally keep estimating the sparse part at each time by solving a noisy compressive sensing type problem. The principal directions of the low rank part are updated every-so-often. In between two update times, if new Principal Components' directions appear, the "noise" seen by the Compressive Sensing step may increase. This problem is solved, in part, by utilizing the time correlation model of the low rank part. We call the proposed solution "Real-time Robust Principal Components' Pursuit". <s> BIB002 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Alternating Minimization (AltProj): a non-convex solution <s> In this work, we focus on the problem of recursively recovering a time sequence of sparse signals, with time-varying sparsity patterns, from highly undersampled measurements corrupted by very large but correlated noise. It is assumed that the noise is correlated enough to have an approximately low rank covariance matrix that is either constant, or changes slowly, with time. We show how our recently introduced Recursive Projected CS (ReProCS) and modified-ReProCS ideas can be used to solve this problem very effectively. To the best of our knowledge, except for the recent work of dense error correction via l 1 minimization, which can handle another kind of large but “structured” noise (the noise needs to be sparse), none of the other works in sparse recovery have studied the case of any other kind of large noise. 
<s> BIB003 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Alternating Minimization (AltProj): a non-convex solution <s> This work studies the recursive robust principal components' analysis(PCA) problem. Here, "robust" refers to robustness to both independent and correlated sparse outliers. If the outlier is the signal-of-interest, this problem can be interpreted as one of recursively recovering a time sequence of sparse vectors, St, in the presence of large but structured noise, Lt. The structure that we assume on Lt is that Lt is dense and lies in a low dimensional subspace that is either fixed or changes "slowly enough". A key application where this problem occurs is in video surveillance where the goal is to separate a slowly changing background (Lt) from moving foreground objects (St) on-the-fly. To solve the above problem, we introduce a novel solution called Recursive Projected CS (ReProCS). Under mild assumptions, we show that, with high probability (w.h.p.), ReProCS can exactly recover the support set of St at all times; and the reconstruction errors of both St and Lt are upper bounded by a time-invariant and small value at all times. <s> BIB004 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Alternating Minimization (AltProj): a non-convex solution <s> Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix Challenge. In the alternating minimization approach, the low-rank target matrix is written in a bi-linear form, i.e. $X = UV^\dag$; the algorithm then alternates between finding the best $U$ and the best $V$. Typically, each alternating step in isolation is convex and tractable. However the overall problem becomes non-convex and there has been almost no theoretical understanding of when this approach yields a good result. In this paper we present first theoretical analysis of the performance of alternating minimization for matrix completion, and the related problem of matrix sensing. For both these problems, celebrated recent results have shown that they become well-posed and tractable once certain (now standard) conditions are imposed on the problem. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, our paper shows that alternating minimization guarantees faster (in particular, geometric) convergence to the true matrix, while allowing a simpler analysis. <s> BIB005 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Alternating Minimization (AltProj): a non-convex solution <s> We propose a new method for robust PCA -- the task of recovering a low-rank matrix from sparse corruptions that are of unknown value and support. Our method involves alternating between projecting appropriate residuals onto the set of low-rank matrices, and the set of sparse matrices; each projection is {\em non-convex} but easy to compute. In spite of this non-convexity, we establish exact recovery of the low-rank matrix, under the same conditions that are required by existing methods (which are based on convex optimization). 
For an $m \times n$ input matrix ($m \leq n)$, our method has a running time of $O(r^2mn)$ per iteration, and needs $O(\log(1/\epsilon))$ iterations to reach an accuracy of $\epsilon$. This is close to the running time of simple PCA via the power method, which requires $O(rmn)$ per iteration, and $O(\log(1/\epsilon))$ iterations. In contrast, existing methods for robust PCA, which are based on convex optimization, have $O(m^2n)$ complexity per iteration, and take $O(1/\epsilon)$ iterations, i.e., exponentially more iterations for the same accuracy. ::: Experiments on both synthetic and real data establishes the improved speed and accuracy of our method over existing convex implementations. <s> BIB006 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Alternating Minimization (AltProj): a non-convex solution <s> Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. More than four decades after it was first proposed, the seminal error reduction algorithm of Gerchberg and Saxton and Fienup is still the popular choice for solving many variants of this problem. The algorithm is based on alternating minimization; i.e., it alternates between estimating the missing phase information, and the candidate solution. Despite its wide usage in practice, no global convergence guarantees for this algorithm are known. In this paper, we show that a (resampling) variant of this approach converges geometrically to the solution of one such problem—finding a vector $\bf x$ from ${\bf y}, {\bf A}$ , where ${\bf y} = \vert {\bf A}^T{\bf x}\vert$ and $\vert{\bf z}\vert$ denotes a vector of element-wise magnitudes of ${\bf z}$ —under the assumption that $ {\bf A}$ is Gaussian. Empirically, we demonstrate that alternating minimization performs similar to recently proposed convex techniques for this problem (which are based on “lifting” to a convex matrix problem) in sample complexity and robustness to noise. However, it is much more efficient and can scale to large problems. Analytically, for a resampling version of alternating minimization, we show geometric convergence to the solution, and sample complexity that is off by log factors from obvious lower bounds. We also establish close to optimal scaling for the case when the unknown vector is sparse. Our work represents the first theoretical guarantee for alternating minimization (albeit with resampling) for any variant of phase retrieval problems in the non-convex setting. <s> BIB007 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Alternating Minimization (AltProj): a non-convex solution <s> We consider the fundamental problem of solving quadratic systems of equations in $n$ variables, where $y_i = |\langle \boldsymbol{a}_i, \boldsymbol{x} \rangle|^2$, $i = 1, \ldots, m$ and $\boldsymbol{x} \in \mathbb{R}^n$ is unknown. We propose a novel method, which starting with an initial guess computed by means of a spectral method, proceeds by minimizing a nonconvex functional as in the Wirtinger flow approach. There are several key distinguishing features, most notably, a distinct objective functional and novel update rules, which operate in an adaptive fashion and drop terms bearing too much influence on the search direction. These careful selection rules provide a tighter initial guess, better descent directions, and thus enhanced practical performance. 
On the theoretical side, we prove that for certain unstructured models of quadratic systems, our algorithms return the correct solution in linear time, i.e. in time proportional to reading the data $\{\boldsymbol{a}_i\}$ and $\{y_i\}$ as soon as the ratio $m/n$ between the number of equations and unknowns exceeds a fixed numerical constant. We extend the theory to deal with noisy systems in which we only have $y_i \approx |\langle \boldsymbol{a}_i, \boldsymbol{x} \rangle|^2$ and prove that our algorithms achieve a statistical accuracy, which is nearly un-improvable. We complement our theoretical study with numerical examples showing that solving random quadratic systems is both computationally and statistically not much harder than solving linear systems of the same size---hence the title of this paper. For instance, we demonstrate empirically that the computational cost of our algorithm is about four times that of solving a least-squares problem of the same size. <s> BIB008 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Alternating Minimization (AltProj): a non-convex solution <s> We study the phase retrieval problem, which solves quadratic system of equations, i.e., recovers a vector $\boldsymbol{x}\in \mathbb{R}^n$ from its magnitude measurements $y_i=|\langle \boldsymbol{a}_i, \boldsymbol{x}\rangle|, i=1,..., m$. We develop a gradient-like algorithm (referred to as RWF representing reshaped Wirtinger flow) by minimizing a nonconvex nonsmooth loss function. In comparison with existing nonconvex Wirtinger flow (WF) algorithm \cite{candes2015phase}, although the loss function becomes nonsmooth, it involves only the second power of variable and hence reduces the complexity. We show that for random Gaussian measurements, RWF enjoys geometric convergence to a global optimal point as long as the number $m$ of measurements is on the order of $n$, the dimension of the unknown $\boldsymbol{x}$. This improves the sample complexity of WF, and achieves the same sample complexity as truncated Wirtinger flow (TWF) \cite{chen2015solving}, but without truncation in gradient loop. Furthermore, RWF costs less computationally than WF, and runs faster numerically than both WF and TWF. We further develop the incremental (stochastic) reshaped Wirtinger flow (IRWF) and show that IRWF converges linearly to the true signal. We further establish performance guarantee of an existing Kaczmarz method for the phase retrieval problem based on its connection to IRWF. We also empirically demonstrate that IRWF outperforms existing ITWF algorithm (stochastic version of TWF) as well as other batch algorithms. <s> BIB009 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Alternating Minimization (AltProj): a non-convex solution <s> We develop two iterative algorithms for solving the low rank phase retrieval (LRPR) problem. LRPR refers to recovering a low-rank matrix $X$ from magnitude-only (phaseless) measurements of random linear projections of its columns. Both methods consist of a spectral initialization step followed by an iterative algorithm to maximize the observed data likelihood. We obtain sample complexity bounds for our proposed initialization approach to provide a good approximation of the true $X$. When the rank is low enough, these bounds are significantly lower than what existing single vector phase retrieval algorithms need. Via extensive experiments, we show that the same is also true for the proposed complete algorithms. <s> BIB010
Convex optimization programs as solutions to various originally non-convex problems (e.g., robust PCA, sparse recovery, low-rank matrix completion, phase retrieval) are, by now, well understood. They are often easy to formulate, are solvable in polynomial time (polynomial in the data size), and allow one to obtain strong guarantees with minimal sample complexity. While polynomial complexity is better than exponential, it is often too slow for today's big datasets. Moreover, the number of iterations needed for a convex program solver to get to within an ε ball of the true solution of the convex program is O(1/ε), and thus the typical complexity for a PCP solver is O(nd²/ε) BIB006 . To address this limitation, in more recent works BIB005 , BIB006 , BIB007 , BIB008 , BIB009 , BIB010 , authors have developed provably correct alternating minimization (alt-min) or projected gradient descent (GD) solutions that are provably much faster, but still allow for the same type of performance guarantees. (As an aside, the bounds on λ depend on max(max-outlier-frac-row, max-outlier-frac-col), and recovery under one of the two outlier-fraction models implies recovery under the other model with the same order of probability.) Both alt-min and GD have been used for a long time as practical heuristics for trying to solve various non-convex programs. The initialization either came from other prior information, or multiple random initializations were used to run the algorithm and the "best" final output was picked. The new ingredient in these provably correct alt-min or GD solutions is a carefully designed initialization scheme that already outputs an estimate that is "close enough" to the true one. Since these approaches do not use convex programs, they have been labeled as "non-convex" solutions. For RPCA, the first such provably correct solution was Alternating-Projection (AltProj) BIB006 . The idea itself is related to that of an earlier algorithm called GoDec . In fact, the recursive projected compressive sensing (ReProCS) BIB002 , BIB003 , BIB004 approach is an even earlier approach that also used a similar idea. AltProj is summarized in Algorithm 1. It alternates between estimating L with S fixed at its previous estimate, followed by projection onto the space of low-rank matrices, and then a similar procedure for S. Theorem 2 of BIB006 says the following (here, ‖·‖_max refers to the maximum magnitude nonzero entry of the matrix): if U, V are µ-incoherent and max(max-outlier-frac-row, max-outlier-frac-col) ≤ c/(µ r_L), then, with appropriately set parameters, AltProj outputs L̂, Ŝ satisfying ‖L̂ − L‖_F ≤ ε, ‖Ŝ − S‖_max ≤ ε/√(nd), and supp(Ŝ) ⊆ supp(S). AltProj needs time of order O(ndr_L² log(1/ε)) and memory of O(nd) to achieve the above error. Notice that even in the W = 0 case the above result only guarantees recovery with ε error, while PCP seems to guarantee "exact" recovery. This guarantee may seem weaker than that for PCP; however, it actually is not. The reason is that any solver (the iterative algorithm for finding a solution) of the convex program PCP is only guaranteed to get you within ε error of the true solution of PCP in a finite number of iterations. Why AltProj works. To understand why AltProj works, consider the rank-one case. As also explained in the original paper BIB006 , once the largest outliers are removed, it is expected that projecting onto the space of rank-one matrices returns a reasonable rank-one approximation of L, L̂_1. This means that the residual M − L̂_1 is a better estimate of S than M is. Because of this, it can be shown that Ŝ_1 is a better estimate of S than Ŝ_0, and so the residual M − Ŝ_1 is a better estimate of L than M − Ŝ_0. This, in turn, means L̂_2 will be a better estimate of L than L̂_1 is.
The proof that the initial estimate of L is good enough relies on incoherence of the left and right singular vectors of L and on the fact that no row or column has too many outliers. These two facts are also needed to show that each new estimate is better than the previous one. Algorithm 1 AltProj algorithm. AltProj for a rank-1 matrix L (HT denotes the hard thresholding operator, which zeroes all entries whose magnitude is at or below its threshold; see discussion in the text): • Initialize Ŝ_0 ← HT_{βσ_1(M)}(M). • For t = 0, 1, . . . , T: set L̂_{t+1} ← P_1(M − Ŝ_t) and Ŝ_{t+1} ← HT_{β(σ_2 + 0.5^t σ_1)}(M − L̂_{t+1}), where σ_i denotes the i-th singular value of (M − Ŝ_t). End For. For general rank-r matrices L: • The algorithm proceeds in r stages and does T iterations in each stage. • Stage 1 is the same as the one above. In stage k, P_1 is replaced by P_k (projection onto the space of rank-k matrices). The hard thresholding step uses a threshold of β(σ_{k+1} + 0.5^t σ_k).
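To make the alternation above concrete, the following is a minimal NumPy sketch of the rank-1 stage of AltProj. It is an illustrative implementation under simplifying assumptions (a caller-supplied constant beta and iteration count T, and no stage-wise rank increase), not the tuned algorithm of BIB006 .

```python
import numpy as np

def proj_rank(M, r):
    """Project M onto the set of rank-r matrices via truncated SVD.
    Also return all singular values of M (used to set the threshold)."""
    U, sig, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * sig[:r]) @ Vt[:r, :], sig

def hard_threshold(M, zeta):
    """Zero out all entries of M with magnitude <= zeta."""
    return M * (np.abs(M) > zeta)

def altproj_rank1(M, beta, T=20):
    """Rank-1 stage of AltProj: alternate a rank-1 projection of M - S_hat
    with hard thresholding of M - L_hat; beta, T are illustrative choices."""
    sig = np.linalg.svd(M, compute_uv=False)
    S_hat = hard_threshold(M, beta * sig[0])          # initial outlier estimate
    for t in range(T):
        L_hat, sig = proj_rank(M - S_hat, 1)          # low-rank step
        # geometrically shrinking threshold, as in the stage-1 rule above
        S_hat = hard_threshold(M - L_hat, beta * (sig[1] + 0.5**t * sig[0]))
    return L_hat, S_hat
```

The general rank-r version would run this loop once per stage, replacing the rank-1 projection by proj_rank(., k) and the threshold by beta * (sig[k] + 0.5**t * sig[k-1]) in stage k (with sig 0-indexed).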
Static and Dynamic Robust PCA and Matrix Completion: A Review <s> C. Memory-Efficient Robust PCA (MERoP) via Recursive Projected Compressive Sensing: a non-convex and online solution <s> In this work, we focus on the problem of recursively recovering a time sequence of sparse signals, with time-varying sparsity patterns, from highly undersampled measurements corrupted by very large but correlated noise. It is assumed that the noise is correlated enough to have an approximately low rank covariance matrix that is either constant, or changes slowly, with time. We show how our recently introduced Recursive Projected CS (ReProCS) and modified-ReProCS ideas can be used to solve this problem very effectively. To the best of our knowledge, except for the recent work of dense error correction via l 1 minimization, which can handle another kind of large but “structured” noise (the noise needs to be sparse), none of the other works in sparse recovery have studied the case of any other kind of large noise. <s> BIB001 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> C. Memory-Efficient Robust PCA (MERoP) via Recursive Projected Compressive Sensing: a non-convex and online solution <s> This work studies the recursive robust principal components' analysis(PCA) problem. Here, "robust" refers to robustness to both independent and correlated sparse outliers. If the outlier is the signal-of-interest, this problem can be interpreted as one of recursively recovering a time sequence of sparse vectors, St, in the presence of large but structured noise, Lt. The structure that we assume on Lt is that Lt is dense and lies in a low dimensional subspace that is either fixed or changes "slowly enough". A key application where this problem occurs is in video surveillance where the goal is to separate a slowly changing background (Lt) from moving foreground objects (St) on-the-fly. To solve the above problem, we introduce a novel solution called Recursive Projected CS (ReProCS). Under mild assumptions, we show that, with high probability (w.h.p.), ReProCS can exactly recover the support set of St at all times; and the reconstruction errors of both St and Lt are upper bounded by a time-invariant and small value at all times. <s> BIB002 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> C. Memory-Efficient Robust PCA (MERoP) via Recursive Projected Compressive Sensing: a non-convex and online solution <s> Robust PCA (RPCA) is the problem of separating a given data matrix into the sum of a sparse matrix and a low-rank matrix. Static RPCA is the RPCA problem in which the subspace from which the true data is generated remains fixed over time. Dynamic RPCA instead assumes that the subspace can change with time, although usually the changes are slow. We propose a Recursive Projected Compressed Sensing based algorithm called MERoP (Memory-Efficient Robust PCA) to solve the static RPCA problem. A simple extension of MERoP has been shown in our other work to also solve the dynamic RPCA problem. To the best of our knowledge, MERoP is the first online solution for RPCA that is provably correct under mild assumptions on input data and requires no assumption on intermediate algorithm estimates. Moreover, MERoP enjoys nearly-optimal memory complexity and is almost as fast as vanilla SVD. We corroborate our theoretical claims through extensive numerical experiments on both synthetic data and real videos. <s> BIB003
ReProCS BIB003 is an even faster solution than AltProj that is also online (after a batch initialization applied to the first Cr frames) and memory-efficient. In fact, it has near-optimal memory complexity of O(n r_L log n log(1/ε)). Its time complexity of just O(n d r_L log(1/ε)) is the same as that of vanilla r-SVD (simple PCA). Moreover, after initialization, it also has the best outlier tolerance: it tolerates max-outlier-frac-row^α ∈ O(1). But the tradeoff is that it needs to assume that (i) all ℓ_t's lie in either a fixed subspace or a slowly changing one, and that (ii) (most) outlier magnitudes are lower bounded. As we explain later, both are natural assumptions for static-camera videos. It relies on the recursive projected compressive sensing (ReProCS) approach BIB001 , BIB002 , introduced originally to solve the dynamic RPCA problem. But, equivalently, it also solves the original RPCA problem with the extra assumption that the true data subspace is either fixed or changes slowly. The simplest ReProCS-based algorithm is explained and summarized as Algorithm 2 (ReProCS-NORST) given later in Sec. III-C. ReProCS starts with a rough estimate of the initial subspace (span(P_0)), obtainable using a batch technique applied to an initial short sequence. At each time t, it iterates between a robust regression step (that uses the columns of P̂_(t−1) as the regressors) to estimate s_t and ℓ_t, and a subspace update step (which updates the subspace estimate every α frames by solving a PCA problem using the last α ℓ̂_t's as input data).
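The per-frame recursion just described is easy to sketch. The snippet below shows one ReProCS-style iteration: the outlier s_t is estimated by ℓ1 minimization on the projected measurement (implemented here with a small ISTA loop), followed by support estimation and least-squares debiasing, after which ℓ̂_t = m_t − ŝ_t; the subspace estimate is refreshed by an r-SVD over the last α frames' ℓ̂_t's. The parameters lam, omega, n_ista, and the choice of ISTA as the ℓ1 solver, are illustrative assumptions rather than the tuned choices of BIB003 .

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator used by ISTA."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reprocs_step(m, P_hat, lam=0.1, omega=0.25, n_ista=200):
    """One ReProCS iteration: projected CS / robust regression for s_t,
    then l_hat = m - s_hat. lam, omega, n_ista are illustrative choices."""
    proj = lambda z: z - P_hat @ (P_hat.T @ z)       # apply I - P_hat P_hat'
    # ISTA for min_s lam*||s||_1 + ||proj(m - s)||^2; step size 1/2 is valid
    # because the gradient of the quadratic term is 2-Lipschitz
    s = np.zeros_like(m)
    for _ in range(n_ista):
        s = soft(s + proj(m - s), lam / 2)
    # support estimate, then least-squares debiasing restricted to the support
    T = np.abs(s) > omega
    A = proj(np.eye(len(m))[:, T])                   # projected identity columns
    s_hat = np.zeros_like(m)
    s_hat[T] = np.linalg.lstsq(A, proj(m), rcond=None)[0]
    return s_hat, m - s_hat                          # (s_hat, l_hat)

def subspace_update(L_hat_window, r):
    """Update the subspace estimate: top-r left singular vectors of the
    matrix whose columns are the last alpha estimates l_hat."""
    U, _, _ = np.linalg.svd(L_hat_window, full_matrices=False)
    return U[:, :r]
```

A full tracker would call reprocs_step on every frame and subspace_update once every α frames.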
Static and Dynamic Robust PCA and Matrix Completion: A Review <s> D. Projected Gradient Descent (RPCA-GD): another nonconvex batch solution <s> We consider the problem of Robust PCA in the the fully and partially observed settings. Without corruptions, this is the well-known matrix completion problem. From a statistical standpoint this problem has been recently well-studied, and conditions on when recovery is possible (how many observations do we need, how many corruptions can we tolerate) via polynomial-time algorithms is by now understood. This paper presents and analyzes a non-convex optimization approach that greatly reduces the computational complexity of the above problems, compared to the best available algorithms. In particular, in the fully observed case, with $r$ denoting rank and $d$ dimension, we reduce the complexity from $\mathcal{O}(r^2d^2\log(1/\varepsilon))$ to $\mathcal{O}(rd^2\log(1/\varepsilon))$ -- a big savings when the rank is big. For the partially observed case, we show the complexity of our algorithm is no more than $\mathcal{O}(r^4d \log d \log(1/\varepsilon))$. Not only is this the best-known run-time for a provable algorithm under partial observation, but in the setting where $r$ is small compared to $d$, it also allows for near-linear-in-$d$ run-time that can be exploited in the fully-observed case as well, by simply running our algorithm on a subset of the observations. <s> BIB001
An equally fast, but batch, solution that relies on projected gradient descent (GD), called RPCA-GD, was proposed in recent work BIB001 . If condition numbers are treated as constants, its complexity is O(n d r_L log(1/ε)). This is the same as that of the ReProCS solution given above, and hence also the same as that of vanilla r-SVD for simple PCA. To achieve this complexity, like ReProCS, RPCA-GD also needs an extra assumption: it needs a √r times tighter bound on outlier fractions than what AltProj needs. Projected GD is a natural heuristic for using GD to solve constrained optimization problems. To solve min_{x∈C} f(x), after each GD step, projected GD projects the output onto the set C before moving on to the next iteration. RPCA-GD rewrites L as L = Ũ Ṽ′, where Ũ, Ṽ are n × r and d × r matrices respectively. At each iteration, RPCA-GD performs one projected GD step for each of Ũ, Ṽ, and S. For Ũ, Ṽ the "projection" is onto the space of µ-incoherent matrices, while for S it is onto the space of sparse matrices. Corollary 1 of BIB001 guarantees the following: if, in addition to the incoherence assumptions, max(max-outlier-frac-row, max-outlier-frac-col) ≤ c/(µ r_L^{1.5}) (treating condition numbers as constants), then the output after O(log(1/ε)) iterations satisfies ‖L̂ − L‖_F ≤ ε. RPCA-GD needs time of order O(n d r_L log(1/ε)) and memory of O(nd) to achieve the above error.
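For intuition, here is a minimal sketch of one RPCA-GD-style iteration on the factored objective. Two simplifications are assumptions of this sketch: the sparse projection keeps the largest-magnitude fraction of entries per column only (BIB001 sorts along both rows and columns), and the incoherence projection of the factors, as well as the factor-balancing term, are omitted; eta is an illustrative step size.

```python
import numpy as np

def sparse_project_cols(M, frac):
    """Keep the largest-magnitude `frac` fraction of entries in each column
    and zero the rest (simplified version of the RPCA-GD sparsification)."""
    k = max(1, int(frac * M.shape[0]))
    S = np.zeros_like(M)
    rows = np.argsort(-np.abs(M), axis=0)[:k, :]
    cols = np.arange(M.shape[1])
    S[rows, cols] = M[rows, cols]
    return S

def rpca_gd_iter(M, U, V, frac, eta=0.1):
    """One projected-GD step on f(U, V) = ||M - S - U V'||_F^2, with S
    re-estimated by sparse projection; eta is an illustrative step size."""
    S = sparse_project_cols(M - U @ V.T, frac)   # outlier re-estimation
    R = M - S - U @ V.T                          # residual
    U_new = U + eta * R @ V                      # gradient steps (constants
    V_new = V + eta * R.T @ U                    # absorbed into eta)
    # a full implementation would also project U_new, V_new onto the set of
    # row-norm-bounded (incoherent) matrices; omitted here for brevity
    return U_new, V_new, S
```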
Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Modeling time-varying subspaces <s> The mathematical problem of approximating one matrix by another of lower rank is closely related to the fundamental postulate of factor-theory. When formulated as a least-squares problem, the normal equations cannot be immediately written down, since the elements of the approximate matrix are not independent of one another. The solution of the problem is simplified by first expressing the matrices in a canonic form. It is found that the problem always has a solution which is usually unique. Several conclusions can be drawn from the form of this solution. <s> BIB001 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Modeling time-varying subspaces <s> Subspace estimation plays an important role in a variety of modern signal processing applications. We present a new approach for tracking the signal subspace recursively. It is based on a novel interpretation of the signal subspace as the solution of a projection like unconstrained minimization problem. We show that recursive least squares techniques can be applied to solve this problem by making an appropriate projection approximation. The resulting algorithms have a computational complexity of O(nr) where n is the input vector dimension and r is the number of desired eigencomponents. Simulation results demonstrate that the tracking capability of these algorithms is similar to and in some cases more robust than the computationally expensive batch eigenvalue decomposition. <s> BIB002 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Modeling time-varying subspaces <s> Subspace tracking plays an important role in a variety of adaptive subspace methods. In this paper, we present a theoretical convergence analysis of two recently proposed projection approximation subspace tracking algorithms (PAST and PASTd). By invoking Ljung's ordinary differential equation approach, we derive a pair of coupled matrix differential equations, whose trajectories describe the asymptotic convergence behavior of the subspace tracking algorithms. We discuss properties of the matrix differential equations and determine their asymptotically stable equilibrium states and domain of attraction. It turns out that, under weak conditions, both PAST and PASTd globally converge to the desired signal subspace or signal eigenvectors and eigenvalues with probability one. Numerical examples are also included to illustrate the asymptotic convergence rate of the algorithms. <s> BIB003 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Modeling time-varying subspaces <s> We study the problem of reconstructing a sparse signal from a limited number of its linear projections when a part of its support is known. This may be available from prior knowledge. Alternatively, in a problem of recursively reconstructing time sequences of sparse spatial signals, one may use the support estimate from the previous time instant as the “known” part of the support. The idea of our solution (modified-CS) is to solve a convex relaxation of the following problem: find the signal that satisfies the data constraint and whose support contains the smallest number of new additions to the known support. We obtain sufficient conditions for exact reconstruction using modified-CS.
These turn out to be much weaker than those needed for CS, particularly when the known part of the support is large compared to the unknown part. <s> BIB004 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Modeling time-varying subspaces <s> Chapter 4, Subspace Tracking for Signal Processing: linear algebra review, observation model and problem statement, subspace tracking, eigenvectors tracking, and convergence and performance analysis issues. <s> BIB005 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Modeling time-varying subspaces <s> This work presents GROUSE (Grassmanian Rank-One Update Subspace Estimation), an efficient online algorithm for tracking subspaces from highly incomplete observations. GROUSE requires only basic linear algebraic manipulations at each iteration, and each subspace update can be performed in linear time in the dimension of the subspace.
The algorithm is derived by analyzing incremental gradient descent on the Grassmannian manifold of subspaces. With a slight modification, GROUSE can also be used as an online incremental algorithm for the matrix completion problem of imputing missing entries of a low-rank matrix. GROUSE performs exceptionally well in practice both in tracking subspaces and as an online algorithm for matrix completion. <s> BIB006 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Modeling time-varying subspaces <s> This work studies the recursive robust principal components' analysis(PCA) problem. Here, "robust" refers to robustness to both independent and correlated sparse outliers. If the outlier is the signal-of-interest, this problem can be interpreted as one of recursively recovering a time sequence of sparse vectors, St, in the presence of large but structured noise, Lt. The structure that we assume on Lt is that Lt is dense and lies in a low dimensional subspace that is either fixed or changes "slowly enough". A key application where this problem occurs is in video surveillance where the goal is to separate a slowly changing background (Lt) from moving foreground objects (St) on-the-fly. To solve the above problem, we introduce a novel solution called Recursive Projected CS (ReProCS). Under mild assumptions, we show that, with high probability (w.h.p.), ReProCS can exactly recover the support set of St at all times; and the reconstruction errors of both St and Lt are upper bounded by a time-invariant and small value at all times. <s> BIB007 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Modeling time-varying subspaces <s> Many real world datasets exhibit an embedding of low-dimensional structure in a high-dimensional manifold. Examples include images, videos and internet traffic data. It is of great significance to estimate and track the low-dimensional structure with small storage requirements and computational complexity when the data dimension is high. Therefore we consider the problem of reconstructing a data stream from a small subset of its entries, where the data is assumed to lie in a low-dimensional linear subspace, possibly corrupted by noise. We further consider tracking the change of the underlying subspace, which can be applied to applications such as video denoising, network monitoring and anomaly detection. Our setting can be viewed as a sequential low-rank matrix completion problem in which the subspace is learned in an online fashion. The proposed algorithm, dubbed Parallel Estimation and Tracking by REcursive Least Squares (PETRELS), first identifies the underlying low-dimensional subspace, and then reconstructs the missing entries via least-squares estimation if required. Subspace identification is performed via a recursive procedure for each row of the subspace matrix in parallel with discounting for previous observations. Numerical examples are provided for direction-of-arrival estimation and matrix completion, comparing PETRELS with state of the art batch algorithms. <s> BIB008 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Modeling time-varying subspaces <s> GROUSE (Grassmannian Rank-One Update Subspace Estimation) is an iterative algorithm for identifying a linear subspace of R^n from data consisting of partial observations of random vectors from that subspace. 
This paper examines local convergence properties of GROUSE, under assumptions on the randomness of the observed vectors, the randomness of the subset of elements observed at each iteration, and incoherence of the subspace with the coordinate directions. Convergence at an expected linear rate is demonstrated under certain assumptions. The case in which the full random vector is revealed at each iteration allows for much simpler analysis, and is also described. GROUSE is related to incremental SVD methods and to gradient projection algorithms in optimization. <s> BIB009 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Modeling time-varying subspaces <s> In recent work, robust Principal Components Analysis (PCA) has been posed as a problem of recovering a low-rank matrix $\mathbf{L}$ and a sparse matrix $\mathbf{S}$ from their sum, $\mathbf{M}:= \mathbf{L} + \mathbf{S}$ and a provably exact convex optimization solution called PCP has been proposed. This work studies the following problem. Suppose that we have partial knowledge about the column space of the low rank matrix $\mathbf{L}$. Can we use this information to improve the PCP solution, i.e. allow recovery under weaker assumptions? We propose here a simple but useful modification of the PCP idea, called modified-PCP, that allows us to use this knowledge. We derive its correctness result which shows that, when the available subspace knowledge is accurate, modified-PCP indeed requires significantly weaker incoherence assumptions than PCP. Extensive simulations are also used to illustrate this. Comparisons with PCP and other existing work are shown for a stylized real application as well. Finally, we explain how this problem naturally occurs in many applications involving time series data, i.e. in what is called the online or recursive robust PCA problem. A corollary for this case is also given. <s> BIB010 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Modeling time-varying subspaces <s> In this work, we study the online robust principal components' analysis (RPCA) problem. In recent work, RPCA has been defined as a problem of separating a low-rank matrix (true data), $L$, and a sparse matrix (outliers), $S$, from their sum, $M:=L + S$. A more general version of this problem is to recover $L$ and $S$ from $M:=L + S + W$ where $W$ is the matrix of unstructured small noise/corruptions. An important application where this problem occurs is in video analytics in trying to separate sparse foregrounds (e.g., moving objects) from slowly changing backgrounds. While there has been a large amount of recent work on solutions and guarantees for the batch RPCA problem, the online problem is largely open."Online" RPCA is the problem of doing the above on-the-fly with the extra assumptions that the initial subspace is accurately known and that the subspace from which $l_t$ is generated changes slowly over time. We develop and study a novel "online" RPCA algorithm based on the recently introduced Recursive Projected Compressive Sensing (ReProCS) framework. Our algorithm improves upon the original ReProCS algorithm and it also returns even more accurate offline estimates. The key contribution of this work is a correctness result (complete performance guarantee) for this algorithm under reasonably mild assumptions. 
By using extra assumptions -- accurate initial subspace knowledge, slow subspace change, and clustered eigenvalues -- we are able to remove one important limitation of batch RPCA results and two key limitations of a recent result for ReProCS for online RPCA. To our knowledge, this work is among the first few correctness results for online RPCA. Most earlier results were only partial results, i.e., they required an assumption on intermediate algorithm estimates. <s> BIB011 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> A. Modeling time-varying subspaces <s> In this work, we study the robust subspace tracking (RST) problem and obtain one of the first two provable guarantees for it. The goal of RST is to track sequentially arriving data vectors that lie in a slowly changing low-dimensional subspace, while being robust to corruption by additive sparse outliers. It can also be interpreted as a dynamic (time-varying) extension of robust PCA (RPCA), with the minor difference that RST also requires a short tracking delay. We develop a recursive projected compressive sensing algorithm that we call Nearly Optimal RST via ReProCS (ReProCS-NORST) because its tracking delay is nearly optimal. We prove that NORST solves both the RST and the dynamic RPCA problems under weakened standard RPCA assumptions, two simple extra assumptions (slow subspace change and most outlier magnitudes lower bounded), and a few minor assumptions. Our guarantee shows that NORST enjoys a near optimal tracking delay of $O(r \log n \log(1/\epsilon))$. Its required delay between subspace change times is the same, and its memory complexity is $n$ times this value. Thus both these are also nearly optimal. Here $n$ is the ambient space dimension, $r$ is the subspaces' dimension, and $\epsilon$ is the tracking accuracy. NORST also has the best outlier tolerance compared with all previous RPCA or RST methods, both theoretically and empirically (including for real videos), without requiring any model on how the outlier support is generated. This is possible because of the extra assumptions it uses. <s> BIB012
Even though the RPCA problem has been extensively studied in the last decade (as discussed above), there has been only a small amount of work on provable dynamic RPCA and robust subspace tracking (RST) BIB007 , BIB011 , BIB012 . The subspace tracking (ST) problem (without outliers), with or without missing data, has been studied for much longer in both the control theory and the adaptive signal processing literature BIB002 , BIB003 , BIB005 , BIB006 , BIB008 , BIB009 . However, all existing guarantees are asymptotic results for the statistically stationary setting of data being generated from a single unknown subspace. Moreover, most of these also make assumptions on intermediate algorithm estimates; see Sec. V-B. Of course, as explained earlier, the most general nonstationary model that allows the subspace to change at each time is not even identifiable, since at least r data points are needed to compute an r-dimensional subspace even in the noise-free full-data setting. In recent work BIB007 , BIB011 , BIB012 , BIB010 , the authors have made the tracking problem identifiable by assuming a piecewise constant model on subspace change. With this, they are able to show in BIB012 that it is possible to track the changing subspace to within ε accuracy as long as the subspace remains constant for at least Cr log n log(1/ε) frames at a time, and some other assumptions hold. Here and elsewhere the letters c and C are reused to denote different numerical constants in each use. We describe this work below. B. Modified-PCP: Robust PCA with partial subspace knowledge A simple extension of PCP, called modified-PCP, provides a nice solution to the problem of RPCA with partial subspace knowledge BIB010 . In many applications, e.g., face recognition, some training data for face images taken in controlled environments (with eyeglasses removed and no shadows) is typically available. This allows one to get "an" estimate of the faces' subspace that serves as the partial subspace knowledge. To understand the modified-PCP idea, let G denote the basis matrix for this partial subspace knowledge. If G is such that the matrix (I − GG′)L has rank significantly smaller than r_L, then the following is a better idea than PCP: min_{L̃,S̃} ‖(I − GG′)L̃‖_* + λ‖S̃‖_1 subject to L̃ + S̃ = M. The above solution was called modified-PCP because it was inspired by a similar idea, called modified-CS BIB004 , that solves the problem of compressive sensing (or sparse recovery) when partial support knowledge is available. More generally, even if only the approximate rank of (I − GG′)L is much smaller, i.e., suppose that L = GA + L_new + W where L_new has rank r_new ≪ r_L and ‖W‖_F ≤ ε is small, the following simple change to the same idea works: min_{Ã,L̃_new,S̃} ‖L̃_new‖_* + λ‖S̃‖_1 subject to ‖M − GÃ − L̃_new − S̃‖_F ≤ ε, and output L̂ = GÂ + L̂_new. We should mention that the same type of idea can also be used to obtain a Modified-AltProj or a Modified-NO-RMC algorithm. As with PCP, these will be significantly faster than the above modified-PCP convex program. When solving the dynamic RPCA problem using modified-PCP, the subspace estimate from the previous set of α frames serves as the partial subspace knowledge for the current set. It is initialized using PCP. It has the following guarantee BIB010 . Theorem 3.7. Consider the dynamic RPCA problem with w_t = 0 (no unstructured noise). Recover L_j using the column space estimate of L_j−1 as the partial subspace knowledge G and by solving the modified-PCP program given above.
With probability at least 1 − cn^{−10}, L̂ = L if the t_j's are known (or detected using the ReProCS idea), and the following hold: 1) L_0 is correctly recovered using PCP; 2) the subspace changes as P_j = [P_j−1, P_j,new], where P_j,new has r_new columns that are orthogonal to P_j−1, followed by removing some directions; 3) left incoherence holds for the P_j's; right incoherence holds for V_j,new; and strong incoherence holds for the pair (P_j, V_j,new), where V_j,new is the matrix of the last r_new columns of V_j; 4) the support of S is generated uniformly at random.
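Since modified-PCP is a convex program, it can be prototyped for small problems with a generic solver. The cvxpy sketch below implements the first (noise-free) formulation given above; the default weight lam = 1/sqrt(max(n, d)) mirrors the usual PCP choice and is an illustrative assumption, not the tuned value from the modified-PCP guarantee.

```python
import numpy as np
import cvxpy as cp

def modified_pcp(M, G, lam=None):
    """Modified-PCP sketch: min ||(I - G G')L||_* + lam*||S||_1  s.t. L + S = M,
    where G holds the partial subspace knowledge (n x r_G, orthonormal columns)."""
    n, d = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(n, d))       # illustrative PCP-style weight
    P_perp = np.eye(n) - G @ G.T             # projector orthogonal to span(G)
    L = cp.Variable((n, d))
    S = cp.Variable((n, d))
    objective = cp.Minimize(cp.normNuc(P_perp @ L) + lam * cp.sum(cp.abs(S)))
    cp.Problem(objective, [L + S == M]).solve()
    return L.value, S.value
```

For dynamic RPCA, G would be the column-space estimate computed from the previous L̂_{j−1}, exactly as in Theorem 3.7.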
Static and Dynamic Robust PCA and Matrix Completion: A Review <s> C. Recursive Projected Compressive Sensing (ReProCS) <s> This paper has been withdrawn due to a critical error near equation (71). This error causes the entire argument of the paper to collapse. Emmanuel Candes of Stanford discovered the error, and has suggested a correct analysis, which will be reported in a separate publication. <s> BIB001 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> C. Recursive Projected Compressive Sensing (ReProCS) <s> In the recent work of Candes et al, the problem of recovering low rank matrix corrupted by i.i.d. sparse outliers is studied and a very elegant solution, principal component pursuit, is proposed. It is motivated as a tool for video surveillance applications with the background image sequence forming the low rank part and the moving objects/persons/abnormalities forming the sparse part. Each image frame is treated as a column vector of the data matrix made up of a low rank matrix and a sparse corruption matrix. Principal component pursuit solves the problem under the assumptions that the singular vectors of the low rank matrix are spread out and the sparsity pattern of the sparse matrix is uniformly random. However, in practice, usually the sparsity pattern and the signal values of the sparse part (moving persons/objects) change in a correlated fashion over time, for e.g., the object moves slowly and/or with roughly constant velocity. This will often result in a low rank sparse matrix. For video surveillance applications, it would be much more useful to have a real-time solution. In this work, we study the online version of the above problem and propose a solution that automatically handles correlated sparse outliers. The key idea of this work is as follows. Given an initial estimate of the principal directions of the low rank part, we causally keep estimating the sparse part at each time by solving a noisy compressive sensing type problem. The principal directions of the low rank part are updated every-so-often. In between two update times, if new Principal Components' directions appear, the "noise" seen by the Compressive Sensing step may increase. This problem is solved, in part, by utilizing the time correlation model of the low rank part. We call the proposed solution "Real-time Robust Principal Components' Pursuit". <s> BIB002 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> C. Recursive Projected Compressive Sensing (ReProCS) <s> This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the L1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well.
We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces. <s> BIB003 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> C. Recursive Projected Compressive Sensing (ReProCS) <s> This work studies the recursive robust principal components' analysis(PCA) problem. Here, "robust" refers to robustness to both independent and correlated sparse outliers. If the outlier is the signal-of-interest, this problem can be interpreted as one of recursively recovering a time sequence of sparse vectors, St, in the presence of large but structured noise, Lt. The structure that we assume on Lt is that Lt is dense and lies in a low dimensional subspace that is either fixed or changes "slowly enough". A key application where this problem occurs is in video surveillance where the goal is to separate a slowly changing background (Lt) from moving foreground objects (St) on-the-fly. To solve the above problem, we introduce a novel solution called Recursive Projected CS (ReProCS). Under mild assumptions, we show that, with high probability (w.h.p.), ReProCS can exactly recover the support set of St at all times; and the reconstruction errors of both St and Lt are upper bounded by a time-invariant and small value at all times. <s> BIB004 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> C. Recursive Projected Compressive Sensing (ReProCS) <s> This paper designs and extensively evaluates an online algorithm, called practical recursive projected compressive sensing (Prac-ReProCS), for recovering a time sequence of sparse vectors S_t and a time sequence of dense vectors L_t from their sum, M_t := S_t + L_t, when the L_t's lie in a slowly changing low-dimensional subspace of the full space. A key application where this problem occurs is in real-time video layering where the goal is to separate a video sequence into a slowly changing background sequence and a sparse foreground sequence that consists of one or more moving regions/objects on-the-fly. Prac-ReProCS is a practical modification of its theoretical counterpart which was analyzed in our recent work. Extension to the undersampled case is also developed. Extensive experimental comparisons demonstrating the advantage of the approach for both simulated and real videos, over existing batch and recursive methods, are shown. <s> BIB005 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> C. Recursive Projected Compressive Sensing (ReProCS) <s> We propose a new method for robust PCA -- the task of recovering a low-rank matrix from sparse corruptions that are of unknown value and support. Our method involves alternating between projecting appropriate residuals onto the set of low-rank matrices, and the set of sparse matrices; each projection is {\em non-convex} but easy to compute. In spite of this non-convexity, we establish exact recovery of the low-rank matrix, under the same conditions that are required by existing methods (which are based on convex optimization). For an $m \times n$ input matrix ($m \leq n$), our method has a running time of $O(r^2mn)$ per iteration, and needs $O(\log(1/\epsilon))$ iterations to reach an accuracy of $\epsilon$.
This is close to the running time of simple PCA via the power method, which requires $O(rmn)$ per iteration, and $O(\log(1/\epsilon))$ iterations. In contrast, existing methods for robust PCA, which are based on convex optimization, have $O(m^2n)$ complexity per iteration, and take $O(1/\epsilon)$ iterations, i.e., exponentially more iterations for the same accuracy. Experiments on both synthetic and real data establish the improved speed and accuracy of our method over existing convex implementations. <s> BIB006 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> C. Recursive Projected Compressive Sensing (ReProCS) <s> This work studies two interrelated problems - online robust PCA (RPCA) and online low-rank matrix completion (MC). In recent work by Cand\`{e}s et al., RPCA has been defined as a problem of separating a low-rank matrix (true data), $L:=[\ell_1, \ell_2, \dots \ell_{t}, \dots , \ell_{t_{\max}}]$ and a sparse matrix (outliers), $S:=[x_1, x_2, \dots x_{t}, \dots, x_{t_{\max}}]$ from their sum, $M:=L+S$. Our work uses this definition of RPCA. An important application where both these problems occur is in video analytics in trying to separate sparse foregrounds (e.g., moving objects) and slowly changing backgrounds. While there has been a large amount of recent work on both developing and analyzing batch RPCA and batch MC algorithms, the online problem is largely open. In this work, we develop a practical modification of our recently proposed algorithm to solve both the online RPCA and online MC problems. The main contribution of this work is that we obtain correctness results for the proposed algorithms under mild assumptions. The assumptions that we need are: (a) a good estimate of the initial subspace is available (easy to obtain using a short sequence of background-only frames in video surveillance); (b) the $\ell_t$'s obey a `slow subspace change' assumption; (c) the basis vectors for the subspace from which $\ell_t$ is generated are dense (non-sparse); (d) the support of $x_t$ changes by at least a certain amount at least every so often; and (e) algorithm parameters are appropriately set <s> BIB007 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> C. Recursive Projected Compressive Sensing (ReProCS) <s> In this work, we study the online robust principal components' analysis (RPCA) problem. In recent work, RPCA has been defined as a problem of separating a low-rank matrix (true data), $L$, and a sparse matrix (outliers), $S$, from their sum, $M:=L + S$. A more general version of this problem is to recover $L$ and $S$ from $M:=L + S + W$ where $W$ is the matrix of unstructured small noise/corruptions. An important application where this problem occurs is in video analytics in trying to separate sparse foregrounds (e.g., moving objects) from slowly changing backgrounds. While there has been a large amount of recent work on solutions and guarantees for the batch RPCA problem, the online problem is largely open. "Online" RPCA is the problem of doing the above on-the-fly with the extra assumptions that the initial subspace is accurately known and that the subspace from which $l_t$ is generated changes slowly over time. We develop and study a novel "online" RPCA algorithm based on the recently introduced Recursive Projected Compressive Sensing (ReProCS) framework. Our algorithm improves upon the original ReProCS algorithm and it also returns even more accurate offline estimates.
The key contribution of this work is a correctness result (complete performance guarantee) for this algorithm under reasonably mild assumptions. By using extra assumptions -- accurate initial subspace knowledge, slow subspace change, and clustered eigenvalues -- we are able to remove one important limitation of batch RPCA results and two key limitations of a recent result for ReProCS for online RPCA. To our knowledge, this work is among the first few correctness results for online RPCA. Most earlier results were only partial results, i.e., they required an assumption on intermediate algorithm estimates. <s> BIB008 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> C. Recursive Projected Compressive Sensing (ReProCS) <s> Dynamic robust PCA refers to the dynamic (time-varying) extension of the robust PCA (RPCA) problem. It assumes that the true (uncorrupted) data lies in a low-dimensional subspace that can change with time, albeit slowly. The goal is to track this changing subspace over time in the presence of sparse outliers. This work provides the first guarantee for dynamic RPCA that holds under weakened standard RPCA assumptions, slow subspace change and two mild assumptions. We analyze a simple algorithm based on the Recursive Projected Compressive Sensing (ReProCS) framework. Our result is significant because (i) it removes the strong assumptions needed by the two previous complete guarantees for ReProCS-based algorithms; (ii) it shows that it is possible to achieve significantly improved outlier tolerance than all existing provable RPCA methods by exploiting slow subspace change and a lower bound on outlier magnitudes; and (iii) it proves that the proposed algorithm is online, fast, and memory-efficient. <s> BIB009 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> C. Recursive Projected Compressive Sensing (ReProCS) <s> In this work, we study the robust subspace tracking (RST) problem and obtain one of the first two provable guarantees for it. The goal of RST is to track sequentially arriving data vectors that lie in a slowly changing low-dimensional subspace, while being robust to corruption by additive sparse outliers. It can also be interpreted as a dynamic (time-varying) extension of robust PCA (RPCA), with the minor difference that RST also requires a short tracking delay. We develop a recursive projected compressive sensing algorithm that we call Nearly Optimal RST via ReProCS (ReProCS-NORST) because its tracking delay is nearly optimal. We prove that NORST solves both the RST and the dynamic RPCA problems under weakened standard RPCA assumptions, two simple extra assumptions (slow subspace change and most outlier magnitudes lower bounded), and a few minor assumptions. Our guarantee shows that NORST enjoys a near optimal tracking delay of $O(r \log n \log(1/\epsilon))$. Its required delay between subspace change times is the same, and its memory complexity is $n$ times this value. Thus both these are also nearly optimal. Here $n$ is the ambient space dimension, $r$ is the subspaces' dimension, and $\epsilon$ is the tracking accuracy. NORST also has the best outlier tolerance compared with all previous RPCA or RST methods, both theoretically and empirically (including for real videos), without requiring any model on how the outlier support is generated. This is possible because of the extra assumptions it uses. <s> BIB010
The ReProCS code is at http://www.ece.iastate.edu/~hanguo/PracReProCS.html#Code_ and the simple version (which comes with guarantees) is at https://github.com/praneethmurthy/ReProCS. In BIB002 , BIB004 , a novel solution framework called Recursive Projected Compressive Sensing (ReProCS) was introduced to solve the dynamic RPCA and the robust subspace tracking (RST) problems. In later works BIB007 , BIB008 , BIB009 , BIB010 , this was shown to be provably correct under progressively weaker assumptions. Under two simple extra assumptions (beyond those needed by standard RPCA), after a coarse initialization computed using PCP or AltProj applied to only the first Cr samples, ReProCS-based algorithms provide an online, fast, and very memory-efficient (in fact, nearly memory-optimal) tracking solution that also has significantly improved outlier tolerance compared to all solutions for RPCA. By "tracking solution" we mean that it can detect and track a change in subspace within a short delay. The extra assumptions are: (i) slow subspace change (or fixed subspace); and (ii) outlier magnitudes are either all large enough, or most are large enough and the rest are very small. (i) is a natural assumption for static-camera videos since they do not involve sudden scene changes. (ii) is also a simple requirement because, by definition, an "outlier" is a large magnitude corruption. The small magnitude ones can be clubbed with the small noise w_t. The other assumptions needed by ReProCS for RST are the same as, or similar to, the standard ones needed for RPCA identifiability (described earlier). The union of the column spans of all the P_j's is equal to the span of the left singular vectors of L. Thus, left incoherence is equivalent to assuming that the P_j's are µ-incoherent. We replace the right singular vectors' incoherence by an independent identically distributed (i.i.d.) assumption on the a_t's (in fact, something weaker than identically distributed suffices: the a_t's need only have the same mean and covariance matrix for all times t), along with element-wise boundedness. As explained in BIB010 , the two assumptions are similar; the latter is more suited for RST, which involves either frame-by-frame processing or operations on mini-batches of the full data. Moreover, since RST algorithms are online, we also need to re-define max-outlier-frac-row as follows. Definition 3.8. Let max-outlier-frac-row^α be the maximum nonzero fraction per row in any α-consecutive-column submatrix of S. Here α is the mini-batch size used by ReProCS. We use max-outlier-frac-col as defined earlier. Using the outlier support size, it is also equal to max_t |T_t|/n. We describe here the most recently studied ReProCS algorithm, Nearly Optimal RST via ReProCS or ReProCS-NORST BIB010 . This is also the simplest and has the best guarantees. It starts with a "good" estimate of the initial subspace, which is obtained by C log r iterations of AltProj applied to M_[1,t_train] with t_train = Cr. It then iterates between (a) Projected Compressive Sensing (approximate Robust Regression), in order to estimate the sparse outliers s_t, and then ℓ_t as ℓ̂_t = m_t − ŝ_t; and (b) Subspace Update, which updates the subspace estimate P̂_(t). Subspace update is done once every α frames. At each update time, it toggles between the "detect" and "update" phases. In the detect phase, the current subspace has been accurately updated, and the algorithm is only checking if the subspace has changed. Let P̂ = P̂_(t−1). This is done by checking if the maximum singular value of (I − P̂P̂′)[ℓ̂_{t−α+1}, ℓ̂_{t−α+2}, . . . , ℓ̂_t] is above a threshold. Suppose the change is detected at t̂_j.
At this time, the algorithm enters the "update" phase. This involves obtaining improved estimates of the new subspace by K steps of r-SVD, each done with a new set of α samples of ℓ̂_t. At t = t̂_j + Kα, the algorithm again enters the "detect" phase. We summarize an easy-to-understand version of the algorithm in Algorithm 2. The simplest projected CS (robust regression) step consists of ℓ1 minimization followed by support recovery and LS estimation as in Algorithm 2. However, this can be replaced by any other CS solution, including those that exploit structured sparsity (assuming the outliers have structure). As explained in detail in , projected CS is equivalent to solving the robust regression (RR) problem with a sparsity model on the outliers. To understand this simply, let P̂ = P̂_(t−1). The exact version of robust regression assumes that the data vector m_t equals P̂a_t + s_t, while its approximate version assumes that this is only approximately true. Since P̂ is only an approximation to P_(t), even in the absence of unstructured noise w_t, approximate RR is the correct problem to solve for our setting. Approximate RR solves min_{a,s} λ‖s‖_1 + ‖m_t − P̂a − s‖². In this, one can solve for a in closed form to get â = P̂′(m_t − s). Substituting this, approximate RR simplifies to min_s λ‖s‖_1 + ‖(I − P̂P̂′)(m_t − s)‖². This is the same as the Lagrangian version of projected CS. The version for which we obtain guarantees solves min_s ‖s‖_1 s.t. ‖(I − P̂P̂′)(m_t − s)‖² ≤ ξ², but we could also have used other CS results and obtained guarantees for the Lagrangian version with minimal changes. The guarantee for ReProCS-NORST says the following BIB010 . To keep the statement simple, the condition number f of E[a_t a_t′] is treated as a numerical constant. Theorem 3.9. Consider ReProCS-NORST. Let α := Cr log n, Λ := E[a_t a_t′], λ⁺ := λ_max(Λ), λ⁻ := λ_min(Λ), and let s_min := min_t min_{i∈T_t} |(s_t)_i| denote the minimum outlier magnitude. Pick an ε ≤ min(0.01, 0.4 min_j SE(P_{j−1}, P_j)²/f). If 1) the P_j's are µ-incoherent; and the a_t's are zero mean, mutually independent over time t, have identical covariance matrices, i.e., E[a_t a_t′] = Λ, are element-wise uncorrelated (Λ is diagonal), are element-wise bounded (for a numerical constant η, (a_t)_i² ≤ η λ_i(Λ)), and are independent of all outlier supports T_t; 2) the w_t's are zero mean, mutually independent, and independent of s_t, ℓ_t; 3) max-outlier-frac-col ≤ c_1/(µr), max-outlier-frac-row^α ≤ c_2; 4) subspace change: let ∆ := max_j SE(P_{j−1}, P_j); a) t_{j+1} − t_j > Cr log n log(1/ε), and b) ∆ ≤ 0.8 and C√(rλ⁺)(∆ + 2ε) ≤ s_min; and algorithm parameters are appropriately set (this requires knowledge of λ⁺, λ⁻, r, s_min); then, with high probability, the subspace error decays geometrically after each change: SE(P̂_(t), P_(t)) ≤ (ε + ∆) for t ∈ [t_j, t̂_j + α), SE(P̂_(t), P_(t)) ≤ 0.3^{k−1}(ε + ∆) for t ∈ [t̂_j + (k−1)α, t̂_j + kα), and SE(P̂_(t), P_(t)) ≤ ε for t ∈ [t̂_j + Kα + α, t_{j+1}), where K := C log(1/ε). Memory complexity is O(nr log n log(1/ε)) and time complexity is O(ndr log(1/ε)). (The required initialization can be obtained by applying C log r iterations of AltProj BIB006 on the first Cr data samples, assuming that these have outlier fractions in any row or column bounded by c/r.) Under Theorem 3.9 assumptions, the following also hold: 1) at all times t, ‖ŝ_t − s_t‖ = ‖ℓ̂_t − ℓ_t‖ ≤ C(SE(P̂_(t), P_(t)) + ε)‖ℓ_t‖, with SE(P̂_(t), P_(t)) bounded as above; 2) at all times t, T̂_t = T_t; 3) t_j ≤ t̂_j ≤ t_j + 2α; 4) Offline-NORST: SE(P̂_(t)^offline, P_(t)) ≤ ε, and ‖ŝ_t^offline − s_t‖ = ‖ℓ̂_t^offline − ℓ_t‖ ≤ ε‖ℓ_t‖ at all t. Remark 3.10. The outlier magnitudes lower bound assumption of Theorem 3.9 can be relaxed to a lower bound on most outlier magnitudes. In particular, the following suffices: assume that s_t can be split as s_t = (s_t)_small + (s_t)_large
such that, in the k-th subspace update interval, ‖(s_t)_small‖ ≤ 0.3^{k−1}(ε + ∆)√(rλ⁺) and the smallest nonzero entry of (s_t)_large is larger than C·0.3^{k−1}(ε + ∆)√(rλ⁺). If there were a way to bound the element-wise error of the CS step (instead of the ℓ2 norm error), the above requirement could be relaxed further. A key advantage of ReProCS-NORST is that it automatically detects and tracks subspace changes, and both are done relatively quickly. Theorem 3.9 shows that, with high probability (whp), the subspace change gets detected within a delay of at most 2α = C(r log n) time instants, and the subspace gets estimated to ε error within at most (K + 2)α = C(r log n) log(1/ε) time instants. Observe that both are nearly optimal since r is the number of samples needed to even specify an r-dimensional subspace. The same is also true for the recovery error of s_t and ℓ_t. If offline processing is allowed, with a delay of at most C(r log n) log(1/ε) samples, we can guarantee all recoveries within normalized error ε. Theorem 3.9 also shows that ReProCS-NORST tolerates a constant maximum fraction of outliers per row (after initialization), without making any assumption on how the outliers are generated. We explain in Sec. III-D why this is possible. This is better than what all other RPCA solutions allow: all either need this fraction to be O(1/r_L) or assume that the outlier support is uniformly random. The same is true for its memory complexity, which is almost d/r times better than all others. We should clarify that NORST allows the maximum fraction of outliers per row to be O(1), but this does not necessarily imply that the number of outliers in each row can be this high, because it allows the fraction per column to be only O(1/r). Thus, for a matrix of size n × α, it allows the total number of outliers to be O(min(nα, nα/r)) = O(nα/r).
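The detect-phase rule described in this subsection is a one-liner to prototype: project the most recent α estimates ℓ̂_t orthogonal to the current subspace estimate and compare the top singular value of the result against a threshold. The threshold below is left to the caller (an assumption of this sketch); per the discussion above, setting it as in BIB010 requires knowledge of quantities such as λ⁺.

```python
import numpy as np

def detect_subspace_change(L_hat_window, P_hat, thresh):
    """NORST-style detection sketch: flag a change when the largest singular
    value of (I - P_hat P_hat') [l_hat_{t-alpha+1}, ..., l_hat_t] exceeds
    thresh. Choosing thresh is left to the caller (an assumption here)."""
    B = L_hat_window - P_hat @ (P_hat.T @ L_hat_window)
    return np.linalg.norm(B, 2) > thresh    # spectral norm = top singular value
```

After a detection at t̂_j, one would run K subspace updates (an r-SVD on each successive α-frame window of ℓ̂_t's) before returning to the detect phase.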
ReProCS-NORST has the above advantages only if a few extra assumptions hold. The first is element-wise boundedness of the $a_t$'s. This, along with mutual independence and identical covariance matrices of the $a_t$'s, is similar to the right incoherence assumption needed by all static RPCA methods; to understand this point, see BIB005. The zero-mean and diagonal $\Lambda$ assumptions are minor. The main extra requirement is that $s_{\min}$ be lower bounded as given in the last two assumptions of Theorem 3.9. The lower bound is reasonable as long as the initial subspace estimate is accurate enough and the subspace changes slowly enough that both $\Delta$ and $\mathrm{SE}(\hat{P}_0, P_0)$ are $O(1/\sqrt{r})$. This requirement may seem restrictive at first glance, but actually it is not, because $\mathrm{SE}(\cdot)$ only measures the largest principal angle. The bound still allows the chordal distance between the two subspaces to be $O(1)$; the chordal distance BIB002 is the $\ell_2$ norm of the vector containing the sines of all principal angles. The initialization requirement can be satisfied by running just $C\log r$ iterations of AltProj on a short initial dataset: just $t_{\mathrm{train}} = Cr$ frames suffice.
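The SE versus chordal distance point can be checked numerically. Below is a minimal Python sketch (SciPy's `subspace_angles` returns all principal angles); the toy construction, in which every one of the $r$ principal angles has sine $1/\sqrt{r}$, is our own illustrative assumption.

```python
import numpy as np
from scipy.linalg import subspace_angles

def SE(P_hat, P):
    """Sine of the largest principal angle between the two column spans."""
    return np.sin(subspace_angles(P_hat, P)).max()

def chordal(P_hat, P):
    """l2 norm of the sines of all principal angles."""
    return np.linalg.norm(np.sin(subspace_angles(P_hat, P)))

n, r = 500, 25
theta = np.arcsin(1 / np.sqrt(r))  # every principal angle has sine 1/sqrt(r)
P = np.eye(n)[:, :r]
Q = np.cos(theta) * np.eye(n)[:, :r] + np.sin(theta) * np.eye(n)[:, r:2 * r]
print(SE(Q, P))       # ~ 1/sqrt(r) = 0.2 : small
print(chordal(Q, P))  # ~ 1 : O(1) even though SE is O(1/sqrt(r))
```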
Why ReProCS works. Let $\Psi := I - \hat{P}_{(t-1)}\hat{P}_{(t-1)}'$. As also briefly explained in BIB005, it is not hard to see that the "noise" $b_t := \Psi\ell_t$ seen by the projected CS step is proportional to the error between the subspace estimate at time $(t-1)$ and the current subspace. Moreover, incoherence (denseness) of the $P_{(t)}$'s and slow subspace change together imply that $\Psi$ satisfies the restricted isometry property (RIP) BIB001. Using these two facts, a result for noisy $\ell_1$ minimization, and the lower bound assumption on outlier magnitudes, one can ensure that the CS step output is accurate enough and the outlier support $T_t$ is correctly recovered. With this, it is not hard to see that $\hat\ell_t = \ell_t + w_t - e_t$, where $e_t := s_t - \hat{s}_t$ satisfies $e_t = I_{T_t}(\Psi_{T_t}'\Psi_{T_t})^{-1}I_{T_t}'\Psi\ell_t$ and $\|e_t\| \le C\|b_t\|$.

Consider the subspace update. Every time the subspace changes, one can show that the change gets detected within a short delay. After that, the $K$ SVD steps help get progressively improved estimates of the changed subspace. To understand this, observe that, after a subspace change but before the first update step, $\|b_t\|$, and hence $\|e_t\|$, is the largest for this interval. However, because of good initialization, or because of slow subspace change and the previous subspace being correctly recovered (to error $\epsilon$), neither is too large: both are proportional to $(\epsilon + \Delta)$, or to the initialization error. Recall that $\Delta$ quantifies the amount of subspace change. For simplicity, suppose that $\mathrm{SE}(\hat{P}_0, P_0) = \Delta$. Using the idea below, we can show that we get a "good" first estimate of the changed subspace.

The input to the PCA step is $\hat\ell_t$ and the noise seen by it is $e_t$. Notice that $e_t$ depends on the true data $\ell_t$, and hence this is a setting of PCA in data-dependent noise BIB003, BIB004. From BIB004, it is known that the subspace recovery error of the PCA step is proportional to the ratio between the time-averaged noise power plus time-averaged signal-noise correlation, $(\|\sum_t E[e_te_t']\| + \|\sum_t E[\ell_te_t']\|)/\alpha$, and the minimum signal space eigenvalue, $\lambda^-$. The instantaneous values of both the noise power and the signal-noise correlation are of order $(\Delta + \epsilon)$ times $\lambda^+$. However, using the fact that $e_t$ is sparse with support $T_t$ that changes enough over time so that max-outlier-frac-row$^\alpha$ is bounded, their time-averaged values are at least $\sqrt{\text{max-outlier-frac-row}^\alpha}$ times smaller. This follows using the Cauchy-Schwarz inequality.

As a result, after the first subspace update, the subspace recovery error is below $4\sqrt{\text{max-outlier-frac-row}^\alpha}\,(\lambda^+/\lambda^-)$ times $(\Delta + \epsilon)$. Since $16\,\text{max-outlier-frac-row}^\alpha(\lambda^+/\lambda^-)^2$ is bounded by a constant $c_2 < 1$, this means that, after the first subspace update, the subspace error is below $\sqrt{c_2}$ times $(\Delta + \epsilon)$. This, in turn, implies that $\|b_t\|$, and hence $\|e_t\|$, is also $\sqrt{c_2}$ times smaller in the second subspace update interval than in the first. This, along with repeating the above argument, helps show that the second estimate of the changed subspace is $\sqrt{c_2}$ times better than the first, and hence its error is $(\sqrt{c_2})^2$ times $(\Delta + \epsilon)$. Repeating the argument $K$ times, the $K$-th estimate has error $(\sqrt{c_2})^K$ times $(\Delta + \epsilon)$. Since $K = C\log(1/\epsilon)$, this is an $\epsilon$-accurate estimate of the changed subspace.
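A minimal sketch of the subspace update step and of the update count implied by the geometric decay argument above follows. The helper names, and passing $c_2$ as an explicit input, are our illustrative assumptions.

```python
import math
import numpy as np

def subspace_update(L_hat_block, r):
    """One update step: r-SVD of the latest alpha-sample block of l_t
    estimates; the top-r left singular vectors form the new estimate."""
    U, _, _ = np.linalg.svd(L_hat_block, full_matrices=False)
    return U[:, :r]

def updates_needed(c2, Delta, eps0, eps):
    """Smallest K with (sqrt(c2))^K * (Delta + eps0) <= eps, i.e., the
    K = C log(1/eps) update count from the decay argument above."""
    return math.ceil(math.log((Delta + eps0) / eps) / math.log(1 / math.sqrt(c2)))

print(updates_needed(c2=0.25, Delta=0.1, eps0=0.01, eps=1e-6))  # -> 17
```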
D. Looser bound on max-outlier-frac-row and outlier magnitudes' lower bound
As noted in BIB001, solutions for standard RPCA (which only assume left and right incoherence of $L$ and nothing else) cannot tolerate a bound on outlier fractions in any row or any column that is larger than $1/r_L$. The reason ReProCS-NORST can tolerate a constant max-outlier-frac-row$^\alpha$ bound is that it uses extra assumptions. We explain the need for these here (also see BIB003 for a brief version of this explanation). ReProCS recovers $s_t$ first and then $\ell_t$, and does this at each time $t$. When recovering $s_t$, it exploits "good" knowledge of the subspace of $\ell_t$ (either from initialization or from the previous subspace's estimate and slow subspace change), but it has no way to deal with the residual error, $b_t := (I - \hat{P}_{(t-1)}\hat{P}_{(t-1)}')\ell_t$, in this knowledge. Since an individual vector $b_t$ does not have any structure that can be exploited BIB002, the error in recovering $s_t$ cannot be lower than $C\|b_t\|$. This means that, to correctly recover the support of $s_t$, $s_{\min}$ needs to be larger than $C\|b_t\|$. This is where the $s_{\min}$ lower bound comes from. If there were a way to bound the element-wise error of the CS step (instead of the $\ell_2$ norm error), we could relax the $s_{\min}$ bound significantly. Correct support recovery is needed to ensure that the subspace estimate can be improved with each update. In particular, it helps ensure that the error vectors $e_t := s_t - \hat{s}_t$ in a given subspace update interval are mutually independent when conditioned on the $m_t$'s from all past intervals. This step also uses element-wise boundedness of the $a_t$'s, along with their mutual independence and identical covariances. As noted earlier, these replace the right incoherence assumption.
E. Simple-ReProCS and older ReProCS-based solutions
The above result for ReProCS-NORST is the best one BIB014, BIB015. It improves upon our recent guarantee for simple-ReProCS BIB016. The first part of the simple-ReProCS algorithm (the robust regression step) is the same as in ReProCS-NORST; the subspace update step is different. After a subspace change is detected, the update involves $K$ steps of projection-SVD or "projection-PCA" BIB006, each done with a new set of $\alpha$ frames of $\hat\ell_t$, followed by an r-SVD based subspace re-estimation step, done with another new set of $\alpha$ frames. The projection-SVD steps are less expensive since they involve a 1-SVD instead of an r-SVD, thus making simple-ReProCS faster. It has the following guarantee BIB016.

Theorem 3.11. Consider simple-ReProCS BIB016. If
• the first three assumptions of Theorem 3.9 hold;
• subspace change: only one subspace direction changes at each $t_j$, and $C(\sqrt{\lambda_{\mathrm{ch}}}\,\Delta + 2\epsilon\sqrt{\lambda^+}) \le s_{\min}$, where $\Delta := \max_j \mathrm{SE}(P_{j-1}, P_j)$ and $\lambda_{\mathrm{ch}}$ is the eigenvalue along the changing direction;
then all conclusions of Theorem 3.9 hold.
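The projection-SVD step just described can be sketched as below. This is a schematic under our assumptions (in particular, the interface taking the already-estimated directions `P_fixed` as input), not the implementation from BIB016.

```python
import numpy as np

def projection_svd_step(L_hat_block, P_fixed):
    """One projection-SVD ("projection-PCA") step: project the alpha-frame
    block of l_t estimates orthogonal to the already-estimated directions,
    then a 1-SVD (top left singular vector) estimates the changed direction."""
    resid = L_hat_block - P_fixed @ (P_fixed.T @ L_hat_block)
    U, _, _ = np.linalg.svd(resid, full_matrices=False)
    return np.hstack([P_fixed, U[:, :1]])  # append the new direction
```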
Simple-ReProCS shares most of the advantages of ReProCS-NORST. Its disadvantage is that it requires that, at each change time, only one subspace direction changes. Because of this, even though its tracking delay is the same as that of ReProCS-NORST, this delay is r-times sub-optimal. Moreover, it needs the initial subspace estimate to be $\epsilon$-accurate. The above two guarantees are both a significant improvement upon the earlier partial BIB006 and complete BIB011, BIB013 guarantees for original-ReProCS. Those required a very specific model on outlier support change (instead of just a bound on outlier fractions per row and per column), needed an unrealistic model of subspace change, and required the eigenvalues along newly added directions to be small for some time.

F. Heuristics for Online RPCA and/or RST

Almost all existing literature other than the modified-PCP and ReProCS frameworks described above focuses on incremental, online, or streaming solutions for RPCA. Of course, any online or incremental RPCA solution automatically also provides a tracking solution if the underlying subspace is time-varying. Thus, algorithmically, incremental RPCA, online RPCA, and tracking algorithms are the same; only the performance metrics for each case differ. All of these approaches come with either no guarantee or a partial guarantee (one that depends on intermediate algorithm estimates). Early attempts to develop incremental solutions to RPCA that did not explicitly use the S+LR definition include BIB001, BIB002. The first online heuristic for the S+LR formulation was called Real-time Robust Principal Components Pursuit (RRPCP) BIB003. The algorithm name is a misnomer, though, since the method has nothing to do with PCP, which requires solving a convex program. In fact, RRPCP was a precursor to the ReProCS framework BIB005, BIB008, BIB006 described above. The first guarantee for a ReProCS algorithm was proved in BIB006. This was a partial guarantee (it assumed that intermediate algorithm estimates satisfy certain properties), but the new proof techniques introduced in this work form the basis of all the later complete guarantees, including the ones described above. An online solution that followed soon after was ORPCA BIB009, an online stochastic optimization based solver for the PCP convex program. This also came with only a partial guarantee (the guarantee assumed that the subspace estimate outputted at each time is full rank).

Approaches that followed up on the basic ReProCS idea of alternating approximate robust regression (projected compressive sensing) and subspace update include GRASTA BIB007, pROST BIB010, and ROSETA BIB012. GRASTA replaces both steps with different and approximate versions. It solves the exact version of robust regression, which involves recovering $a_t$ as $\arg\min_a \|m_t - \hat{P}_{(t-1)}a\|_1$. This ignores the fact that $\hat{P}_{(t-1)}$ is only an approximation to the current subspace $P_{(t)}$. This is why, in experiments, GRASTA fails when there are significant subspace changes: it ends up interpreting the subspace tracking error as an outlier. In its subspace update step, the SVD or projected-SVD used in the different variants of ReProCS BIB005, BIB008, BIB014 is replaced by a faster but approximate subspace tracking algorithm called GROUSE BIB004, which relies on stochastic gradient descent. Both pROST and ROSETA modify the GRASTA approach, and hence also indirectly rely on the basic ReProCS framework of alternating robust regression and subspace update: pROST replaces $\ell_1$ minimization by non-convex $\ell_0$-surrogates, while ROSETA uses an ADMM algorithm to solve the robust regression.
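For contrast with the approximate robust regression sketched earlier, the exact version that GRASTA solves can be illustrated as follows. GRASTA itself uses an ADMM solver; here, purely for illustration, we use iteratively reweighted least squares (IRLS), a standard alternative for $\ell_1$ regression.

```python
import numpy as np

def exact_robust_regression(m_t, P_hat, n_iters=50, delta=1e-6):
    """Approximately solve min_a || m_t - P_hat a ||_1 via IRLS.
    Note: this treats P_hat as if it were the true subspace, which is
    exactly the approximation criticized in the text above."""
    a = np.linalg.lstsq(P_hat, m_t, rcond=None)[0]  # l2 initialization
    for _ in range(n_iters):
        resid = m_t - P_hat @ a
        w = 1.0 / np.maximum(np.abs(resid), delta)  # IRLS weights for the l1 loss
        sw = np.sqrt(w)
        a = np.linalg.lstsq(sw[:, None] * P_hat, sw * m_t, rcond=None)[0]
    return a
```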
IV. PROS AND CONS OF VARIOUS APPROACHES
We provide a summary of the comparisons in Table I. We discuss our main conclusions here.
1) Outlier tolerance. The PCP (C) and modified-PCP results allow the loosest upper bounds on max-outlier-frac-row and max-outlier-frac-col; however, both allow this only under a uniform random support model. This is a restrictive assumption: for the video application, it requires that video objects be only one or a few pixels wide and jump around randomly. AltProj, GD, NO-RMC, and ReProCS do not assume any outlier support model. Of these, GD needs $\max(\text{max-outlier-frac-row}, \text{max-outlier-frac-col}) \in O(1/r_L^{1.5})$, AltProj and NO-RMC only need $\max(\text{max-outlier-frac-row}, \text{max-outlier-frac-col}) \in O(1/r_L)$, while ReProCS-NORST has the best outlier tolerance of max-outlier-frac-row$^\alpha \in O(1)$ and max-outlier-frac-col $\in O(1/r)$. For the video application, this means that it handles large-sized and/or slow-moving or occasionally static foreground objects much better than all other approaches. Also see Sec. VI.
2) Nearly square data matrix. Only NO-RMC needs this. This can be an unreasonable requirement for videos, which often have far fewer frames $d$ than the image size $n$. NO-RMC needs this because it is actually a robust matrix completion solution; to solve RPCA, it deliberately undersamples the entire data matrix $M$ to get a faster RPCA algorithm, and the undersampling necessitates a nearly square matrix.
3) Lower bound on most outlier magnitudes. Only ReProCS requires this extra assumption. The requirement encodes the fact that outliers are large-magnitude corruptions; the small-magnitude ones get classified as the unstructured noise $w_t$. As explained earlier in Sec. III-D, ReProCS needs this because it is an online solution that recovers the $s_t$'s, their support sets $T_t$, and the $\ell_t$'s on a frame-by-frame basis, and updates the subspace once every $\alpha = Cr\log n$ frames.
4) Slow subspace change or fixed subspace. Both ReProCS and modified-PCP need this. The modified-PCP requirement is often unrealistic, while that of ReProCS-NORST is simple; it should hold for most static camera videos (videos with no scene changes).
5) Incoherence. All solutions need a form of left and right incoherence. ReProCS replaces the traditional right incoherence assumption with a statistical model on the $a_t$'s. This is needed because it is an online solution that updates the subspace once every $\alpha = Cr\log n$ frames using just these many past frames; the statistical assumptions help ensure that each update improves the subspace estimate.
Speed, memory, and other features are as follows.
1) Memory complexity. ReProCS-NORST has the best memory complexity, which is also nearly optimal. All other static RPCA solutions need to store the entire matrix.
2) Speed. NO-RMC is the fastest and ReProCS is the second fastest; both need the extra assumptions discussed above. AltProj is the fastest solution that does not need any extra assumptions.
3) Algorithm parameters. PCP is the only approach that needs just one algorithm parameter, $\lambda$, and the PCP (Candès et al) result is the only one that does not assume any model parameter knowledge to set this parameter. Of course, PCP is a convex program which needs a solver, and the solver itself has other parameters to set.
4) Detecting and tracking change in subspace. Only ReProCS can do this; ReProCS-NORST is able to do this with near-optimal delay.
V. STATIC AND DYNAMIC MATRIX COMPLETION

A. Matrix Completion
Nuclear norm minimization. The first solution to matrix completion was nuclear norm minimization (NNM). This was a precursor to PCP and works for the same reasons that PCP works (see Sec. II-A). The first guarantee for NNM appeared in BIB004 . This result was improved and its proof simplified in BIB001 , BIB002 . Faster solutions: alternating minimization and gradient descent. Like PCP, NNM is slow. To address this, alternating minimization and gradient descent solutions were developed in BIB003 and BIB005 (and many later works), along with a carefully designed spectral initialization that works as follows: compute Û_{0,temp} as the matrix of top r left singular vectors of M (recall M := P_Ω(L)), and then compute Û_0 as its "clipped" version: zero out all entries of Û_{0,temp} that have magnitude more than 2µ√(r/n), and then orthonormalize the resulting matrix. Clipping is used to ensure that incoherence holds for the initialization of U. After this, the alt-min solution BIB005 alternately minimizes ‖P_{Ω_{2j+1}}(L) − P_{Ω_{2j+1}}(Ũ Ṽ′)‖²_F over Ũ, Ṽ. With one of them kept fixed, this is clearly a least squares problem which can be solved efficiently. Here, Ω_j is an independently sampled set of entries from the original observed set Ω; thus, each iteration uses a different set of samples. This solution has the following guarantee [41, Theorem 2.5]: Theorem 5.12. Let L = UΣV′ be the reduced SVD of L. Consider the alt-min solution BIB005 . If 1) U is µ-incoherent, V is µ-incoherent, 2) alt-min is run for T = C log(‖L‖_F/ε) iterations, 3) the 2T + 1 sets Ω_j are generated independently from Ω, and 4) each entry of Ω is generated i.i.d. with probability p that satisfies pnd ≥ Cκ³µ²nr^{3.5} log n log(‖L‖_F/ε), then, with probability at least 1 − cn⁻³, ‖L − Û_T V̂_T′‖_F ≤ ε. In the above result, κ is the condition number of L. Later works, e.g., BIB006 , removed the dependence on condition number by studying a modification of simple alt-min that included a soft deflation scheme; this relied on the following intuition: if κ is large, then for small ranks r, the spectrum of L must come in well-separated clusters, with each cluster having a small condition number. This idea is similar to that of the AltProj solution described earlier for RPCA BIB007 . Eliminating the partitioned sampling requirement of previous alt-min results. The main limitation of the above results for alt-min is the need for the partitioned sampling scheme. This does not use all samples at each iteration (inefficient) and, as explained in [71, I-B] , the sampling of the Ω_j is hard to implement in practice. This limitation was removed in the nice recent work of Sun and Luo BIB008 . However, this work brought back the dependence on condition number. Thus, their result is essentially similar to Theorem 5.12, but without the third condition (it uses all samples, i.e., Ω_j = Ω), and hence without a dependence of the sample complexity on ε. On the other hand, its disadvantage is that it does not specify the required number of iterations, T, that the above results do, and thus does not provide a bound on computational complexity. Theorem 3.1 of BIB008 says the following: Consider either the alt-min or the gradient descent solutions with a minor modification of the spectral initialization idea described above (any of Algorithms 1-4 of BIB008 ).
If 1) U is µ-incoherent, V is µ-incoherent, and 2) the set Ω is generated uniformly at random with size m that satisfies m ≥ Cκ²µ²nr max(µ log n, κ⁴µ²r⁶), then, with probability at least 1 − cn⁻³, ‖L − Û_T V̂_T′‖_F ≤ ε (the number of iterations required, T, is not specified). No bad local minima or saddle points. Finally, all the above results rely on a carefully designed spectral initialization scheme followed by a particular iterative algorithm. Careful initialization is needed under the implicit assumption that, for the cost function to be minimized, there are many local minima and/or saddle points and the value of the cost function at the local minima does not equal the global minimum value. However, a series of very interesting recent works BIB010 , BIB011 has shown that this is, in fact, not the case: even though the cost function is non-convex, all its local minima are global minima, and all its saddle points (points at which the gradient is zero but the Hessian matrix is indefinite) have Hessians with at least one negative eigenvalue. For the best result, see Theorems 1, 4 of BIB011 . In fact, there has been a series of recent works showing similar results also for matrix sensing, robust PCA, and phase retrieval BIB009 , BIB010 , BIB011 .
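To make the above description concrete, the following is a minimal numpy sketch of alternating minimization with the clipped spectral initialization. This is our own illustration, not the exact algorithm of BIB003 , BIB005 : like BIB008 , it reuses the full observed set Ω in every iteration rather than the partitioned sets Ω_j, and the function names and the defaults µ = 1, T = 50 are our choices.

    import numpy as np

    def clipped_spectral_init(M, r, mu):
        # top-r left singular vectors of M = P_Omega(L), clipped to enforce
        # incoherence, then re-orthonormalized
        n = M.shape[0]
        U0 = np.linalg.svd(M, full_matrices=False)[0][:, :r].copy()
        U0[np.abs(U0) > 2.0 * mu * np.sqrt(r / n)] = 0.0
        return np.linalg.qr(U0)[0]

    def altmin_mc(M, mask, r, mu=1.0, T=50):
        # M: n x d with zeros at unobserved entries; mask: boolean observed set
        n, d = M.shape
        U = clipped_spectral_init(M, r, mu)
        V = np.zeros((d, r))
        for _ in range(T):
            for j in range(d):          # row of V <- least squares on column j
                idx = mask[:, j]
                if idx.any():
                    V[j] = np.linalg.lstsq(U[idx], M[idx, j], rcond=None)[0]
            for i in range(n):          # row of U <- least squares on row i
                idx = mask[i]
                if idx.any():
                    U[i] = np.linalg.lstsq(V[idx], M[i, idx], rcond=None)[0]
        return U, V                     # the recovered matrix is U @ V.T

Each inner update is an ordinary least squares solve, which is why alt-min is so much faster per iteration than nuclear norm minimization.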
Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Dynamic MC or Subspace Tracking with missing data <s> Subspace estimation plays an important role in a variety of modern signal processing applications. We present a new approach for tracking the signal subspace recursively. It is based on a novel interpretation of the signal subspace as the solution of a projection like unconstrained minimization problem. We show that recursive least squares techniques can be applied to solve this problem by making an appropriate projection approximation. The resulting algorithms have a computational complexity of O(nr) where n is the input vector dimension and r is the number of desired eigencomponents. Simulation results demonstrate that the tracking capability of these algorithms is similar to and in some cases more robust than the computationally expensive batch eigenvalue decomposition. Relations of the new algorithms to other subspace tracking methods and numerical issues are also discussed. > <s> BIB001 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Dynamic MC or Subspace Tracking with missing data <s> Abstract Subspace tracking plays an important role in a variety of adaptive subspace methods. In this paper, we present a theoretical convergence analysis of two recently proposed projection approximation subspace tracking algorithms (PAST and PASTd). By invoking Ljung's ordinary differential equation approach, we derive a pair of coupled matrix differential equations, whose trajectories describe the asymptotic convergence behavior of the subspace tracking algorithms. We discuss properties of the matrix differential equations and determine their asymptotically stable equilibrium states and domain of attraction. It turns out that, under weak conditions, both PAST and PASTd globally converge to the desired signal subspace or signal eigenvectors and eigenvalues with probability one. Numerical examples are also included to illustrate the asymptotic convergence rate of the algorithms. <s> BIB002 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Dynamic MC or Subspace Tracking with missing data <s> This work presents GROUSE (Grassmanian Rank-One Update Subspace Estimation), an efficient online algorithm for tracking subspaces from highly incomplete observations. GROUSE requires only basic linear algebraic manipulations at each iteration, and each subspace update can be performed in linear time in the dimension of the subspace. The algorithm is derived by analyzing incremental gradient descent on the Grassmannian manifold of subspaces. With a slight modification, GROUSE can also be used as an online incremental algorithm for the matrix completion problem of imputing missing entries of a low-rank matrix. GROUSE performs exceptionally well in practice both in tracking subspaces and as an online algorithm for matrix completion. <s> BIB003 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Dynamic MC or Subspace Tracking with missing data <s> Many real world datasets exhibit an embedding of low-dimensional structure in a high-dimensional manifold. Examples include images, videos and internet traffic data. It is of great significance to estimate and track the low-dimensional structure with small storage requirements and computational complexity when the data dimension is high. 
Therefore we consider the problem of reconstructing a data stream from a small subset of its entries, where the data is assumed to lie in a low-dimensional linear subspace, possibly corrupted by noise. We further consider tracking the change of the underlying subspace, which can be applied to applications such as video denoising, network monitoring and anomaly detection. Our setting can be viewed as a sequential low-rank matrix completion problem in which the subspace is learned in an online fashion. The proposed algorithm, dubbed Parallel Estimation and Tracking by REcursive Least Squares (PETRELS), first identifies the underlying low-dimensional subspace, and then reconstructs the missing entries via least-squares estimation if required. Subspace identification is performed via a recursive procedure for each row of the subspace matrix in parallel with discounting for previous observations. Numerical examples are provided for direction-of-arrival estimation and matrix completion, comparing PETRELS with state of the art batch algorithms. <s> BIB004 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Dynamic MC or Subspace Tracking with missing data <s> This work studies the recursive robust principal components' analysis(PCA) problem. Here, "robust" refers to robustness to both independent and correlated sparse outliers. If the outlier is the signal-of-interest, this problem can be interpreted as one of recursively recovering a time sequence of sparse vectors, St, in the presence of large but structured noise, Lt. The structure that we assume on Lt is that Lt is dense and lies in a low dimensional subspace that is either fixed or changes "slowly enough". A key application where this problem occurs is in video surveillance where the goal is to separate a slowly changing background (Lt) from moving foreground objects (St) on-the-fly. To solve the above problem, we introduce a novel solution called Recursive Projected CS (ReProCS). Under mild assumptions, we show that, with high probability (w.h.p.), ReProCS can exactly recover the support set of St at all times; and the reconstruction errors of both St and Lt are upper bounded by a time-invariant and small value at all times. <s> BIB005 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Dynamic MC or Subspace Tracking with missing data <s> GROUSE (Grassmannian Rank-One Update Subspace Estimation) is an iterative algorithm for identifying a linear subspace of R^n from data consisting of partial observations of random vectors from that subspace. This paper examines local convergence properties of GROUSE, under assumptions on the randomness of the observed vectors, the randomness of the subset of elements observed at each iteration, and incoherence of the subspace with the coordinate directions. Convergence at an expected linear rate is demonstrated under certain assumptions. The case in which the full random vector is revealed at each iteration allows for much simpler analysis, and is also described. GROUSE is related to incremental SVD methods and to gradient projection algorithms in optimization. <s> BIB006 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Dynamic MC or Subspace Tracking with missing data <s> It has been observed in a variety of contexts that gradient descent methods have great success in solving low-rank matrix factorization problems, despite the relevant problem formulation being non-convex. 
We tackle a particular instance of this scenario, where we seek the $d$-dimensional subspace spanned by a streaming data matrix. We apply the natural first order incremental gradient descent method, constraining the gradient method to the Grassmannian. In this paper, we propose an adaptive step size scheme that is greedy for the noiseless case, that maximizes the improvement of our metric of convergence at each data index $t$, and yields an expected improvement for the noisy case. We show that, with noise-free data, this method converges from any random initialization to the global minimum of the problem. For noisy data, we provide the expected convergence rate of the proposed algorithm per iteration. <s> BIB007 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> B. Dynamic MC or Subspace Tracking with missing data <s> This work investigates the problem of adaptive measurement design for online subspace estimation from compressive linear measurements. We study the previously proposed Grassmannian rank-one online subspace estimation (GROUSE) algorithm with adaptively designed compressive measurements. We propose an adaptive measurement scheme that biases the measurement vectors towards the current subspace estimate and prove a global convergence result for the resulting algorithm. Our experiments on synthetic data demonstrate the effectiveness of the adaptive measurement scheme over non-adaptive compressive random measurements. <s> BIB008
In the literature, there are three well-known algorithms for subspace tracking with missing data (equivalently, dynamic MC): PAST BIB001 , BIB002 , PETRELS BIB004 and GROUSE BIB003 , BIB006 , BIB007 , BIB008 . All are motivated by stochastic gradient descent to solve the PCA problem and by the Oja algorithm. These and many others are described in detail in a review article on subspace tracking. In this section, we briefly summarize the theoretical guarantees for this problem: the only result for missing data is for GROUSE, for the case of a single unknown subspace. Moreover, the result is a partial guarantee (it makes assumptions on intermediate algorithm estimates). We give it next BIB006 . Theorem 5.14 (GROUSE for subspace tracking with missing data). Suppose the unknown subspace is fixed; denote it by P. Let P̂_(t) denote its estimate at time t and define the error ε_t := Σ_{i=1}^r sin²θ_i(P̂_(t), P), where θ_i is the i-th largest principal angle between the two subspaces. Also, for a vector z, let µ(z) := n‖z‖_∞²/‖z‖_2² quantify its denseness. Assume that (i) the subspace is fixed and denoted by P; (ii) P is µ-incoherent; (iii) the coefficients vector a_t is drawn independently from a standard Gaussian distribution, i.e., (a_t)_i i.i.d. ∼ N(0, 1); (iv) the size of the set of observed entries at time t, Ω_t, satisfies |Ω_t| ≥ (64/3) r (log² n) µ log(20r); and the following assumption on intermediate algorithm estimates holds: the residual at each time, r_t := ℓ_t − P̂_(t) P̂_(t)′ ℓ_t, satisfies µ(r_t) ≤ min{ (0.045 log n)/(log 10 · C₁ r µ log(20r)), (0.05 log² n)/(8 log 10 · C₁ log(20r)) } with probability at least 1 − δ̄, where δ̄ ≤ 0.6. Then, ε_t decreases at an expected linear rate (the exact rate expression can be found in BIB006 ). The above result is a partial guarantee and is hard to parse. Its denseness requirement on the residual r_t is reminiscent of the denseness assumption on the currently un-updated subspace needed by the first (partial) guarantee for ReProCS from BIB005 . The only complete guarantee for subspace tracking exists for the case of no missing data BIB007 . It still assumes a single unknown subspace. We give this next. Theorem 5.15 (GROUSE-full). Given data vectors m_t = ℓ_t (no noise and no missing data), suppose the unknown subspace is fixed; denote it by P. Let P̂_(t) denote the estimate of the subspace at time t, and let ε_t := Σ_{i=1}^r sin²θ_i(P̂_(t), P) as before. Assume that the initial estimate P̂_(0) is obtained by a random initialization, i.e., by orthonormalizing an n × r i.i.d. standard normal matrix. Then, for any ε* > 0 and any δ, δ′ > 0, after T ≥ rµ₀ log n + 2r log(1/ε*) iterations (up to additive terms that depend only on δ and δ′), with probability at least 1 − δ − δ′, GROUSE satisfies ε_T ≤ ε*. Here µ₀ = 1 + (log((1 − δ′)/C) + r log(e/r))/(r log n) for a constant C ≈ 1. In the noisy but no missing data setting, i.e., when m_t = ℓ_t + w_t, a similar expected per-iteration error decay can be claimed for GROUSE, with decay factors β₀ = 1/(1 + rσ²/n) and β₁ = 1 − r/n (see BIB007 for the exact statement).
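Since Theorems 5.14 and 5.15 both analyze the same update, it is worth recording how simple one GROUSE iteration is. The following numpy sketch is our own paraphrase of the rank-one geodesic update of BIB003 ; the variable names and the fixed step-size scale eta are our choices (practical versions use diminishing or greedy step sizes BIB007 ).

    import numpy as np

    def grouse_step(U, obs_idx, v_obs, eta=0.1):
        # U: n x r orthonormal basis estimate; obs_idx: indices of the observed
        # entries of the current data vector; v_obs: their values
        w = np.linalg.lstsq(U[obs_idx], v_obs, rcond=None)[0]  # best-fit weights
        p = U @ w                                # prediction of the full vector
        res = np.zeros(U.shape[0])               # residual, zero off the support
        res[obs_idx] = v_obs - p[obs_idx]
        sigma = np.linalg.norm(res) * np.linalg.norm(p)
        if sigma == 0.0:                         # perfect fit: nothing to update
            return U
        t = eta * sigma                          # step along the geodesic
        # rank-one geodesic update on the Grassmannian
        direction = (np.cos(t) - 1.0) * p / np.linalg.norm(p) \
                    + np.sin(t) * res / np.linalg.norm(res)
        return U + np.outer(direction, w / np.linalg.norm(w))

Because the update is rank-one and only touches the observed entries, each iteration costs O(|Ω_t| r² + nr), which is what makes GROUSE attractive for streaming data.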
Static and Dynamic Robust PCA and Matrix Completion: A Review <s> VI. EXPERIMENTAL COMPARISONS <s> This work proposes a causal and recursive algorithm for solving the “robust” principal components' analysis problem. We primarily focus on robustness to correlated outliers. In recent work, we proposed a new way to look at this problem and showed how a key part of its solution strategy involves solving a noisy compressive sensing (CS) problem. However, if the support size of the outliers becomes too large, for a given dimension of the current principal components' space, then the number of “measurements” available for CS may become too small. In this work, we show how to address this issue by utilizing the correlation of the outliers to predict their support at the current time; and using this as “partial support knowledge” for solving Modified-CS instead of CS. <s> BIB001 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> VI. EXPERIMENTAL COMPARISONS <s> In this work, we focus on the problem of recursively recovering a time sequence of sparse signals, with time-varying sparsity patterns, from highly undersampled measurements corrupted by very large but correlated noise. It is assumed that the noise is correlated enough to have an approximately low rank covariance matrix that is either constant, or changes slowly, with time. We show how our recently introduced Recursive Projected CS (ReProCS) and modified-ReProCS ideas can be used to solve this problem very effectively. To the best of our knowledge, except for the recent work of dense error correction via ℓ1 minimization, which can handle another kind of large but “structured” noise (the noise needs to be sparse), none of the other works in sparse recovery have studied the case of any other kind of large noise. <s> BIB002 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> VI. EXPERIMENTAL COMPARISONS <s> Suppose we wish to recover a signal x in C^n from m intensity measurements of the form |⟨z_i, x⟩|^2, i = 1, 2,..., m; that is, from data in which phase information is missing. We prove that if the vectors z_i are sampled independently and uniformly at random on the unit sphere, then the signal x can be recovered exactly (up to a global phase factor) by solving a convenient semidefinite program, a trace-norm minimization problem; this holds with large probability provided that m is on the order of n log n, and without any assumption about the signal whatsoever. This novel result demonstrates that in some instances, the combinatorial phase retrieval problem can be solved by convex programming techniques. Finally, we also prove that our methodology is robust vis a vis additive noise. <s> BIB003 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> VI. EXPERIMENTAL COMPARISONS <s> It has recently been shown that only a small number of samples from a low-rank matrix are necessary to reconstruct the entire matrix. We bring this to bear on computer vision problems that utilize low-dimensional subspaces, demonstrating that subsampling can improve computation speed while still allowing for accurate subspace learning. We present GRASTA, Grassmannian Robust Adaptive Subspace Tracking Algorithm, an online algorithm for robust subspace estimation from randomly subsampled data. We consider the specific application of background and foreground separation in video, and we assess GRASTA on separation accuracy and computation time.
In one benchmark video example [16], GRASTA achieves a separation rate of 46.3 frames per second, even when run in MATLAB on a personal laptop. <s> BIB004 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> VI. EXPERIMENTAL COMPARISONS <s> Robust PCA methods are typically based on batch optimization and have to load all the samples into memory during optimization. This prevents them from efficiently processing big data. In this paper, we develop an Online Robust PCA (OR-PCA) that processes one sample per time instance and hence its memory cost is independent of the number of samples, significantly enhancing the computation and storage efficiency. The proposed OR-PCA is based on stochastic optimization of an equivalent reformulation of the batch RPCA. Indeed, we show that OR-PCA provides a sequence of subspace estimations converging to the optimum of its batch counterpart and hence is provably robust to sparse corruption. Moreover, OR-PCA can naturally be applied for tracking dynamic subspace. Comprehensive simulations on subspace recovering and tracking demonstrate the robustness and efficiency advantages of the OR-PCA over online PCA and batch RPCA methods. <s> BIB005 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> VI. EXPERIMENTAL COMPARISONS <s> This paper designs and extensively evaluates an online algorithm, called practical recursive projected compressive sensing (Prac-ReProCS), for recovering a time sequence of sparse vectors S_t and a time sequence of dense vectors L_t from their sum, M_t := S_t + L_t, when the L_t's lie in a slowly changing low-dimensional subspace of the full space. A key application where this problem occurs is in real-time video layering where the goal is to separate a video sequence into a slowly changing background sequence and a sparse foreground sequence that consists of one or more moving regions/objects on-the-fly. Prac-ReProCS is a practical modification of its theoretical counterpart which was analyzed in our recent work. Extension to the undersampled case is also developed. Extensive experimental comparisons demonstrating the advantage of the approach for both simulated and real videos, over existing batch and recursive methods, are shown. <s> BIB006 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> VI. EXPERIMENTAL COMPARISONS <s> We study the problem of recursively reconstructing a time sequence of sparse vectors S_t from measurements of the form M_t = A S_t + B L_t where A and B are known measurement matrices, and L_t lies in a slowly changing low dimensional subspace. We assume that the signal of interest (S_t) is sparse, and has support which is correlated over time. We introduce a solution which we call Recursive Projected Modified Compressed Sensing (ReProMoCS), which exploits the correlated support change of S_t. We show that, under weaker assumptions than previous work, with high probability, ReProMoCS will exactly recover the support set of S_t and the reconstruction error of S_t is upper bounded by a small time-invariant value. A motivating application where the above problem occurs is in functional MRI imaging of the brain to detect regions that are “activated” in response to stimuli. In this case both measurement matrices are the same (i.e. A = B). The active region image constitutes the sparse vector S_t and this region changes slowly over time.
The background brain image changes are global but the amount of change is very little and hence it can be well modeled as lying in a slowly changing low dimensional subspace, i.e. this constitutes Lt. <s> BIB007 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> VI. EXPERIMENTAL COMPARISONS <s> We propose a new method for robust PCA -- the task of recovering a low-rank matrix from sparse corruptions that are of unknown value and support. Our method involves alternating between projecting appropriate residuals onto the set of low-rank matrices, and the set of sparse matrices; each projection is {\em non-convex} but easy to compute. In spite of this non-convexity, we establish exact recovery of the low-rank matrix, under the same conditions that are required by existing methods (which are based on convex optimization). For an $m \times n$ input matrix ($m \leq n)$, our method has a running time of $O(r^2mn)$ per iteration, and needs $O(\log(1/\epsilon))$ iterations to reach an accuracy of $\epsilon$. This is close to the running time of simple PCA via the power method, which requires $O(rmn)$ per iteration, and $O(\log(1/\epsilon))$ iterations. In contrast, existing methods for robust PCA, which are based on convex optimization, have $O(m^2n)$ complexity per iteration, and take $O(1/\epsilon)$ iterations, i.e., exponentially more iterations for the same accuracy. ::: Experiments on both synthetic and real data establishes the improved speed and accuracy of our method over existing convex implementations. <s> BIB008 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> VI. EXPERIMENTAL COMPARISONS <s> We propose a factorized robust matrix completion (FRMC) algorithm with global motion compensation to solve the video background subtraction problem. The algorithm decomposes a sequence of video frames into the sum of a low rank background component and a sparse motion component. The algorithm alternates between the solution of each component following a Pareto curve trajectory for each subproblem. For videos with moving background, we utilize the motion vectors extracted from the coded video bitstream to compensate for the change in the camera perspective. Performance evaluations show that our approach is faster than state-of-the-art solvers and results in highly accurate motion segmentation. <s> BIB009 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> VI. EXPERIMENTAL COMPARISONS <s> This work studies two interrelated problems - online robust PCA (RPCA) and online low-rank matrix completion (MC). In recent work by Cand\`{e}s et al., RPCA has been defined as a problem of separating a low-rank matrix (true data), $L:=[\ell_1, \ell_2, \dots \ell_{t}, \dots , \ell_{t_{\max}}]$ and a sparse matrix (outliers), $S:=[x_1, x_2, \dots x_{t}, \dots, x_{t_{\max}}]$ from their sum, $M:=L+S$. Our work uses this definition of RPCA. An important application where both these problems occur is in video analytics in trying to separate sparse foregrounds (e.g., moving objects) and slowly changing backgrounds. ::: While there has been a large amount of recent work on both developing and analyzing batch RPCA and batch MC algorithms, the online problem is largely open. In this work, we develop a practical modification of our recently proposed algorithm to solve both the online RPCA and online MC problems. The main contribution of this work is that we obtain correctness results for the proposed algorithms under mild assumptions. 
The assumptions that we need are: (a) a good estimate of the initial subspace is available (easy to obtain using a short sequence of background-only frames in video surveillance); (b) the $\ell_t$'s obey a `slow subspace change' assumption; (c) the basis vectors for the subspace from which $\ell_t$ is generated are dense (non-sparse); (d) the support of $x_t$ changes by at least a certain amount at least every so often; and (e) algorithm parameters are appropriately set <s> BIB010 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> VI. EXPERIMENTAL COMPARISONS <s> Handbook of Robust Low-Rank and Sparse Matrix Decomposition: Applications in Image and Video Processing shows you how robust subspace learning and tracking by decomposition into low-rank and sparse matrices provide a suitable framework for computer vision applications. Incorporating both existing and new ideas, the book conveniently gives you one-stop access to a number of different decompositions, algorithms, implementations, and benchmarking techniques. Divided into five parts, the book begins with an overall introduction to robust principal component analysis (PCA) via decomposition into low-rank and sparse matrices. The second part addresses robust matrix factorization/completion problems while the third part focuses on robust online subspace estimation, learning, and tracking. Covering applications in image and video processing, the fourth part discusses image analysis, image denoising, motion saliency detection, video coding, key frame extraction, and hyperspectral video processing. The final part presents resources and applications in background/foreground separation for video surveillance. With contributions from leading teams around the world, this handbook provides a complete overview of the concepts, theories, algorithms, and applications related to robust low-rank and sparse matrix decompositions. It is designed for researchers, developers, and graduate students in computer vision, image and video processing, real-time architecture, machine learning, and data mining. <s> BIB011 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> VI. EXPERIMENTAL COMPARISONS <s> We develop two iterative algorithms for solving the low rank phase retrieval (LRPR) problem. LRPR refers to recovering a low-rank matrix $\X$ from magnitude-only (phaseless) measurements of random linear projections of its columns. Both methods consist of a spectral initialization step followed by an iterative algorithm to maximize the observed data likelihood. We obtain sample complexity bounds for our proposed initialization approach to provide a good approximation of the true $\X$. When the rank is low enough, these bounds are significantly lower than what existing single vector phase retrieval algorithms need. Via extensive experiments, we show that the same is also true for the proposed complete algorithms. <s> BIB012 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> VI. EXPERIMENTAL COMPARISONS <s> Solving systems of quadratic equations is a central problem in machine learning and signal processing. One important example is phase retrieval, which aims to recover a signal from only magnitudes of its linear measurements. This paper focuses on the situation when the measurements are corrupted by arbitrary outliers, for which the recently developed non-convex gradient descent Wirtinger flow (WF) and truncated Wirtinger flow (TWF) algorithms likely fail. 
We develop a novel median-TWF algorithm that exploits robustness of sample median to resist arbitrary outliers in the initialization and the gradient update in each iteration. We show that such a non-convex algorithm provably recovers the signal from a near-optimal number of measurements composed of i.i.d. Gaussian entries, up to a logarithmic factor, even when a constant portion of the measurements are corrupted by arbitrary outliers. We further show that median-TWF is also robust when measurements are corrupted by both arbitrary outliers and bounded noise. Our analysis of performance guarantee is accomplished by development of non-trivial concentration measures of median-related quantities, which may be of independent interest. We further provide numerical experiments to demonstrate the effectiveness of the approach. <s> BIB013 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> VI. EXPERIMENTAL COMPARISONS <s> Dynamic robust PCA refers to the dynamic (time-varying) extension of the robust PCA (RPCA) problem. It assumes that the true (uncorrupted) data lies in a low-dimensional subspace that can change with time, albeit slowly. The goal is to track this changing subspace over time in the presence of sparse outliers. This work provides the first guarantee for dynamic RPCA that holds under weakened standard RPCA assumptions, slow subspace change and two mild assumptions. We analyze a simple algorithm based on the Recursive Projected Compressive Sensing (ReProCS) framework. Our result is significant because (i) it removes the strong assumptions needed by the two previous complete guarantees for ReProCS-based algorithms; (ii) it shows that it is possible to achieve significantly improved outlier tolerance than all existing provable RPCA methods by exploiting slow subspace change and a lower bound on outlier magnitudes; and (iii) it proves that the proposed algorithm is online, fast, and memory-efficient. <s> BIB014 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> VI. EXPERIMENTAL COMPARISONS <s> Robust PCA (RPCA) is the problem of separating a given data matrix into the sum of a sparse matrix and a low-rank matrix. Static RPCA is the RPCA problem in which the subspace from which the true data is generated remains fixed over time. Dynamic RPCA instead assumes that the subspace can change with time, although usually the changes are slow. We propose a Recursive Projected Compressed Sensing based algorithm called MERoP (Memory-Efficient Robust PCA) to solve the static RPCA problem. A simple extension of MERoP has been shown in our other work to also solve the dynamic RPCA problem. To the best of our knowledge, MERoP is the first online solution for RPCA that is provably correct under mild assumptions on input data and requires no assumption on intermediate algorithm estimates. Moreover, MERoP enjoys nearly-optimal memory complexity and is almost as fast as vanilla SVD. We corroborate our theoretical claims through extensive numerical experiments on both synthetic data and real videos. <s> BIB015 </s> Static and Dynamic Robust PCA and Matrix Completion: A Review <s> VI. EXPERIMENTAL COMPARISONS <s> In this work, we study the robust subspace tracking (RST) problem and obtain one of the first two provable guarantees for it. The goal of RST is to track sequentially arriving data vectors that lie in a slowly changing low-dimensional subspace, while being robust to corruption by additive sparse outliers. 
It can also be interpreted as a dynamic (time-varying) extension of robust PCA (RPCA), with the minor difference that RST also requires a short tracking delay. We develop a recursive projected compressive sensing algorithm that we call Nearly Optimal RST via ReProCS (ReProCS-NORST) because its tracking delay is nearly optimal. We prove that NORST solves both the RST and the dynamic RPCA problems under weakened standard RPCA assumptions, two simple extra assumptions (slow subspace change and most outlier magnitudes lower bounded), and a few minor assumptions. ::: Our guarantee shows that NORST enjoys a near optimal tracking delay of $O(r \log n \log(1/\epsilon))$. Its required delay between subspace change times is the same, and its memory complexity is $n$ times this value. Thus both these are also nearly optimal. Here $n$ is the ambient space dimension, $r$ is the subspaces' dimension, and $\epsilon$ is the tracking accuracy. NORST also has the best outlier tolerance compared with all previous RPCA or RST methods, both theoretically and empirically (including for real videos), without requiring any model on how the outlier support is generated. This is possible because of the extra assumptions it uses. <s> BIB016
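The experiments below repeatedly use ReProCS-style algorithms, whose core per-frame computation is a projected compressive sensing (robust regression) step. The following numpy sketch is a simplified stand-in, not the exact step of the cited algorithms: a plain ISTA/lasso solve replaces the exact ℓ1 minimization, and we omit the support estimation (thresholding at ω) and least-squares refinement that Prac-ReProCS and NORST perform; all names are ours.

    import numpy as np

    def projected_cs_step(P_hat, m_t, lam=0.1, n_iter=200):
        # Estimate the sparse outlier x_t in m_t = ell_t + x_t, where ell_t lies
        # (approximately) in span(P_hat). Psi is an orthogonal projector, so
        # Psi' Psi = Psi and ||Psi||_2 = 1, which makes a unit ISTA step safe.
        n = m_t.size
        Psi = np.eye(n) - P_hat @ P_hat.T
        y = Psi @ m_t                   # projected measurements: y ~ Psi x_t
        x = np.zeros(n)
        for _ in range(n_iter):         # ISTA for min 0.5||y - Psi x||^2 + lam||x||_1
            x = x - (Psi @ x - y)       # gradient step (Psi(Psi x - y) = Psi x - y)
            x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
        ell_hat = m_t - x               # estimate of the low-rank component
        return x, ell_hat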
All time comparisons are performed on a Desktop Computer with an Intel(R) Xeon E3-1240 8-core CPU @ 3.50GHz and 32GB RAM. All experiments on synthetic data are averaged over 100 independent trials. Synthetic Data: Fixed Subspace. Our first experiment generates data exactly as suggested in the ORPCA paper BIB005 . The true data subspace is fixed. We generate the low rank matrix L = UV′ where the entries of U ∈ R^{n×r} and V ∈ R^{d×r} are generated as i.i.d. N(0, 1/d). [Fig. 2: subspace error SE(P̂_(t), P_(t)) versus time t; the time taken per frame in milliseconds (ms) for the Bernoulli model is shown in legend parentheses: GRASTA (0.6), ORPCA (7.6), NORST (1.0), Offline-NORST (1.7). The suffixes -100 and -200 indicate that the initialization used t_train = 100, 200 samples; GRASTA-0 indicates that we used the default initialization (to zeros). (a): Notice that ORPCA and ReProCS-NORST-200 are able to improve the subspace estimate using more time samples, while others fail. ReProCS-NORST-100 failed because the initial subspace error was one (too large). All versions of GRASTA fail too. (b): Illustrates the subspace error for the ORPCA-model, but with changing subspace. ORPCA and ReProCS-NORST are able to obtain good accuracy. (c): Illustrates the subspace error for outlier supports generated using the Moving Object Model, and (d): illustrates the error under the Bernoulli model. The values are plotted every kα − 1 time-frames. All results are averaged over 100 independent trials. The key difference between the first two plots and the last two plots is that (i) in the first two plots, the initial error seen by ReProCS-NORST-200 is much higher (around 0.98); and (ii) outliers are generated to be uniformly distributed between -1000 and 1000 (and so many outliers are neither too large, nor too small).] Notice that in this case, P_(t) := basis(U). We used the Bernoulli model to generate the sparse outliers. This means that each entry of the n × d sparse outlier matrix S is nonzero with probability ρ_x independent of all others. The nonzero entries are generated uniformly at random in the interval [−1000, 1000]. We used n = 400, d = 1000, r = 50, and ρ_x = 0.001. The setting stated in BIB005 used ρ_x = 0.01, but we noticed that in this case the ORPCA error saturated at 0.4. To show a case where ORPCA works well, we reduced ρ_x to 0.001. We compare ReProCS, ORPCA and GRASTA. ReProCS-NORST, given earlier in Algorithm 2, was implemented. ORPCA does not use any initialization, while ReProCS does. GRASTA has the option of providing an initial estimate or using the default initialization of zero. In this first experiment we tried both options. The initial subspace estimate for both ReProCS and GRASTA was computed using AltProj applied to the first t_train frames. We experimented with two values of t_train: t_train = 100 and t_train = 200. We label the corresponding algorithms ReProCS-100, ReProCS-200, GRASTA-100, and GRASTA-200. GRASTA with no initialization provided is labeled GRASTA-0. The Monte Carlo averaged subspace recovery error SE(P̂_(t), P_(t)) versus time t plots are shown in Fig. 2. As can be seen, ORPCA works well in this setting while all three versions of GRASTA fail completely. ReProCS-100 also fails. This is because, when t_train = 100, the initial subspace estimate computed using AltProj satisfies SE(P̂_init, P_0) = 1. On the other hand, ReProCS-200 works as well as ORPCA because in this case SE(P̂_init, P_0) ≈ 0.98. We implement GRASTA and ORPCA using code downloaded from https://github.com/andrewssobral/lrslibrary.
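The data model just described is easy to reproduce; here is a short numpy sketch of it (a sketch only, with our variable names, matching the stated n, d, r, ρ_x):

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, r, rho_x = 400, 1000, 50, 0.001

    # low-rank part: L = U V' with i.i.d. N(0, 1/d) entries in both factors
    U = rng.normal(0.0, np.sqrt(1.0 / d), size=(n, r))
    V = rng.normal(0.0, np.sqrt(1.0 / d), size=(d, r))
    L = U @ V.T

    # Bernoulli outlier support; nonzero values uniform in [-1000, 1000]
    support = rng.random((n, d)) < rho_x
    S = np.where(support, rng.uniform(-1000.0, 1000.0, size=(n, d)), 0.0)

    M = L + S   # the observed data matrix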
The regularization parameters for ORPCA were set as λ_1 = 1/√n and λ_2 = 1/√d, according to BIB005 . The ReProCS algorithm parameters are set as suggested in Algorithm 2: K = log(c/ε) = 8, α = Cr log n = 200, ω = 1, ξ = 2/15 = 0.13, and ω_evals = 2ε²λ⁺ = 0.00075. To address a reviewer's comment, we also tried to generate data using the data generation code for GRASTA. However, even in this case, the final subspace recovery error of GRASTA was around 0.8, even when the number of observed data vectors was d = 12000. We even tried changing its multiple tuning parameters, but were unable to find any useful setting. Further, the code documentation does not provide a good guide to tuning the internal optimization parameters for different scenarios, and thus the results here are reported as is. This is possibly because the algorithm parameters on the website are tuned for the video application; that parameter set is not a good setting for synthetic data. Synthetic Data: Time-Varying Subspaces (a). In our second experiment, we assume that the subspace changes every so often and use t_j to denote the j-th change time for j = 1, 2, ..., J, with t_0 := 1 and t_{J+1} := d. As explained earlier, this is necessary for identifiability of all the subspaces. In the first sub-part, we use the data generation model described in the static subspace case, with the outliers and outlier magnitudes generated exactly the same way. For the low-rank part, we simulate the changing subspace as follows. For t ∈ [1, t_1] we generate U_1 as in the first experiment. For t ∈ (t_1, t_2], we use U_2 = exp(γB) U_1, where γ = 10⁻³ and B is a skew-symmetric matrix, and for t ∈ (t_2, d] we use U_3 = exp(γB) U_2. The matrix V is generated exactly as in the first experiment. The results are shown in Fig. 2. We note here that ORPCA and ReProCS-NORST provide good performance, and ORPCA is the best as the number of samples increases. All algorithms were implemented using the same parameters described in the first experiment. Here we used n = 400, d = 6000, r = 50 and ρ_x = 0.001. From the previous experiment, we selected GRASTA-0 and ReProCS-200 since these provide the best performance; the other algorithm parameters are unchanged. Synthetic Data: Time-Varying Subspaces (b). In our final two experiments, we again assume that the subspace changes every so often and use t_j to denote the j-th change time. Thus, in this case, ℓ_t = P_(t) a_t, where P_(t) is an n × r basis matrix with P_(t) = P_j for t ∈ [t_j, t_{j+1}), j = 0, 1, 2, ..., J. We generated P_0 by orthonormalizing an n × r i.i.d. Gaussian matrix. For j ≥ 1, the basis matrices P_j were generated using the model also used in BIB004 , which involves left-multiplying the basis matrix by a rotation matrix, i.e., P_j = exp(δ_j B_j) P_{j−1}, where B_j is a skew-Hermitian matrix (this ensures that P_j′ P_j = I_r) and δ_j controls the amount of subspace change. The matrices B_1 and B_2 are generated as B_1 = (B̃_1 − B̃_1′) and B_2 = (B̃_2 − B̃_2′), where the entries of B̃_1, B̃_2 are generated independently from a standard normal distribution. To obtain the low-rank matrix L from this, we generate the coefficients a_t ∈ R^r as independent zero-mean, bounded random variables; the variances of the (a_t)_i are chosen so that the condition number (of their covariance) is f, and we selected f = 50. We used the following parameters: n = 1000, d = 12000, J = 2, t_1 = 3000, t_2 = 8000, r = 30, δ_1 = 0.001, δ_2 = δ_1.
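The rotation-based subspace change model can be sketched as follows (assuming scipy is available for the matrix exponential; we use a smaller n than the experiments above only to keep the example fast):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)
    n, r, delta = 200, 30, 0.001   # the experiments above use n = 1000

    # P0: orthonormalized i.i.d. standard Gaussian matrix
    P0, _ = np.linalg.qr(rng.standard_normal((n, r)))

    # skew-symmetric generator B = (B_tilde - B_tilde'); expm of a
    # skew-symmetric matrix is orthogonal, so the rotation keeps P' P = I
    B_tilde = rng.standard_normal((n, n))
    B = B_tilde - B_tilde.T
    P1 = expm(delta * B) @ P0

    assert np.allclose(P1.T @ P1, np.eye(r), atol=1e-8)

Small δ produces a nearby subspace, which is how the "slow subspace change" assumption is enforced in these simulations.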
The sparse outlier matrix S was generated using two models: (A) the Bernoulli model (the commonly used model in all RPCA works) and (B) the moving object model [BIB014 , Model G.24]. This model was introduced in BIB010 , BIB014 as one way to generate data with a larger max-outlier-frac-row^α than max-outlier-frac-col. It simulates a person pacing back and forth in a room. The nonzero entries of S were generated uniformly at random from the interval [s_min, x_max] with s_min = 10 and x_max = 20 (all independent). With both models, we generated data to have fewer outliers in the first t_train = 100 frames and more later. This was done to ensure that the batch initialization provides a good initial subspace estimate. With the Bernoulli model, we used ρ_x = 0.01 for the first t_train frames and ρ_x = 0.3 for the subsequent frames. With the moving object model, we used s/n = 0.01, b_0 = 0.01 for the first t_train frames and s/n = 0.05 and b_0 = 0.3 for the subsequent frames. The subspace recovery error plot is shown in Fig. 2(b) (Bernoulli outlier support) and (c) (Moving Object outlier support), while the average ‖L − L̂‖_F/‖L‖_F is compared in Table II. In this table, we compare all RPCA and dynamic RPCA solutions (AltProj, RPCA-GD, ReProCS, ORPCA, GRASTA). We do not compare PCP since it is known from all past literature to be very slow BIB008 , BIB014 . This experiment shows that, since the outlier fraction per row is quite large, the other techniques are not able to obtain meaningful estimates. It also shows that both ReProCS and its offline version are the fastest among all methods that work; they are slower only than ORPCA and GRASTA, which never work in either of these experiments. We initialized ReProCS-NORST using AltProj applied to M_{[1, t_train]} with t_train = 100. AltProj used the true value of r, 10 iterations, and a threshold of 0.01. This, and the choice of δ_1 and δ_2, ensure that SE(P̂_init, P_0) ≈ SE(P_1, P_0) ≈ SE(P_2, P_1) ≈ 0.01. The other algorithm parameters are set as mentioned in the algorithm, i.e., K = log(c/ε) = 8, α = Cr log n = 300, ω = s_min/2 = 5, ξ = s_min/15 = 0.67, and ω_evals = 2ε²λ⁺ = 7.5 × 10⁻⁴. We implement the other algorithms using code downloaded from https://github.com/andrewssobral/lrslibrary. The regularization parameters for ORPCA were set as λ_1 = 1/√n and λ_2 = 1/√d according to BIB005 . AltProj and RPCA-GD were implemented on the complete data matrix M. We must point out that we experimented with applying the batch methods to various sub-matrices of the data matrix, but the performance was not any better; thus we only report the results of the former method. The other known parameters (r for AltProj, the outlier fraction for RPCA-GD) are set using the true values. For both techniques we set the tolerance to 10⁻⁶ and used 100 iterations (as opposed to the default 50) to match ReProCS. Real Video Experiments. We show comparisons on two real videos in this article. For extensive and quantitative comparisons done on the CDnet database, see . We show background recovery results for the Lobby and Meeting Room (or Curtain) datasets in Fig. 3. Here we implement both online and batch algorithms in a similar manner and provide the comparison. [Fig. 3: The meeting room video involves much more significant background changes due to the moving curtains. As can be seen, in this case GRASTA fails. Also, ReProCS-NORST (abbreviated to ReProCS in this figure) is the second fastest after GRASTA and the fastest among solutions that work well.]
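The two error metrics reported above can be computed as follows; this assumes the standard definition SE(P̂, P) := ‖(I − P̂P̂′)P‖_2 (the sine of the largest principal angle) used in this literature:

    import numpy as np

    def SE(P_hat, P):
        # sine of the largest principal angle between the two column spans;
        # both inputs are assumed to have orthonormal columns
        return np.linalg.norm(P - P_hat @ (P_hat.T @ P), 2)

    def rel_frob_err(L_hat, L):
        # relative Frobenius-norm recovery error ||L - L_hat||_F / ||L||_F
        return np.linalg.norm(L - L_hat) / np.linalg.norm(L)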
Parameters were set as r = 40, K = 3, α = 20, and ξ_t = ‖Ψ ℓ̂_{t−1}‖_2 for ReProCS. VII. CONCLUSIONS AND FUTURE DIRECTIONS The original or static RPCA problem as well as the MC problem have been extensively studied in the last decade. However, robust subspace tracking (RST) has not received much attention until much more recently. The same is also true of provably correct solutions for subspace tracking with missing data (or dynamic MC). In BIB015 , BIB016 , a simple and provably correct RST approach was obtained that works with near-optimal tracking delay under simple assumptions: weakened versions of the standard RPCA assumptions, a lower bound on outlier magnitudes, a true data subspace that is either fixed or slowly changing, and a good initialization (obtained via a few iterations of AltProj applied to the first Cr data samples). After initialization, it can tolerate a constant bound on the maximum outlier fraction in any row of later mini-batches of the data matrix. This is better than what any other RPCA solution can tolerate, but it is possible only because of the extra assumptions. As explained earlier, the only way to relax the lower bound on outlier magnitudes is if one could bound the element-wise error of the compressive sensing step. Subspace Tracking: dynamic RPCA, MC, RMC, and undersampled RPCA. Consider dynamic RPCA or robust subspace tracking (RST). Two tasks for future work are (a) replacing the projected-CS / robust regression step, which currently uses simple ℓ1 minimization, by more powerful CS techniques such as those that exploit structured sparsity; and (b) replacing the SVD or projected-SVD in the subspace update step by fully streaming (single-pass) algorithms. Both have been attempted in the past, but without provable guarantees. In the algorithm developed and evaluated for videos in BIB006 , BIB001 , slow (or model-driven) support change of the foreground objects was exploited. The GRASTA approach BIB004 used a stochastic gradient descent approach called GROUSE for the subspace update. Two more difficult open questions include: (a) provably dealing with moving cameras (or, more generally, with small group transformations applied to the observed data), and (b) being able to at least detect sudden subspace change while being robust to outliers. For the video application, this would occur due to sudden scene changes (the camera being turned around, for example). In the ReProCS approaches studied so far, a sudden subspace change would get confused for a very large-support outlier, whereas in subspace tracking with missing data approaches such as GROUSE or PETRELS, all guarantees are for the case of a fixed unknown subspace. Some heuristics for dealing with moving cameras include BIB011 , BIB009 . Two important extensions of the RST problem are dynamic robust matrix completion (RMC), i.e., RST with missing data, and undersampled RST. The latter finds applications in undersampled dynamic MRI. There is an algorithm and a partial guarantee for undersampled RST in BIB002 , BIB006 , BIB007 , but a complete correctness result still does not exist; careful experimental evaluations on real dynamic MRI datasets are missing too. Dynamic RMC finds applications in recommendation system design when the factors governing user preferences can change over time, e.g., as new demographics of users get added, or as more content gets added. In the setting where the number of users is fixed but content gets added with time, we can let m_t be the vector of ratings of content (e.g., movie) t by all users.
This vector will have zeros (corresponding to missing entries) and outliers (due to typographical errors or users' laziness). If the content is fixed but the number of users is allowed to increase over time, m_t can be the vector of ratings by the t-th user. Either of these cases can be dealt with by initially using AltProj on an initial batch dataset. As more users get added, new "directions" will get added to the factors' subspace. This can be detected and tracked using an RST solution. An open question is how to deal with the most practical setting, in which both users and content are allowed to be added with time. Phaseless Robust PCA and Subspace Tracking. A completely open question is whether one can solve the phaseless robust PCA or S+LR problem. In applications such as ptychography, sub-diffraction imaging or astronomy, one can only acquire magnitude-only measurements BIB003 . If the unknown signal or image sequence is well modeled as sparse + low-rank, can this modeling be exploited to recover it from under-sampled phaseless measurements? Two precursors, low rank phase retrieval BIB012 and phase retrieval for a single outlier-corrupted signal BIB013 , have recently been studied. (Robust) subspace clustering and its dynamic extension. An open question is how robust and dynamic robust PCA ideas can be successfully adapted to solve other, more general, related problems. One such problem is subspace clustering, which involves clustering a given dataset into one of K different low-dimensional subspaces. This can be interpreted as a generalization of PCA, which tries to represent a given dataset using a single low-dimensional subspace. Subspace clustering instead uses a union of subspaces to represent the true data, i.e., each data vector is assumed to be generated from one of K possible subspaces. An important question of interest is the following: given that subspace clusters have been computed for a given dataset, if more data vectors come in sequentially, how can one incrementally solve the clustering problem, i.e., either classify the new vector into one of the K subspaces, or decide that it belongs to a new subspace? Also, under what assumptions can one solve this problem if the data were also corrupted by additive sparse outliers? We should point out that RST should not be viewed as a special case of robust subspace clustering since, to our best knowledge, robust subspace clustering solutions do not deal with additive sparse outliers. They only deal with the case where an entire data vector is either an outlier or not. Moreover, all subspace clustering solutions require that the K subspaces be "different" enough, while RST requires the opposite.
A survey of formation control and motion planning of multiple unmanned vehicles <s> 2 <s> To detect obstacles during off-road autonomous navigation, unmanned ground vehicles (UGV's) must sense terrain geometry and composition (terrain type) under day, night, and low-visibility conditions. To sense terrain geometry, we have developed a real-time stereo vision system that uses a Datacube MV-200 and a 68040 CPU board to produce 256×240-pixel range images in about 0.6 seconds/frame. To sense terrain type, we used the same computing hardware with red and near infrared imagery to classify 256×240-pixel frames into vegetation and non-vegetation regions at a rate of five to ten frames/second. This paper reviews the rationale behind the choice of these sensors, describes their recent evolution and on-going development, and summarizes their use in demonstrations of autonomous UGV navigation over the past five years. <s> BIB001 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> 2 <s> We propose a model independent coordination strategy for multi-agent formation control. The main theorem states that under a bounded tracking error assumption our method stabilizes the formation error. We illustrate the usefulness of the method by applying it to rigid body constrained motions, as well as to mobile manipulation. <s> BIB002 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> 2 <s> This paper addresses the problem of cooperative path planning for a fleet of unmanned aerial vehicles (UAVs). The paths are optimized to account for uncertainty/adversaries in the environment by modeling the probability of UAV loss. The approach extends prior work by coupling the failure probabilities for each UAV to the selected missions for all other UAVs. In order to maximize the expected mission score, this stochastic formulation designs coordination plans that optimally exploit the coupling effects of cooperation between UAVs to improve survival probabilities. This allocation is shown to recover real-world air operations planning strategies, and to provide significant improvements over approaches that do not correctly account for UAV attrition. The algorithm is implemented in an approximate decomposition approach that uses straight-line paths to estimate the time-of-flight and risk for each mission. The task allocation for the UAVs is then posed as a mixed-integer linear program that can be solved using CPLEX. <s> BIB003 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> 2 <s> As a distributed solution to multi-agent coordination, consensus or agreement problems have been studied extensively in the literature. This paper provides a survey of consensus problems in multi-agent cooperative control with the goal of promoting research in this area. Theoretical results regarding consensus seeking under both time-invariant and dynamically changing information exchange topologies are summarized. Applications of consensus protocols to multiagent coordination are investigated. Future research directions and open problems are also proposed. <s> BIB004 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> 2 <s> To celebrate the 40th Anniversary of the Oceanic Engineering Society (OES) at the MTS/IEEE OCEANS 2008 Conference in Quebec City a series of review papers were requested from OES technical committee chairs.
In response to that request this paper provides a review of the field of unmanned surface vehicles (USVs) and autonomous surface craft (ASCs). The paper discusses the enabling technologies that have allowed USVs to emerge as a viable platform for marine operations as well as the application areas where they offer value. The paper tracks developments in technology from early systems developed by the author in 1993 through the latest developments and demonstration programs. The future outlook for USV technology is also described. <s> BIB005 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> 2 <s> Efficient maritime navigation through obstructions is still one of the many problems faced by mariners. The increasing traffic densities and average cruise speed of ships also impede the collision avoidance decision making process by reducing the time in which decisions have to be made. It seems logical that the decision making process be computerised and automated as a step towards reducing the risk of collision. Although some studies have focused on this area, the majority did not consider the collision regulations or environmental conditions and many previously proposed methods were idealistic. This study develops a motion planning algorithm that determines an optimal navigation path for ships in close range encounters based on known and predicted traffic and environmental data, with emphasis on the adaptability of the algorithm to be optimised for different criteria or missions. The domain of interest is the 5 nautical mile region around own-ship based on the effective range of most modern navigation radars and identification devices. Several computational constraints have been incorporated into the algorithm and categorised based on safety priority. Collision-free and conformity with collision regulations are the primary constraints that have to be satisfied; followed by secondary or optional mission specific constraints e.g. commensurate with environmental conditions or taking the shortest navigation path. Own-ship speed is considered to be a dynamic property and a function of the engine setting, which is a variable modifiable by the optimisation routine. The change in the ship's momentum as a result of a turning manoeuvre is also included in the model. A modified version of an evolutionary algorithm is adopted to perform the optimisation, where the variables are spatial coordinates and the engine setting at the particular path segment. The navigation path can be optimised for specific criteria by adjusting the weighting on the cost functions that describe the properties of the navigation paths. <s> BIB006 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> 2 <s> A fundamental aspect of autonomous vehicle guidance is planning trajectories. Historically, two fields have contributed to trajectory or motion planning methods: robotics and dynamics and control. The former typically have a stronger focus on computational issues and real-time robot control, while the latter emphasize the dynamic behavior and more specific aspects of trajectory performance. Guidance for Unmanned Aerial Vehicles (UAVs), including fixed- and rotary-wing aircraft, involves significant differences from most traditionally defined mobile and manipulator robots.
Qualities characteristic to UAVs include non-trivial dynamics, three-dimensional environments, disturbed operating conditions, and high levels of uncertainty in state knowledge. Otherwise, UAV guidance shares qualities with typical robotic motion planning problems, including partial knowledge of the environment and tasks that can range from basic goal interception, which can be precisely specified, to more general tasks like surveillance and reconnaissance, which are harder to specify. These basic planning problems involve continual interaction with the environment. The purpose of this paper is to provide an overview of existing motion planning algorithms while adding perspectives and practical examples from UAV guidance approaches. <s> BIB007 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> 2 <s> In recent years unmanned vehicles have grown in popularity, with an ever increasing number of applications in industry, the military and research within air, ground and marine domains. In particular, the challenges posed by unmanned marine vehicles in order to increase the level of autonomy include automatic obstacle avoidance and conformance with the Rules of the Road when navigating in the presence of other maritime traffic. The USV Master Plan which has been established for the US Navy outlines a list of objectives for improving autonomy in order to increase mission diversity and reduce the amount of supervisory intervention. This paper addresses the specific development needs based on notable research carried out to date, primarily with regard to navigation, guidance, control and motion planning. The integration of the International Regulations for Avoiding Collisions at Sea within the obstacle avoidance protocols seeks to prevent maritime accidents attributed to human error. The addition of these critical safety measures may be key to a future growth in demand for USVs, as they serve to pave the way for establishing legal policies for unmanned vessels. <s> BIB008 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> 2 <s> This paper provides a review on the strategies and methodologies developed in recent years for trajectory/path following, coordination and unified group behavior of a team of unmanned vehicles in terms of application and categorization. The unmanned vehicles being studied have a common mission to maintain group formation and reach their target destinations in either known or unknown environments. The ability for a group of vehicles to follow individual paths is the first critical step in achieving group coordination and originates from path following employing a single vehicle. Once this technique is refined then various algorithm constructs can be explored in order to create efficient and harmonious group coordination, which is based on their originality on whether they are employed in a centralized or decentralized system. In this paper, survey and analysis on the various multi-vehicle applications in formation operation and categorizations of existing works are provided based on over 140 published literatures since 1986. A few challenges and future works are also recognized. <s> BIB009 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> 2 <s> Motion planning is a fundamental research area in robotics. Sampling-based methods offer an efficient solution for what is otherwise a rather challenging dilemma of path planning. 
Consequently, these methods have been extended further away from basic robot planning into further difficult scenarios and diverse applications. A comprehensive survey of the growing body of work in sampling-based planning is given here. Simulations are executed to evaluate some of the proposed planners and highlight some of the implementation details that are often left unspecified. An emphasis is placed on contemporary research directions in this field. We address planners that tackle current issues in robotics. For instance, real-life kinodynamic planning, optimal planning, replanning in dynamic environments, and planning under uncertainty are discussed. The aim of this paper is to survey the state of the art in motion planning and to assess selected planners, examine implementation details and above all shed a light on the current challenges in motion planning and the promising approaches that will potentially overcome those problems. <s> BIB010 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> 2 <s> We present a survey of formation control of multi-agent systems. Focusing on the sensing capability and the interaction topology of agents, we categorize the existing results into position-, displacement-, and distance-based control. We then summarize problem formulations, discuss distinctions, and review recent results of the formation control schemes. Further we review some other results that do not fit into the categorization. <s> BIB011
Depending on its designed mode of operation, an unmanned vehicle can be categorised as an Unmanned Aerial Vehicle (UAV), an Unmanned Ground Vehicle (UGV), an Unmanned Surface Vehicle (USV) or an Autonomous Underwater Vehicle (AUV) BIB009 . For further clarity, UGV refers to a vehicle operating while in contact with the ground BIB001 , while USV refers to an autonomous marine vehicle that navigates on the water surface BIB005 . A number of practical platforms for each of these autonomous vehicle types have already been built and deployed. When comparing the applications of these platforms, one of the common limitations that has been noted is that they are typically small in size and low in capacity, and hence only capable of conducting relatively simple missions. In addition, most present unmanned vehicle platforms have low levels of autonomy, while some are remote-controlled or only semi-autonomous. To help overcome or mitigate these problems, it is often more effective to deploy these relatively small vehicles as a fleet in formation (a multi-vehicle formation system), since, compared with a single larger vehicle, a fleet is able to cover a wider mission area with improved robustness, coordination and fault-tolerant capability. To better deploy multi-vehicle formation systems, extensive formation-related studies have been carried out in recent decades, with formation control being the most actively investigated area. The aim of formation control is to generate appropriate control commands that drive multiple vehicles to satisfy prescribed constraints on their states BIB011 . A large body of this research has focused on consensus-based formation control, which utilises inter-vehicle distance information to allow the formation to retain a certain shape while navigating. More recently, the concept of using flexible formation shapes for collision avoidance has been proposed and studied in a number of papers. However, the focus of these research efforts remains on generating commands for low-level controllers, without high-level decision-making capability. To overcome this deficiency, and thereby promote the utilisation of multi-vehicle formation systems in complex missions, a second research area, cooperative motion planning, has developed in parallel with formation control. By taking into account information such as the mission start and end points and the environmental constraints, cooperative motion planning provides practical guidance information, such as optimised trajectories, that benefits the coordination of multiple vehicles BIB003 . In addition, when performing the planning, apart from the costs routinely considered in conventional planning, such as path length, constraints specific to the formation itself, such as the required formation shape, also need to be considered to facilitate formation control . Figure 1 compares formation control and cooperative motion planning by listing the key factors that need to be considered when designing algorithms. For formation control, in addition to control stability and robustness, vehicle dynamic constraints are important when designing the controller BIB002 ; whereas for cooperative motion planning, safety distance from obstacles, total distance cost, computational time and trajectory smoothness are the key costs when planning the path 10;11 .
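To make the comparison concrete, the sketch below shows how the planning criteria just listed (total distance, safety distance from obstacles and trajectory smoothness) are commonly combined into a single weighted objective for scoring candidate trajectories. The weights, the quadratic safety penalty and all function names are illustrative assumptions made for this survey, not a formulation taken from any of the cited papers.

```python
import numpy as np

def path_cost(path, obstacles, w_dist=1.0, w_safe=1.0, w_smooth=0.5, d_safe=5.0):
    """Illustrative weighted cost for one candidate trajectory.

    path      -- (N, 2) array of waypoints
    obstacles -- (M, 2) array of obstacle positions
    The three terms mirror the planning criteria discussed above: total
    distance, safety distance from obstacles, and trajectory smoothness.
    """
    segments = np.diff(path, axis=0)
    # Total distance: sum of segment lengths.
    dist = np.sum(np.linalg.norm(segments, axis=1))
    # Safety: penalise waypoints that come closer to any obstacle than d_safe.
    d_obs = np.min(np.linalg.norm(path[:, None, :] - obstacles[None, :, :], axis=2), axis=1)
    safety = np.sum(np.maximum(0.0, d_safe - d_obs) ** 2)
    # Smoothness: penalise heading changes between consecutive segments.
    headings = np.unwrap(np.arctan2(segments[:, 1], segments[:, 0]))
    smooth = np.sum(np.abs(np.diff(headings)))
    return w_dist * dist + w_safe * safety + w_smooth * smooth

# A short straight path passing near an obstacle versus a longer detour:
obstacles = np.array([[5.0, 0.2]])
straight = np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 0.0]])
detour = np.array([[0.0, 0.0], [4.0, 3.0], [6.0, 3.0], [10.0, 0.0]])
print(path_cost(straight, obstacles), path_cost(detour, obstacles))
```

Tuning the weights shifts the planner's preference between short, safe and smooth trajectories; formation-specific constraints such as the required shape would enter as additional terms of the same form.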
It should also be noted that, as presented in Figure 1 , the large overlap between these two research topics indicates that formation control and cooperative motion planning share a number of key concepts, and hence should work interactively when implemented in multi-vehicle formation systems. For example, when performing cooperative motion planning, the trajectory for each vehicle should be generated with consideration of the required formation shape so that the shape can be attained efficiently. At the same time, the formation control strategy should be capable of evaluating the features of the generated trajectories and deciding whether to follow each individual path rigorously or to modify it sufficiently to avoid collisions. Based upon the above discussion, the importance of formation control and cooperative motion planning for intelligently and securely operating a multi-vehicle formation system is evident. A large number of high quality survey papers BIB004 have investigated the formation control problem and pointed out several feasible control approaches, including the leader-follower, virtual structure and behaviour-based methods. However, most of these papers review only mobile robot platforms and do not discuss the related technologies applied to unmanned vehicles, which have more complex motion constraints. The absence of a review of cooperative motion planning algorithms has also prevented these papers from providing a thorough vision of the development of multi-vehicle platforms. Therefore, the purpose of this paper is to bridge this gap by reviewing and comparing the different approaches to formation control and cooperative motion planning used by unmanned vehicles over recent decades. The key focus is placed upon the analysis of how the different approaches achieve various formation behaviours, such as formation forming, maintenance and variation. The advantages and disadvantages of each method are analysed to determine common shortcomings and to consider development trends for future research. In addition, since this paper considers unmanned vehicles, which are usually deployed in practical environments and are required to avoid obstacles, collision avoidance is an important criterion when evaluating different methodologies. Specific attention has been given to studies that have developed evasive strategies implementing flexible and varying formation shapes; such strategies provide efficient and effective collision avoidance performance and are therefore generally preferred for practical applications. At this juncture it should be emphasised that motion planning is at times referred to as path planning, and the two topics are closely related. The subtle difference between them is that path planning focuses on a collision-free or safe path from start to goal configuration, disregarding dynamic properties, i.e. velocity and acceleration; whereas motion planning is the superset of path planning, with additional dynamic properties taken into consideration BIB006 . As a result, path planning typically refers to the computation of geometric specifications of robot position and orientation only, while motion planning involves evaluation of linear and angular velocities, taking robot or vehicle dynamics into account. However, because the difference is relatively minor, in many review papers (such as Tam et al. BIB006 , Campbell et al.
BIB008 and Elbanhawi et al. BIB010 ), both terms have been used with the same meaning. In this paper, a similar convention is followed and motion and path planning are not particularly distinguished or compared. For readers seeking a more in-depth differentiation, a paper that specifically discusses the motion planning problem is Goerzen et al. BIB007 . The organisation of this article is as follows. In Section 2, a general overview of unmanned vehicle formations is presented, covering their historical development as well as system architecture. Sections 3 and 4 review the formation control strategies and formation path planning algorithms respectively, with comprehensive comparison and analysis. Section 5 gives the concluding remarks.
A survey of formation control and motion planning of multiple unmanned vehicles <s> Unmanned vehicles formation <s> A new concept of an advanced robot system, ACTRESS (ACTor-based Robots and Equipments Synthetic System), is presented in this paper. ACTRESS is an autonomous and distributed robot system composed of multi robotic elements. Each element is provided with functions to make decisions with understanding the target of tasks, recognizing surrounding environments, acting, and managing its own conditions, and to communicate with any other components. In order to manage multiple elements to achieve any given task targets, the protocol for communication between elements is discussed for cooperative action between arbitrary elements. This paper deals with the conceptual design of ACTRESS, focusing on the methodology for synthesizing the autonomous and distributed system. Also based on an assumption that mobility is the indispensable function for advanced robot systems, an experimental system using micro mice is developed as a primitive example of ACTRESS. <s> BIB001 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Unmanned vehicles formation <s> This paper addresses the development of cooperative rendezvous and cooperative target classification agents in a hierarchical distributed control system for unmanned aerospace vehicles. For cooperative rendezvous, a Voronoi based polygonal path is generated to minimize exposure to radar. The rendezvous agent minimizes team exposure from individual coordination functions while satisfying stringent timing constraints. For cooperative target classification, templates are developed, optimal trajectories are followed, and adjacent vehicles are assigned to view at complementary aspect angles. Views are statistically combined to maximize the probability of correct target classification over various aspect angles. <s> BIB002 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Unmanned vehicles formation <s> In this paper, we propose hierarchical control architecture for a system that does border or perimeter patrol using unmanned air vehicles (UAVs). By control architecture we mean a specific way of organizing the motion control and navigation functions performed by the UAV. It is convenient to organize the functions into hierarchical layers. This way, a complex design problem is partitioned into a number of more manageable subproblems that are addressed in separate layers. This paper discusses vehicle control requirements and maps them onto layered control architecture. The formalization of the hierarchy is accomplished in terms of the specific functions accomplished by each layer and of the interfaces between layers. The implementation of the layers is discussed and illustrative examples are provided. <s> BIB003 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Unmanned vehicles formation <s> Urban Search And Rescue is a growing area of robotic research. The RoboCup Federation has recognized this, and has created the new Virtual Robots competition to complement its existing physical robot and agent competitions. In order to successfully compete in this competition, teams need to field multi-robot solutions that cooperatively explore and map an environment while searching for victims. This paper presents the results of the first annual RoboCup Rescue Virtual competition.
It provides details on the metrics used to judge the contestants as well as summaries of the algorithms used by the top four teams. This allows readers to compare and contrast these effective approaches. Furthermore, the simulation engine itself is examined and real-world validation results on the engine and algorithms are offered. <s> BIB004 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Unmanned vehicles formation <s> This paper presents the multirobot team RIMRES (Reconfigurable Integrated Multirobot Exploration System), which is comprised of a wheeled rover, a legged scout, and several immobile payload items. The heterogeneous systems are employed to demonstrate the feasibility of reconfigurable and modular systems for lunar polar crater exploration missions. All systems have been designed with a common electromechanical interface, allowing to tightly interconnect all these systems to a single system and also to form new electromechanical units. With the different strengths of the respective subsystems, a robust and flexible overall multirobot system is built up to tackle the, to some extent, contradictory requirements for an exploration mission in a crater environment. In RIMRES, the capability for reconfiguration is explicitly taken into account in the design phase of the system, leading to a high degree of flexibility for restructuring the overall multirobot system. To enable the systems' capabilities, the same distributed control software architecture is applied to rover, scout, and payload items, allowing for semiautonomous cooperative actions as well as full manual control by a mission operator. For validation purposes, the authors present the results of two critical parts of the aspired mission, the deployment of a payload and the autonomous docking procedure between the legged scout robot and the wheeled rover. This allows us to illustrate the feasibility of complex, cooperative, and autonomous reconfiguration maneuvers with the developed reconfigurable team of robots. <s> BIB005 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Unmanned vehicles formation <s> The use of cooperative multirobot teams in urban search and rescue (USAR) environments is a challenging yet promising research area. For multirobot teams working in USAR missions, the objective is to have the rescue robots work effectively together to coordinate task allocation and task execution between different team members in order to minimize the overall exploration time needed to search disaster scenes and to find as many victims as possible. This paper presents the development of a multirobot cooperative learning approach for a hierarchical reinforcement learning (HRL) based semiautonomous control architecture in order to enable a robot team to learn cooperatively to explore and identify victims in cluttered USAR scenes. The proposed cooperative learning approach allows effective task allocation among the multirobot team and efficient execution of the allocated tasks in order to improve the overall team performance. Human intervention is requested by the robots when it is determined that they cannot effectively execute an allocated task autonomously. Thus, the robot team is able to make cooperative decisions regarding task allocation between different team members (robots and human operators) and to share experiences on execution of the allocated tasks.
Extensive results verify the effectiveness of the proposed HRL-based methodology for multi-robot cooperative exploration and victim identification in USAR-like scenes. <s> BIB006 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Unmanned vehicles formation <s> This paper describes the development of a robot prototype for intervention, sampling, and situation awareness in CBRN (chemical, biological, radiological, and nuclear) missions. It outlines the mission requirements, design specifications, the solutions that were developed and integrated, and the final tests done. The solution addresses one of the most important mission requirements in CBRN scenarios: the capability to decontaminate the robot once it has been used in real missions. As microdoses of CBRN contaminants are sufficient to cause significant damage to human beings, prevention of robot contamination is always of top priority. If there is a potential danger of real contamination, it can only be removed by effective decontamination. The way to deal with this problem imposes significant design conditions; the proposed design allows an easy and fast decontamination of the robot. The work presents a new way to approach this kind of robot, based on modular component architecture over a robot operating system that permits the attachment and detachment of robot components via unique electromechanical interfaces. The resulting modular robot introduces an innovative kinematic solution that can be dynamically configured for the different mission requirements. <s> BIB007
The concept of formation is inspired by natural animal behaviours such as bird flocking or fish schooling, where a number of animals adopt certain formations to enhance the survival of the individuals within a group strategy. By mimicking animal formation behaviour, groups of unmanned vehicles can be deployed in formation to accomplish complex tasks and improve the level of system autonomy BIB002 . In the 1980s, multirobot formation systems became a pioneering research field. Typical work included Fukuda's reconfigurable robot system, in which the shape of a robot formation can be adjusted depending on task requirements , and the ACTor-based Robots and Equipments Synthetic System (ACTRESS), a system architecture allowing multiple robots to cooperatively accomplish tasks, developed by The Institute of Physical and Chemical Research, Japan BIB001 . As the technology matured, the concepts developed for multirobot systems paved the way for the utilisation of multiple unmanned vehicle platforms in real-world applications. One crucial application is rescue missions carried out by UGV formations in disaster areas to minimise exploration time and reduce the risk of further casualties BIB004 BIB006 BIB007 . Similarly, considerable effort has been put into deployments including area mapping 25;26 and border patrol and surveillance BIB003 . In addition, some highly task-oriented missions make use of multiple unmanned vehicles in special cases, such as lunar polar crater exploration missions conducted using a wheeled UGV, a legged scout and several immobile payload items BIB005 . It is important to mention that deployment of USV formations on an equivalent scale has not been seen in recent decades. However, this does not diminish the potential impact of using multiple USVs to accomplish maritime activities in the future. A report published by the U.S. Department of Defense (DoD) has addressed the importance of collaboration between multiple manned vessels and USVs, with the primary aim of extending hydrographic coverage to areas that human operations cannot reach 1 . Figure 2a shows an example of how manned and unmanned vessels perform a sea mapping operation: the manned surface vehicle in the middle acts as the leader vehicle to guide the two USVs conducting the mission. Compared with single vessel operation, the dimensions of the area being explored are significantly increased. In fact, due to the nature of their surface operations, USVs are expected to play more important roles in large-scale cross-platform cooperation across different unmanned vehicles. One potential utilisation is the cooperation of USVs with other unmanned vehicles to form an unmanned system network (shown in Figure 2b ). The USV is unique in the sense that it is able to communicate with both above-water and underwater vehicles. In the cooperative formation deployment of multiple unmanned vehicles, the USV can work as an interchange station such that real-time information gathered by one USV is distributed to other vehicles to improve communication efficiency .
A survey of formation control and motion planning of multiple unmanned vehicles <s> System architecture of multi-vehicle formation <s> Despite more than a decade of experimental work in multi-robot systems, important theoretical aspects of multi-robot coordination mechanisms have, to date, been largely untreated. To address this issue, we focus on the problem of multi-robot task allocation (MRTA). Most work on MRTA has been ad hoc and empirical, with many coordination architectures having been proposed and validated in a proof-of-concept fashion, but infrequently analyzed. With the goal of bringing objective grounding to this important area of research, we present a formal study of MRTA problems. A domain-independent taxonomy of MRTA problems is given, and it is shown how many such problems can be viewed as instances of other, well-studied, optimization problems. We demonstrate how relevant theory from operations research and combinatorial optimization can be used for analysis and greater understanding of existing approaches to task allocation, and show how the same theory can be used in the synthesis of new approaches. <s> BIB001 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> System architecture of multi-vehicle formation <s> For a 3-D underwater workspace with a variable ocean current, an integrated multiple autonomous underwater vehicle (AUV) dynamic task assignment and path planning algorithm is proposed by combining the improved self-organizing map (SOM) neural network and a novel velocity synthesis approach. The goal is to control a team of AUVs to reach all appointed target locations for only one time on the premise of workload balance and energy sufficiency while guaranteeing the least total and individual consumption in the presence of the variable ocean current. First, the SOM neuron network is developed to assign a team of AUVs to achieve multiple target locations in 3-D ocean environment. The working process involves special definition of the initial neural weights of the SOM network, the rule to select the winner, the computation of the neighborhood function, and the method to update weights. Then, the velocity synthesis approach is applied to plan the shortest path for each AUV to visit the corresponding target in a dynamic environment subject to the ocean current being variable and targets being movable. Lastly, to demonstrate the effectiveness of the proposed approach, simulation results are given in this paper. <s> BIB002 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> System architecture of multi-vehicle formation <s> Time-varying formation control problems for unmanned aerial vehicle (UAV) swarm systems with switching interaction topologies are studied. Necessary and sufficient conditions for UAV swarm systems with switching interaction topologies to achieve predefined time-varying formations are proposed. Based on the common Lyapunov functional approach and algebraic Riccati equation technique, an approach to design the formation protocol is presented. An explicit expression of the formation reference function is derived to describe the macroscopic movement of the whole UAV formation. A quadrotor formation platform consisting of four quadrotors is introduced. Outdoor experiments are performed to demonstrate the effectiveness of the theoretical results.
<s> BIB003 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> System architecture of multi-vehicle formation <s> Multi-robot systems (MRS) are a group of robots that are designed aiming to perform some collective behavior. By this collective behavior, some goals that are impossible for a single robot to achieve become feasible and attainable. There are several foreseen benefits of MRS compared to single robot systems such as the increased ability to resolve task complexity, increasing performance, reliability and simplicity in design. These benefits have attracted many researchers from academia and industry to investigate how to design and develop robust versatile MRS by solving a number of challenging problems such as complex task allocation, group formation, cooperative object detection and tracking, communication relaying and self-organization to name just a few. One of the most challenging problems of MRS is how to optimally assign a set of robots to a set of tasks in such a way that optimizes the overall system performance subject to a set of constraints. This problem is known as Multi-robot Task Allocation (MRTA) problem. MRTA is a complex problem especially when it comes to heterogeneous unreliable robots equipped with different capabilities that are required to perform various tasks with different requirements and constraints in an optimal way. This chapter provides a comprehensive review on challenging aspects of MRTA problem, recent approaches to tackle this problem and the future directions. <s> BIB004 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> System architecture of multi-vehicle formation <s> Most of the robotic systems are designed to move and perform tasks in a variety of environments. Some of these environments are controllable and well-defined, and the tasks to be performed are generally everyday ones. However, exploration missions also enclose hard constraints such as driving vehicles to many locations in a surface of several kilometres to collect and/or analyse interesting samples. Therefore, a critical aspect for the mission is to optimally (or sub-optimally) plan the path that a robot should follow while performing scientific tasks. In this paper, we present up2ta, a new AI planner that interleaves path-planning and task-planning for mobile robotics applications. The planner is the result of integrating a modified PDDL planner with a path-planning algorithm, combining domain-independent heuristics and a domain-specific heuristic for path-planning. Then, up2ta can exploit capabilities of both planners to generate shorter paths while performing scientific tasks in an efficient ordered way. The planner has been tested in two domains: an exploration mission consisting of pictures acquisition, and a more challenging one that includes samples delivering. Also, up2ta has been integrated and tested in a real robotic platform for both domains. A planner for mobile robotics applications is proposed. Integrating task-planning and path-planning provides several advantages. Using specific and domain independent heuristics improves the solutions generated.
<s> BIB005 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> System architecture of multi-vehicle formation <s> An Autonomous Underwater Vehicle (AUV) needs to acquire a certain degree of autonomy for any particular underwater mission to fulfill the mission objectives successfully and ensure its safety in all stages of the mission in a large scale operating field. In this paper, a novel combinatorial conflict-free task assignment strategy, consisting of an interactive engagement of a local path planner and an adaptive global route planner, is introduced. The method is established upon the heuristic search potency of the Particle Swarm Optimisation (PSO) algorithm to address the discrete nature of the routing-task assignment approach and the complexity of the NP-hard path planning problem. The proposed hybrid method is highly efficient for having a reactive guidance framework that guarantees successful completion of missions specifically in cluttered environments. To examine the performance of the method in a context of mission productivity, mission time management and vehicle safety, a series of simulation studies are undertaken. The results of simulations declare that the proposed method is reliable and robust, particularly in dealing with uncertainties, and it can significantly enhance the level of vehicle's autonomy by relying on its reactive nature and capability of providing fast feasible solutions. <s> BIB006 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> System architecture of multi-vehicle formation <s> This paper provides a novel distributed predictive controller with guaranteed stability to maintain formation between mobile robots during their motion along a desired path and assure no collisions with obstacles or other adjacent robots, while data is exchanged among them via a packet-delaying communication network. First, the closed-loop system dynamics are described as a delayed differential equation with tunable parameters. Then, these adjustable gains are determined synchronously in each agent by the proposed predictive strategy such that a desirable formation is achieved. The efficiency and applicability of the suggested scheme are demonstrated by simulation results. <s> BIB007 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> System architecture of multi-vehicle formation <s> The cooperative control of marine vehicles finds wide applications in many marine missions and tasks. This paper investigates the receding horizon formation tracking control problem of a fleet of underactuated autonomous underwater vehicles (AUVs), in which the follower AUVs are required to track the leader with prescribed formation pattern, and the control inputs of the follower AUVs are subject to practical constraints. An auxiliary stabilizable control law is first designed, based on which a novel optimization problem is proposed and a new receding horizon control (RHC) algorithm is designed to generate control inputs. The theoretical feasibility conditions of the RHC-based tracking algorithm and the stability conditions of the closed-loop systems are provided. Simulation studies are conducted, and the simulation results verify the effectiveness of the proposed algorithm and theoretical results.
<s> BIB008 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> System architecture of multi-vehicle formation <s> An integrated biologically inspired self-organizing map (SOM) algorithm is proposed for task assignment and path planning of an autonomous underwater vehicle (AUV) system in 3-D underwater environments with obstacle avoidance. The algorithm embeds the biologically inspired neural network (BINN) into the SOM neural networks. The task assignment and path planning aim to arrange a team of AUVs to visit all appointed target locations, while assuring obstacle avoidance without speed jump. The SOM neuron network is developed to assign a team of AUVs to achieve multiple target locations in underwater environments. Then, in order to avoid obstacles and speed jump for each AUV that visits the corresponding target location, the BINN is utilized to update weights of the winner of SOM, and achieve AUVs path planning and effective navigation. The effectiveness of the proposed hybrid model is validated by simulation studies. <s> BIB009
A generic hierarchical formation system architecture has been proposed by Liu and Bucknall 30 , as displayed in Figure 3 . The structure consists of three layers, i.e. the Task Management Layer, the Path Planning Layer and the Task Execution Layer 31 . The Task Management Layer allocates missions to individual vehicles based upon the criteria of maximum overall performance and minimum mission time . A mission can generally be defined as a set of waypoints including the mission start point and end point. Gerkey et al. BIB001 and Khamis et al. BIB004 provide comprehensive reviews of multirobot task allocation, listing the dominant methodologies. It should also be noted that, owing to the popularity of neural networks in solving robotics-related problems, in recent years a large amount of work has used artificial neural networks (ANNs) such as the self-organising map (SOM) to address multi-task allocation for unmanned vehicles; a minimal sketch of this idea is given at the end of this subsection. For example, Zhu et al. BIB002 used the SOM to plan tasks for multi-AUV systems and developed a velocity synthesis method for path planning according to the assigned tasks. Faigl and Hollinger 36 also applied the SOM to AUV systems and specifically investigated its application in data collection missions. Liu and Bucknall 37 expanded the utilisation of the SOM to USV platforms and integrated the potential field into the SOM to achieve collision avoidance functionality. According to mission requirements, the second layer, i.e. the Path Planning Layer, plans feasible trajectories for the formation. This layer comprises three sub-modules: the real-time trajectory modification module, the data acquisition module and the cooperative path planning module. Among them, the cooperative path planning module is the core of the system and determines the overall optimised path for each vehicle. However, since a number of uncertainties may arise along the trajectory in practical applications, the real-time trajectory modification module is added so that the formation is able to deal with emergency situations such as a suddenly appearing obstacle. A good example of integrating path planning capability with task-planning requirements can be found in Munoz et al. BIB005 , where a unified framework has been proposed for exploration missions. Also, in Mahmoudzadeh et al. BIB006 , a novel combinatorial conflict-free task assignment and path planning strategy has been proposed for large-scale underwater missions, and based upon such a strategy, Zhu et al. BIB009 incorporated a biologically inspired neural network (BINN) into the task-allocation algorithm to address the dynamic constraints of the vehicles when generating the path. Generated paths are then passed down to the Task Execution Layer. This layer has a direct connection with the propulsion system of the unmanned vehicle and generates the control laws. In order to improve system performance, real-time information, i.e. vehicle velocity and position, is fed back to the upper layer to modify the trajectory, which closes the control loop. Dominant control strategies include the following: Dong et al. BIB003 developed an approach adopting switching interaction topologies to solve the time-varying formation control problem for UAVs; Yamchi et al. BIB007 proposed a distributed predictive controller which improves system stability and avoids collisions en route for mobile robots; and Li et al.
BIB008 improved the conventional receding horizon formation control scheme to achieve stabilised tracking performance for AUVs.
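To make the SOM-based task allocation idea described above concrete, the following minimal sketch assigns target locations to vehicles by competitive learning: one neuron per vehicle is initialised at that vehicle's position, and the winning neuron and its index neighbours are pulled towards each presented target. The neighbourhood function, learning-rate schedule and use of plain index distance are generic textbook choices; the sketch deliberately omits the workload balancing, velocity synthesis and BINN extensions of the cited works.

```python
import numpy as np

def som_task_assignment(vehicles, targets, lr=0.5, sigma=1.0, epochs=20, seed=0):
    """Minimal SOM-style task assignment (one neuron per vehicle).

    Target locations are presented as inputs; the winning neuron (closest
    in Euclidean distance) and its neighbours move towards the target.
    After training, each target is assigned to the nearest neuron/vehicle.
    """
    rng = np.random.default_rng(seed)
    weights = vehicles.astype(float).copy()
    idx = np.arange(len(weights))
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs        # shrink updates over time
        for t in rng.permutation(len(targets)):
            x = targets[t]
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighbourhood over neuron indices.
            h = np.exp(-((idx - winner) ** 2) / (2.0 * sigma ** 2))
            weights += lr * decay * h[:, None] * (x - weights)
    return [int(np.argmin(np.linalg.norm(weights - x, axis=1))) for x in targets]

vehicles = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
targets = np.array([[1.0, 1.0], [9.0, 1.0], [5.0, 7.0], [2.0, 0.5]])
print(som_task_assignment(vehicles, targets))  # vehicle index chosen per target
```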
A survey of formation control and motion planning of multiple unmanned vehicles <s> Leader-follower formation control <s> The problem of deriving navigation strategies for a fleet of autonomous mobile robots moving in formation is considered. Here each robot is represented by a particle with a spherical effective spatial domain and a specified cone of visibility. The global motion of each robot in the world space is described by the equations of motion of the robot's center of mass. First, methods for formation generation are discussed. Then, simple navigation strategies for robots moving in formation are derived. A sufficient condition for the stability of a desired formation pattern for a fleet of robots, each equipped with the navigation strategy based on nearest neighbor tracking, is developed. The dynamic behavior of robot fleets consisting of three or more robots moving in formation in a plane is studied by means of computer simulation. <s> BIB001 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Leader-follower formation control <s> In this paper, we investigate the problem of inter-robot collision avoidance in multiple mobile robot formation control. Two methodologies are utilized, namely Virtual Robot tracking by [Jongusuk and Mita, 2001] and l-l control by [Desai et al., 1998] to establish formation and avoid collision among robots. We point out that the framework in Virtual Robot tracking is potentially subject to collision among robots. This drawback is overcome in our design by incorporating a different reactive scheme in the incident possibility of collision. To prove the advantages of our framework, we demonstrate in simulation the case of three robots moving in formation and avoiding inter-robot collisions. <s> BIB002 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Leader-follower formation control <s> The paper deals with leader-follower formations of nonholonomic mobile robots, introducing a formation control strategy alternative to those existing in the literature. Robots' control inputs are forced to satisfy suitable constraints that restrict the set of leader possible paths and admissible positions of the follower with respect to the leader. A peculiar characteristic of the proposed strategy is that the follower position is not rigidly fixed with respect to the leader but varies in proper circle arcs centered in the leader reference frame. <s> BIB003 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Leader-follower formation control <s> In the future, it may be possible to employ large numbers of autonomous marine vehicles to perform tedious and dangerous tasks, such as minesweeping. Hypothetically, groups of vehicles may leverage their numbers by cooperating. A fundamental form of cooperation is to perform tasks while maintaining a geometric formation. The formation behavior can then enable other cooperative behaviors. In this paper, we describe a leader-follower formation-flying control algorithm. This algorithm can be applied to one-, two-, and three-dimensional formations, and contains a degree of built-in robustness. Simulations and experiments are described that characterize the performance of the formation control algorithm. The experiments utilized surface craft that were equipped with an acoustic navigation and communication system, representative of the technologies that constrain the operation of underwater autonomous vehicles.
The simulations likewise included the discrete-time nature of the communication and navigation. <s> BIB004 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Leader-follower formation control <s> In this paper, we presented a review on the current control issues and strategies on a group of unmanned autonomous vehicles/robots formation. Formation control has broad applications and becomes an active research topic in the recent years. In this paper, we attempt to review the key issues in formation control with a focus on the main control strategies for formation control under different kinds of scenarios. Then, we point out some important open questions and the possible future research directions on formation control. This paper contributes with a new and interesting consideration on formation control and its application in distributed parameter systems. We pointed out that formation control should be classified as formation regulation control and formation tracking control, similar to regulator and tracker in conventional control. <s> BIB005 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Leader-follower formation control <s> In recent years, research on formation control has received a lot of attention in robotics. This paper presents a comprehensive review of formation control for multiple mobile robots under different scenarios. Proposals for formation control reviewed include a behavior-based approach, an artificial potential field approach, a leader-follower approach, a virtual structural approach, a cyclic approach, a model predictive control approach, and a distributed control approach. Problems identified in working with each model's theoretical and practical properties are discussed from the perspectives of generality, stability, robustness and safety. <s> BIB006 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Leader-follower formation control <s> This paper presents vision-based control strategies for decentralized stabilization of unmanned vehicle formations. Three leader–follower formation control algorithms, which ensure asymptotic co-ordinated motion, are described and compared. The first algorithm is a full state feedback nonlinear controller that requires full knowledge of the leader's velocities and accelerations. The second algorithm is a robust state feedback nonlinear controller that requires knowledge of the rate of change of the relative position error. Finally, the third algorithm is an output feedback approach that uses a high-gain observer to estimate the derivative of the unmanned vehicles' relative position. Thus, this algorithm only requires knowledge of the leader–follower relative distance and bearing angle. Both data are computed using measurements from a single camera, eliminating sensitivity to information flow between vehicles. Lyapunov's stability theory-based analysis and numerical simulations in a realistic 3D environment show the stability properties of the control methodologies. <s> BIB007 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Leader-follower formation control <s> This paper is concerned with the formation control problem of multiple underactuated surface vessels moving in a leader-follower formation. The formation is achieved by the follower to track a virtual target defined relative to the leader.
A robust adaptive target tracking law is proposed by using neural network and backstepping techniques. The advantage of the proposed control scheme is that the uncertain nonlinear dynamics caused by Coriolis/centripetal forces, nonlinear damping, unmodeled hydrodynamics and disturbances from the environment can be compensated by on line learning. Based on Lyapunov analysis, the proposed controller guarantees the tracking errors converge to a small neighborhood of the origin. Simulation results demonstrate the effectiveness of the control strategy. <s> BIB008 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Leader-follower formation control <s> This paper investigates the leader-follower formation control problem for nonholonomic mobile robots based on a bioinspired neurodynamics based approach. The trajectory tracking control for a single nonholonomic mobile robot is extended to the formation control for multiple nonholonomic mobile robots based on the backstepping technique, in which the follower can track its real-time leader by the proposed kinematic controller. An auxiliary angular velocity control law is proposed to guarantee the global asymptotic stability of the followers and to further guarantee the local asymptotic stability of the entire formation. Also a bioinspired neurodynamics based approach is further developed to solve the impractical velocity jumps problem. The rigorous proofs are given by using Lyapunov theory. Simulations are also given to verify the effectiveness of the theoretical results. <s> BIB009
In the leader-follower control approach, one vehicle is regarded as the group leader, with full access to the overall navigation information, and works as the reference vehicle in the formation. In some cases where system robustness is critical, a virtual leader can be assigned to replace the actual vehicle in the formation BIB005 . The remaining vehicles in the formation are viewed as followers. Followers operate under the guidance of the leader, with the primary aim of retaining the formation shape by maintaining the desired distance from, and pose angle to, the leader. Figure 6 illustrates the leader-follower scheme designed by Wang BIB001 . $L_{ij}$ and $\Psi_{ij}$ are the actual distance and angle between the leader and a follower vehicle, while $L_{ij}^{d}$ and $\Psi_{ij}^{d}$ are the desired distance and angle. The control task is to determine the linear and angular velocities of the follower vehicle that eliminate the distance and angle errors between leader and follower, such that $L_{ij} \rightarrow L_{ij}^{d}$ and $\Psi_{ij} \rightarrow \Psi_{ij}^{d}$ as $t \rightarrow \infty$. Normally, two types of controllers are employed to design the control law: 1) the l-l controller and 2) the l-φ controller. The first focuses on the relative positions between each vehicle in the formation, while the second deals with the distance and angle between leader and follower BIB006 (a simple kinematic sketch of the l-φ scheme is given at the end of this subsection). The leader-follower approach described here is only feasible for formation control in open space, as it only provides a solution to the Type 1 and Type 2 formation maintenance problems. Desai et al. BIB002 improved the leader-follower approach by adding collision avoidance capability to enable control of the formation in a cluttered environment (solving the Type 3 maintenance problem). An obstacle was avoided by letting the vehicle maintain a new desired distance, namely the distance between the vehicle and the obstacle. While the formation was avoiding the obstacle, the formation shape could be adaptively changed, as shown in Figure 7 , and returned to the desired shape after the risk of collision was averted. The work of Wang BIB001 and Desai et al. BIB002 has become the standard approach when the leader-follower method is applied to unmanned vehicle formation platforms, with modifications made according to the specific needs of different platforms. One implementation problem is the vehicle's bounded or constrained control inputs: the inputs are normally subject to a control boundary, meaning the control system requires more reaction time, so the stability of the system can be affected. The authors of BIB003 transformed the physical constraints on the velocity into a geometrical representation. As shown in Figure 8 , the follower's stable point was expanded from a point to an arc, which increases the system stability margin. Peng et al. BIB009 observed that impractically large control torque inputs could occur in conventional leader-follower controllers, which could lead to unstable performance; a bio-inspired neuro-dynamics based controller was developed to reduce the required linear and angular velocities in the initial state and thereby reduce the force and torque inputs. Another issue is system communication. When the leader-follower approach is implemented, robust communication throughout the formation is needed so that the leader and followers can exchange their pose information accurately. Unfortunately, such a communication channel is rarely available in practical applications. Edwards et al.
BIB004 studied the malfunction problem brought about by loss of communication. Orqueda et al. BIB007 proposed a monocular vision system to measure the relative motion between leader and follower; a high-gain observer is used to estimate the derivative of the leader-to-follower distance and bearing angle. Peng et al. BIB008 investigated uncertainties associated with marine surface vehicles, such as unmodelled hydrodynamics and disturbances from the environment, when controlling the formation. An adaptive control law based upon neural networks and backstepping techniques was designed to compensate for these uncertainties through an online learning scheme.
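The kinematic sketch below illustrates the l-φ idea for unicycle-type vehicles: the follower steers towards the point at desired separation $L^{d}$ and relative bearing $\Psi^{d}$ in the leader's frame. The proportional gains and the simple go-to-goal law are assumptions for illustration only; they reproduce neither the feedback-linearising controllers of the cited works nor the treatment of bounded inputs and communication loss discussed above.

```python
import numpy as np

def l_phi_step(leader, follower, L_d, psi_d, k_v=1.0, k_w=2.0, dt=0.1):
    """One step of a simple l-phi leader-follower scheme.

    leader, follower -- (x, y, theta) unicycle poses
    L_d, psi_d       -- desired separation and relative bearing
    Returns the follower pose after applying proportional commands (v, w).
    """
    xl, yl, thl = leader
    xf, yf, thf = follower
    # Desired follower position, defined in the leader's frame.
    xd = xl + L_d * np.cos(thl + psi_d)
    yd = yl + L_d * np.sin(thl + psi_d)
    # Distance error drives linear speed, heading error drives turn rate.
    ex, ey = xd - xf, yd - yf
    v = k_v * np.hypot(ex, ey)
    err = np.arctan2(ey, ex) - thf
    w = k_w * np.arctan2(np.sin(err), np.cos(err))   # wrap to [-pi, pi]
    # Integrate the unicycle kinematics for one time step.
    return (xf + v * np.cos(thf) * dt, yf + v * np.sin(thf) * dt, thf + w * dt)

leader = (0.0, 0.0, 0.0)
follower = (-3.0, -2.0, 0.0)
for _ in range(100):
    follower = l_phi_step(leader, follower, L_d=2.0, psi_d=np.pi)  # 2 m behind
print(follower)  # settles close to the point (-2, 0) behind the leader
```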
A survey of formation control and motion planning of multiple unmanned vehicles <s> Virtual structure formation control <s> A key problem in cooperative robotics is the maintenance of a geometric configuration during movement. To address this problem, the concept of a virtual structure is introduced. Control methods are developed to force an ensemble of robots to behave as if they were particles embedded in a rigid structure. The method was tested both using simulation and experimentation with a set of three robots. Results are presented which demonstrate that this approach is capable of achieving high precision movement which is fault tolerant and exhibits graceful degradation of performance. In addition, this algorithm does not require leader selection as in other cooperative robotic strategies. Finally, the method is highly flexible in the kinds of geometric formations that can be maintained. <s> BIB001 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Virtual structure formation control <s> A key problem in cooperative robotics is the maintenance of a geometric configuration during movement. To address this problem, the concept of a Virtual Structure is introduced. Using this idea, a general control strategy is developed to force an ensemble of robots to behave as if they were particles embedded in a rigid structure. The method was instantiated and tested using both simulation and experimentation with a set of 3 differential drive mobile robots. Results are presented that demonstrate that this approach is capable of achieving high precision movement that is fault tolerant and exhibits graceful degradation of performance. In addition, this algorithm does not require leader selection as in other cooperative robotic strategies. Finally, the method is inherently highly flexible in the kinds of geometric formations that can be maintained. <s> BIB002
Another important formation control approach is the virtual structure method proposed by Tan et al. BIB001 . The virtual structure (VS), as defined in this context, is a collection of elements (unmanned vehicles) which maintain a rigid geometric relationship to each other and to a frame of reference BIB002 . The main concept behind the virtual structure is that, by treating the formation shape as a VS or a rigid body, the formation is maintained by minimising the position error between the VS and the actual formation positions. To achieve this, a bi-directional control scheme is proposed in which the vehicles are controlled by the virtual forces applied through the VS, while the position of the VS is in turn determined by the positions of the vehicles in the formation. The specific control strategy of the virtual structure method mainly involves three stages (see Figure 9 ):

• VS position alignment (stage 1): before moving the formation to the next point, a position error, based upon the projection of the point-to-point error in the x-y coordinates, may occur between the actual positions of the formation and the corresponding positions in the VS. Hence, at this stage, a pre-defined one-to-one mapping between vehicles and VS points is used to minimise the sum of these position errors over all N vehicles, where N is the total number of vehicles in the formation.

• VS movement (stage 2): the displacement of the VS is determined not only by the mission requirement but also by the dynamic characteristics of the vehicles. The displacement needs to be appropriately calculated such that each vehicle can reach its corresponding point in the next time step.

• Formation movement (stage 3): based upon the new position of the VS, each vehicle in the formation can now move towards its new position by referring to its corresponding point in the VS. A control input is generated for each vehicle, and to achieve more precise tracking performance, the vehicle is first controlled to alter its heading to the desired orientation and then transits towards the target point.

Compared with the leader-follower approach, one of the most appealing advantages of the virtual structure method is its increased fault-tolerant capability. In leader-follower control, due to the lack of feedback of the positions of each vehicle in the formation, a faulty vehicle will not be detected by the other vehicles, causing the formation to disintegrate. Such a drawback can be overcome by using the virtual structure approach: it has been proven in Lewis and Tan 54 that the tracking error caused by faulty robots can be compensated for by the other robots in the VS alignment stage, so that the formation can be retained (see Figure 10 ). It should be noted that such formation maintenance is only a temporary solution, as the faulty vehicle has not been repaired; to achieve comprehensive fault-tolerance, some high-level decision process is needed to either change the formation shape or call up a new vehicle to replace the faulty unit. Like the leader-follower approach, a robust communication channel is vital for the virtual structure method, as each vehicle is highly dependent on the exchanged information to obtain real-time navigation data. In the work of Do and Pan 55 , such a problem has been addressed by introducing the communication limitation through a potential function.
Fig. 10: Fault-tolerant formation control by using the virtual structure method. During a robot failure, the other robots adjust their paths to maintain formation. The new formation has the correct formation shape as well as the desired orientation BIB002 .

Suppose the designed controller for the $i$th vehicle is $u_i$, which is calculated not only from the vehicle's own position and velocity, but also from the communication range, described as a potential function $\beta_{ij}$ of the inter-vehicle distance, where $d_{ij}$ is the distance between the $i$th and $j$th formation agents and $d_{com}$ is the predefined communication range. If the distance between two vehicles is larger than the communication range, the potential value $\beta_{ij}$ is zero, and thereby the designed $u_i$ does not depend on the $j$th agent's information. Based on this, further work has been carried out by Do 56 to solve the USV formation control problem: an elliptical shape was adopted to model the dimensions of the vessel, with a circular area centred on the midpoint of the vessel representing the communication range.
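A minimal sketch of such a range-limited potential term is given below. The smooth quadratic-well form is an assumption made for illustration (the exact function of Do and Pan 55 is not reproduced); what matters is the property described above, namely that β_ij vanishes whenever d_ij exceeds d_com.

```python
import numpy as np

def beta(d_ij, d_com, d_des):
    """Illustrative communication-range potential (assumed form).

    Zero whenever the inter-vehicle distance d_ij exceeds the
    communication range d_com, so the controller of vehicle i is
    independent of agents it cannot communicate with; inside the
    range it penalises deviation from the desired spacing d_des.
    """
    if d_ij >= d_com:
        return 0.0
    # Quadratic well around d_des, faded to zero at the range boundary.
    fade = (d_com - d_ij) / d_com
    return fade * (d_ij - d_des) ** 2

# Example: agents 3 m apart, 10 m radio range, 2 m desired spacing.
print(beta(3.0, 10.0, 2.0))   # positive: shapes u_i
print(beta(12.0, 10.0, 2.0))  # 0.0: no coupling term
```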
A survey of formation control and motion planning of multiple unmanned vehicles <s> Behaviour-based formation control <s> In order to achieve formation control, an individual robot adopts the motor schema-based architecture and four primitive behaviors are introduced including: move to goal, keep formation, avoid static obstacle and avoid robot behavior. The behavioral decision to direct the movement of robots is made by the combination of primitive behaviors. The genetic algorithm is used in this paper to solve the problem of selecting control parameters that underlie the behaviors because of its difficulty. The simulation results obtained testify the feasibility of the proposed approach. <s> BIB001 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Behaviour-based formation control <s> This paper explores the application of the behavior-based approach to path planning for multiple mobile robots performing a formation control task in unknown environments. To predict the positions of moving obstacles for the purpose of collision avoidance, parabola prediction model whose parameters are estimated by the recurrence least square algorithm with restricted scale is adopted. Then, on the basis of the task and environment, we adopt five primitive behaviors and design a series of generation functions to generate control parameters for behaviors' combination. Furthermore, as the outputs of these functions can be adjusted according to the current situation, thus robots can achieve a motion strategy by reasonably combining behaviors and the adaptability to the environment is improved. We illustrate the validity of the approach by the simulations. <s> BIB002 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Behaviour-based formation control <s> We investigate formation control of a group of unicycle-type mobile robots at the dynamics level with a little amount of inter-robot communication. A combination of the virtual structure and path-tracking approaches is used to derive the formation architecture. Each individual robot has only position and orientation available for feedback. For each robot, a coordinate transformation is first derived to cancel the velocity quadratic terms. An observer is then designed to globally exponentially/asymptotically estimate the unmeasured velocities. An output feedback controller is designed for each robot. The controller is designed in such a way that the path derivative is left as a free input to synchronize the robots' motion. Simulations illustrate the soundness of the proposed controller. <s> BIB003 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Behaviour-based formation control <s> To celebrate the 40th Anniversary of the Oceanic Engineering Society (OES) at the MTS/IEEE OCEANS 2008 Conference in Quebec City a series of review papers were requested from OES technical committee chairs. In response to that request this paper provides a review of the field of unmanned surface vehicles (USVs) and autonomous surface craft (ASCs). The paper discusses the enabling technologies that have allowed USVs to emerge as a viable platform for marine operations as well as the application areas where they offer value. The paper tracks developments in technology from early systems developed by the author in 1993 through the latest developments and demonstration programs. The future outlook for USV technology is also described. 
<s> BIB004 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Behaviour-based formation control <s> In this paper, we investigate the formation control of multiple mobile robots in an unknown environment with obstacles. A hybrid approach based on leader-follower scheme is proposed for formation control. The approach, using the relative position bias between virtual formation targets and follower robots, includes the active behavior-based control strategy and the virtual target structure. Furthermore, a supervision mechanism is designed to ensure the formation integrity. Real-world experiments are performed in both the formation control and the obstacle avoidance, using three nonholonomic mobile robots, and the results demonstrate the feasibility and validity of the proposed control approach. <s> BIB005
Behaviour-based formation control was first proposed by Balch and Arkin 57 . It solves the formation control problem by using a hybrid, vector-weighted control function, which is able to generate the control command for various kinds of formation missions. For example, according to the general mission requirements, four different control schemes (behaviours) were developed: move-to-goal ($u_{MG}$), avoid-static-obstacle ($u_{AO}$), avoid-robot ($u_{AR}$) and maintain-formation ($u_{MF}$). Each scheme was assigned a gain value according to the specific mission or traffic environment, and the final control scheme was determined as the weighted combination of these behaviours:

$u = a_1 u_{MG} + a_2 u_{AO} + a_3 u_{AR} + a_4 u_{MF}$

where $a_1$, $a_2$, $a_3$, $a_4$ are the weighting gains for the controllers, with a high gain value representing high importance for the corresponding behaviour. By implementing behaviour-based formation control, not only formation generation and keeping but also collision avoidance can be solved simultaneously, which makes this control approach superior to the others in terms of practical application. However, in essence, the designed controller is not based upon the kinematic/dynamic characteristics of the vehicles; thus the mathematical proof of system stability is highly complex, which makes it hard to theoretically justify the performance of this approach BIB003 . Despite this, behaviour-based formation control is still of great importance, and a number of studies have adopted the approach. In the work of Cao et al. BIB001 , the genetic algorithm was integrated with behaviour-based formation control to assist in determining the weighting gain values of each behaviour. The simulation results show that, besides improved control performance, the formation also presented a certain adaptability in an unknown environment by optimising the weighting gains defined above. Later, Cao et al. BIB002 investigated formation control in an unknown environment with moving obstacles. A prediction model based upon the recurrence least square algorithm was used to estimate the position of a moving obstacle, and a new behaviour, named the random behaviour, was established to operate in conjunction with the conventional four behaviours to handle the unstable states occurring in a cluttered environment. Since it is hard to mathematically analyse formation stability when using the behaviour-based method, a hybrid control scheme including both the leader-follower and the behaviour-based methods was proposed by Yang et al. BIB005 . The formation was generated and maintained by the leader-follower scheme, while the behaviour-based scheme specifically focused on the motion planning of the individual vehicles. A supervision mechanism was built between the leader and the followers such that the formation integrity can be ensured when the number of controlled vehicles changes; the supervision is achieved by establishing an interconnection between the leader and each follower so that the leader can monitor the status of the followers at all times.
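The weighted behaviour blending can be sketched in a few lines of Python. The unit-vector behaviour primitives and the gain values below are assumptions for illustration, not the schemes of Balch and Arkin 57 ; the structure, however, mirrors the combination $u = a_1 u_{MG} + a_2 u_{AO} + a_3 u_{AR} + a_4 u_{MF}$ given above.

```python
import numpy as np

def blended_command(pos, goal, obstacles, peers, slot, a=(1.0, 1.5, 1.2, 0.8)):
    """Weighted behaviour blending: u = a1*u_MG + a2*u_AO + a3*u_AR + a4*u_MF.

    pos: vehicle position; goal: mission goal; obstacles/peers: lists of
    positions; slot: this vehicle's assigned formation position.
    The unit-vector behaviours and gains a are illustrative assumptions.
    """
    def unit(v):
        n = np.linalg.norm(v)
        return v / n if n > 1e-9 else np.zeros_like(v)

    u_MG = unit(goal - pos)                                   # move-to-goal
    u_AO = sum((unit(pos - o) / max(np.linalg.norm(pos - o), 0.1)
                for o in obstacles), np.zeros(2))             # avoid-static-obstacle
    u_AR = sum((unit(pos - p) / max(np.linalg.norm(pos - p), 0.1)
                for p in peers), np.zeros(2))                 # avoid-robot
    u_MF = unit(slot - pos)                                   # maintain-formation
    return a[0] * u_MG + a[1] * u_AO + a[2] * u_AR + a[3] * u_MF

u = blended_command(np.array([0., 0.]), np.array([10., 0.]),
                    [np.array([5., 0.5])], [np.array([0., 1.5])],
                    np.array([0., -1.]))
```

Raising a gain a_k makes the corresponding behaviour dominate, which is exactly the tuning problem the genetic algorithm of Cao et al. BIB001 addresses.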
A survey of formation control and motion planning of multiple unmanned vehicles <s> Discussion on formation control strategies <s> In this paper, a hybrid fault detection, isolation, and recovery (FDIR) methodology is developed for a team of unmanned vehicles which takes advantage of the cooperative nature of the system to accomplish the desired mission in presence of failures. The proposed methodology is hybrid and consists of a low level (agent level) and a high level (team level) FDIR. The high level FDIR is formulated in the discrete-event system (DES) supervisory control framework, whereas the low level FDIR uses the classical control techniques. By properly integrating the two FDIR components, a larger class of faults can be detected and isolated when compared to results in the literature. A reconfiguration strategy is also designed so that the team is recovered from faults. Simulation results are provided to elucidate the efficacy of the proposed approach. <s> BIB001 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Discussion on formation control strategies <s> Abstract This paper studies the target aggregation problem for a class of nonlinear multi-agent systems with the time varying interconnection topology. The general neighboring rule-based linear cooperative protocol is developed and a sufficient aggregation condition is derived. Moreover, it is shown that in the presence of agent faults, the target point is still reached by adjusting some weights of the cooperative protocol without changing the structure of the topology. An unmanned aerial vehicle team example illustrates the efficiency of the proposed approach. <s> BIB002 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Discussion on formation control strategies <s> This paper introduces and develops an optimal hybrid fault recovery methodology for a team of unmanned vehicles by taking advantage of the cooperative nature of the team to accomplish the desired mission requirements in presence of faults/failures. The proposed methodology is developed in a hybrid framework that consists of a low-level (an agent level and a team level) and a high-level (discrete-event system level) fault diagnosis and recovery modules. A high-level fault recovery scheme is proposed within the discrete-event system (DES) supervisory control framework, whereas it is assumed that a low-level fault recovery designed based on classical control techniques is already available. The low-level recovery module employs information on the detected and estimated fault and modifies the controller parameters to recover the team from the faulty condition. By taking advantage of combinatorial optimization techniques, a novel reconfiguration strategy is proposed and developed at the high-level so that the faulty vehicles are recovered with minimum cost to the team. A case study is provided to illustrate and demonstrate the effectiveness of our proposed approach for the icing problem in unmanned aerial vehicles, which is a well-known structural problem in the aircraft industry. <s> BIB003
In Table I , three main formation control strategies are summarised and compared. From the deployment platforms' perspective, it shows that the most widely adopted strategy is the leader-follower approach, which has been applied not only on mobile robot platforms but on every kind of unmanned vehicle platform. The primary reason for such wide scale deployment is probably that the leader-follower approach is relatively simple to design and implement. The approach is built upon the common concept used when managing a group, i.e. a leader is selected to supervise the group while the other members follow the leader's behaviour. Therefore, by using the leader-follower approach, the formation relationship is more explicit than with other approaches. Also, as mentioned in Section 2.2.1, the leader-follower approach adopts a centralised communication strategy, which requires vehicles in the formation to establish connections only with the leader. The overall amount of exchanged information is much less than with a decentralised approach, and as a consequence the communication efficiency is much higher. However, the primary disadvantage of the leader-follower approach is its high dependence on the leader vehicle's performance: if the leader malfunctions or the communication between the leader and a follower is disrupted, the formation is hard to control and maintain. The virtual structure strategy provides better performance in terms of formation maintenance, as the formation is designed to follow the rigid-body virtual structure. However, such good performance in formation keeping is not beneficial for formation modification: changing the formation requires a re-design of the virtual structure, which has the potential to increase the computational burden on the formation. This inflexibility ultimately limits the capability for collision avoidance with obstacles, making the virtual structure an unsuitable option for Type 3 formation maintenance (shown in the 'Formation maintenance type' column in Table I ). The behaviour-based control methodology appears to be the most adaptable approach, as it is able to accomplish a number of different mission requirements through one control command, but the lack of system stability analysis makes it unsuitable for large scale utilisation. As regards future development, a hybrid control strategy appears to be the trend, since no single solution is appropriate for all scenarios. A hybrid approach can be developed such that in open space, where stabilisation of the system is the priority, the leader-follower and/or the virtual-leader method is used, and when the formation is navigating in a complex environment, the behaviour-based method takes over control. Another important development for formation control will be the integration with fault-tolerant control. One of the benefits gained from deploying unmanned vehicles as a formation is the improved system robustness that comes to the fore if and when vehicles in the formation fail; however, this aspect is generally ignored by much of the research work accomplished thus far. Fortunately, there have already been in-depth publications from Tousi et al. BIB001 , Yang et al. BIB002 and Tousi et al. BIB003 , who have studied fault-tolerant control from a mathematical perspective. There is no doubt that the seamless merging of fault-tolerant control and formation control would dramatically improve the utility of the research.
A survey of formation control and motion planning of multiple unmanned vehicles <s> Review of cooperative formation path planning <s> Abstract This paper describes co-operative path planning of a group of unmanned aerial vehicles (UAVs). The problem undertaken for this study is that of simultaneous arrival on target of a group of UAVs. The problem of path planning is formulated in order to produce feasible (flyable and safe) paths and the solution is divided into three phases. The first phase is that of producing flyable paths, the second is to add extra constraints to produce safe paths that do not collide with other UAV members or with known obstacles in the environment, and the third is to produce paths for simultaneous arrival. In the first phase, Dubins paths with clothoid arcs are used to produce paths for each UAV. These paths are produced using the principles of differential geometry. The second phase manipulate these paths to make them safer by meeting safety constraints: (i) to maintain minimum separation distance, (ii) to produce non-intersection of paths at equal lengths, and (iii) to fly-through intermediate way-points/poses. Finally, in the third phase, the simultaneous arrival is achieved by making all the paths equal in lengths. Some simulation results are given to illustrate the technique. <s> BIB001 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Review of cooperative formation path planning <s> Unmanned surface vehicles (USVs) have been deployed over the past decade. Current USV platforms are generally of small size with low payload capacity and short endurance times. To improve effectiveness there is a trend to deploy multiple USVs as a formation fleet. This paper presents a novel computer based algorithm that solves the problem of USV formation path planning. The algorithm is based upon the fast marching (FM) method and has been specifically designed for operation in dynamic environments using the novel constrained FM method. The constrained FM method is able to model the dynamic behaviour of moving ships with efficient computation time. The algorithm has been evaluated using a range of tests applied to a simulated area and has been proved to work effectively in a complex navigation environment. <s> BIB002
In this section, algorithms developed for formation path planning are grouped, reviewed and analysed, with some typical work listed. It should be noted that alongside formation path planning, another path planning problem has emerged in recent years: multi-vehicle cooperative path planning. It can be viewed as a weaker challenge than formation path planning, with fewer conditions imposed; nevertheless, solutions for cooperative path planning are also beneficial, as some of their core algorithms can assist formation path planning with minor modifications. In this section, methods for both formation path planning and multi-vehicle cooperative path planning are reviewed. The path planning problem is to find a feasible route connecting the start and end points in a collision free space while satisfying a set of constraint conditions. The problem itself can be expressed as an optimisation process subjected to several costs; for a 2D path planning problem, it can be mathematically written as BIB001 :

$\tau(t): P_s(x_s, y_s, \varphi_s) \rightarrow P_e(x_e, y_e, \varphi_e) \quad \text{subject to} \quad \Omega_{single} \qquad (6)$

where $P_s(x_s, y_s, \varphi_s)$ and $P_e(x_e, y_e, \varphi_e)$ denote the start and end point configurations respectively, which include the start and end point coordinates and orientations, and $\tau(t)$ represents the trajectory, which is subjected to the cost $\Omega_{single}$. When extending the problem to multi-vehicle formation path planning, the formulation can be written as:

$\tau_i(t): P_{s,i}(x_{s,i}, y_{s,i}, \varphi_{s,i}) \rightarrow P_{e,i}(x_{e,i}, y_{e,i}, \varphi_{e,i}), \quad i = 1, 2, \ldots, N \quad \text{subject to} \quad \Omega_{multiple} \qquad (7)$

where $N$ is the total number of vehicles in the formation and $\Omega_{multiple}$ is the cost for the multiple vehicles' paths. In single vehicle path planning, to obtain the most effective and efficient path, $\Omega_{single}$ normally contains the least distance, the highest safety, the minimum energy consumption and so on. In contrast, the costs for multiple vehicle path planning ($\Omega_{multiple}$) are more complicated; a comparison between single vehicle and multiple vehicle costs has been provided in Liu and Bucknall BIB002 . As shown in Figure 11 , the additional costs are explained as:

• Internal collision avoidance: as multiple vehicles are working simultaneously and cooperatively, each vehicle becomes a potential collision risk to the other vehicles in the same group. To ensure the safety of the group, internal collision avoidance needs to be addressed;

• Formation behaviour: if multiple vehicles are travelling in a formation, formation behaviours, such as shape keeping and shape changing, are required;

• Cooperation behaviour: the cooperation behaviour is the most important factor and can be expressed in two different forms, the time cooperative behaviour and the time-and-position cooperative behaviour. Illustrations of these two forms are displayed in Figure 12 . The first one only imposes time requirements on the final trajectories, i.e. by following the planned trajectories, each vehicle within the group should leave and arrive at each mission point simultaneously or in order. Since no formation behaviour is required en route except at the start and end points, the path planning problem involving such behaviour is known as multi-vehicle cooperative path planning. In contrast, the second form places requirements not only on time but also on the instantaneous position of each vehicle. The generated trajectories should, to the greatest extent, maintain the predefined distances between each other, thereby solving the formation path planning problem BIB002 ;
• Total distance: to achieve the most efficient outcome, the total distance of all trajectories should be optimised.
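To make these terms concrete, the following minimal Python sketch evaluates a set of time-aligned candidate trajectories against the costs listed above. The penalty forms, weights and the slot-based formation term are assumptions introduced here for illustration; they are not the cost formulation of Liu and Bucknall BIB002 .

```python
import numpy as np

def multi_vehicle_cost(paths, slots, d_safe=1.0, w=(10.0, 1.0, 1.0)):
    """Illustrative Omega_multiple for synchronised trajectories.

    paths: array (N vehicles, T steps, 2) of candidate trajectories,
           already time-aligned (step t of every vehicle is simultaneous,
           which captures the time cooperation requirement).
    slots: (N, 2) desired offsets of each vehicle from the formation centroid.
    w: assumed weights for the (collision, formation, distance) terms.
    """
    N, T, _ = paths.shape
    collision = formation = 0.0
    for t in range(T):
        pts = paths[:, t, :]
        centroid = pts.mean(axis=0)
        # Internal collision avoidance: penalise pairs closer than d_safe.
        for i in range(N):
            for j in range(i + 1, N):
                d = np.linalg.norm(pts[i] - pts[j])
                collision += max(0.0, d_safe - d) ** 2
        # Formation behaviour: deviation from the assigned slot about the centroid.
        formation += np.sum((pts - (centroid + slots)) ** 2)
    # Total distance of all trajectories.
    distance = np.sum(np.linalg.norm(np.diff(paths, axis=1), axis=2))
    return w[0] * collision + w[1] * formation + w[2] * distance
```

A planner would then minimise this aggregate over candidate trajectory sets, trading the terms off through the weights.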
A survey of formation control and motion planning of multiple unmanned vehicles <s> General methods <s> Efficient marine navigation through obstructions is still one of the many problems faced by the mariner. Many accidents can be traced to human error, recently increased traffic densities and the average cruise speed of ships impedes the collision avoidance decision making process further in the sense that decisions have to be made in reduced time. It seems logical that the decision making process be computerised and automated as a step forward to reduced the risk of collision. This article reviews the development of collision avoidance techniques and path planning for ships, particularly when engaged in close range encounters. In addition, previously published works have been categorised and their shortcomings highlighted in order to identify the 'state of the art' and issues in close range marine navigation. <s> BIB001 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> General methods <s> Effective and practical collision avoidance manoeuvres through traffic are still one of the major problems hindering the development and adoption of a fully autonomous vessel. There have been studies on the subject but the majority only consider the traffic from a single vessel perspective while the others utilised stochastic based algorithms which are not suitable for marine navigation which demands consistency. This paper describes the development of a deterministic path planning algorithm that computes a practical and COLREGS compliant navigation path for vessels which are on a collision course. The algorithm was evaluated with a set of test cases, simulating various traffic scenarios. Different aspects of the algorithm, such as the output consistency from different perspectives, practicality of the navigation path, computational performance as well as future work, are discussed. <s> BIB002
Fig. 13: The categorising of path planning algorithms based upon deterministic and heuristic approaches.

Since a heuristic algorithm stochastically searches within the space, the consistency of its results is not as good as that delivered by the deterministic method BIB002 . Typical heuristic algorithms include evolutionary algorithms (EA) such as the genetic algorithm, particle swarm optimisation and ant colony optimisation BIB001 . Another promising classification strategy for path planning algorithms, as proposed in Sharma et al. , is to evaluate an algorithm depending upon whether it has been developed in a deliberative or a reactive way. For example, when the environment is only partially known to the vehicle, an algorithm can only generate the trajectory within a certain area and therefore has to constantly and reactively update the trajectory as the vehicle navigates; such a strategy is regarded as a reactive approach. Conversely, when the environment is fully mapped, a deliberative approach is adopted; in this case, the generated trajectory is able to provide full guidance information to the vehicle and is always used as the global reference path. In Figure 14 , favourable path planning algorithms have been re-grouped into reactive and deliberative approaches. It can be seen that, compared with Figure 13 , evolutionary algorithms and roadmap/grid based algorithms now belong to the deliberative approach, whereas potential field algorithms are grouped in the reactive category together with the optimisation method (especially model predictive control). In the following sections, the literature regarding multiple vehicle path planning is reviewed based upon the adopted search methodology.
A survey of formation control and motion planning of multiple unmanned vehicles <s> The potential field method <s> A novel formulation of the artificial potential approach to the obstacle avoidance problem for a mobile robot or a manipulator in a known environment is presented. Previous formulations of artificial potentials for obstacle avoidance have exhibited local minima in a cluttered environment. To build an artificial potential field, the authors use harmonic functions that completely eliminate local minima even for a cluttered environment. The panel method is used to represent arbitrarily shaped obstacles and to derive the potential over the whole space. Based on this potential function, an elegant control strategy for the real-time control of a robot is proposed. Simulation results are presented for a bar-shaped mobile robot and a three-degree-of-freedom planar redundant manipulator. > <s> BIB001 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> The potential field method <s> Motion planning, or goal-oriented, context-sensitive, intelligent control is essential if an agent is to act in a useful manner. This paper suggests a new class of motion planners that can mark a constrained trajectory to a target zone in an environment that need not necessarily be a priori known. The novelty of the suggested planner lies in its ability to enforce region avoidance and direction satisfaction constraints jointly. To the best of the authors' knowledge, this is the first time that directional constraints have been addressed in the motion planning literature. To build such a planner, the potential field approach is used for inducing the control action. In addition, to cope with the presence of the above constraints (in particular, the directional constraints), a new type of potential field, called the nonlinear anisotropic harmonic potential field, is suggested. The planner has applications in traffic management and operations research among others. Development of the approach, proofs of correctness, and simulation results are supplied. <s> BIB002 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> The potential field method <s> An autonomous navigation algorithm for marine vehicles is proposed in this paper us- ing fuzzy logic under COLREG guidelines. The VFF (Virtual Force Field) method, which is widely used in the field of mobile robotics, is modified for application to the autonomous navi- gation of marine vehicles. This Modified Virtual Force Field (MVFF) method can be used in ei- ther track-keeping or collision avoidance modes. Moreover, the operator can select a track- keeping pattern mode in the proposed algorithm. The collision avoidance algorithm has the abil- ity to handle static and/or moving obstacles. The fuzzy expert rules are designed deliberately un- der COLREG guidelines. An extensive simulation study is used to verify the proposed method. <s> BIB003 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> The potential field method <s> Based on the double integrator mathematic model, a new kind of potential function is presented in this paper by referring to the concepts of the electric field; then a new formation control method is proposed, in which the potential functions are used between agent-agent and between agent-obstacle, while state feedback control is applied for the agent and its goal. 
This strategy makes the whole potential field simpler and helps avoid some local minima. The stability of this combination of potential functions and state feedback control is proven. Some simulations are presented to show the rationality of this control method. <s> BIB004 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> The potential field method <s> This paper presents a method to represent complex shaped obstacles in harmonic potential fields used for vehicle path planning. The proposed method involves calculating the potential field for a series of circular obstacles inserted into the unobstructed potential field. The potential field for the total obstacle is a weighted average of the circular obstacle potential fields. This method explicitly calculates a stream function for the potential field. The need for the stream function is explained for situations involving controlling a dynamic system such as a high speed ground vehicle. The traditional potential field controller is also augmented to take the stream function into account. Simulation results are presented to show the effectiveness of the potential field generation technique and the augmented vehicle controller. <s> BIB005 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> The potential field method <s> Abstract Hybrid-driven underwater glider (HUG) is a new type of autonomous underwater vehicles (AUVs). An HUG fleet can extend the ability of individual vehicles and increase mission efficiency. This paper presents motion planning and obstacle avoidance of multi-HUG formation using the artificial potential field (APF) method and Kane's method. The artificial potential fields used for formation motion control are constructed for particular mission requirement, ocean environment, and formation geometry. Multi-HUG formation with the artificial potential fields is regarded as a multibody system, in which HUGs are constrained by the virtual forces derived from the artificial potential fields. Kane's method for dynamic analysis of multibody systems is used to study the dynamic characteristics of motion of the multi-HUG formation. Combination of the APF method and Kane's method offers the advantage that coordinated control and motion planning of the formation can be implemented simultaneously. A case of a three-HUG formation is provided to demonstrate the methodology for clarity. <s> BIB006 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> The potential field method <s> This paper presents the application of the Voronoi Fast Marching (VFM) method to path planning of mobile formation robots. The VFM method uses the propagation of a wave (Fast Marching) operating on the world model to determine a motion plan over a viscosity map (similar to the refraction index in optics) extracted from the updated map model. The computational efficiency of the method allows the planner to operate at high rate sensor frequencies. This method allows us to maintain good response time and smooth and safe planned trajectories. The navigation function can be classified as a type of potential field, but it has no local minima, it is complete (it finds the solution path if it exists) and it has a complexity of order n(O(n)), where n is the number of cells in the environment map. 
The results presented in this paper show how the proposed method behaves with mobile robot formations and generates trajectories of good quality without problems of local minima when the formation encounters non-convex obstacles. <s> BIB007 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> The potential field method <s> This paper presents a novel algorithm to solve the robot formation path planning problem working under uncertainty conditions such as errors the in robot's positions, errors when sensing obstacles or walls, etc. The proposed approach provides a solution based on a leader-followers architecture (real or virtual leaders) with a prescribed formation geometry that adapts dynamically to the environment. The algorithm described herein is able to provide safe, collision-free paths, avoiding obstacles and deforming the geometry of the formation when required by environmental conditions (e.g. narrow passages). To obtain a better approach to the problem of robot formation path planning the algorithm proposed includes uncertainties in obstacles' and robots' positions. The algorithm applies the Fast Marching Square (FM^2) method to the path planning of mobile robot formations, which has been proved to work quickly and efficiently. The FM^2 method is a path planning method with no local minima that provides smooth and safe trajectories to the robots creating a time function based on the properties of the propagation of the electromagnetic waves and depending on the environment conditions. This method allows to easily include the uncertainty reducing the computational cost significantly. The results presented here show that the proposed algorithm allows the formation to react to both static and dynamic obstacles with an easily changeable behavior. <s> BIB008
The artificial potential field (APF) method was first proposed by Khatib 70 to control a robot manipulator. The method converts the configuration space into a potential field consisting of an attractive field ($U_{att}$) around the target point and repulsive fields ($U_{rep}$) around the obstacles. The attractive field is proportional to the distance to the target point and is influential over the whole space, whereas the repulsive fields are inversely proportional to the distance to the obstacles and are only effective in certain areas around them. The path is calculated by following the total force at each location, which is the negative gradient of the sum of the fields: $F = -\nabla(U_{att} + U_{rep})$.

Fig. 15: The formation path planning using the APF. An internal attractive potential field first needs to be constructed to maintain the formation shape (shown as the red line). Internal repulsive fields are also needed to prevent two vehicles from moving too close and colliding with each other (shown as the blue line) BIB003 .

In terms of the implementation of the APF in formation path planning, in addition to the attractive and repulsive fields, new fields are needed to represent cooperative formation behaviours. An internal attractive potential field first needs to be constructed to maintain the formation shape (shown as the red line in Figure 15 ) such that, when a vehicle strays from its formation position, the force is capable of dragging it back to prevent destruction of the formation shape. In addition, internal repulsive fields are also needed to prevent two vehicles from moving too close and colliding with each other (blue line in Figure 15 ). Wang et al. BIB004 constructed such potential fields by referring to the concepts of electric fields. Each vehicle was treated as a point source in the electric field with varying electrical polarity: if the distance between vehicles was larger than the expected value, opposite charges were used to attract them towards each other; if it was smaller, like polarities were used to prevent them from colliding. Paul et al. also built such fields to solve the UAV formation path planning problem. To increase control accuracy, the attractive potential field was made a function of the error between the desired and actual distances, such that any deviation from the desired position could be quickly corrected. Yang et al. BIB006 published work on motion planning for an AUV formation in an environment with obstacles based upon the APF. The algorithm concentrated on the overall mission requirements instead of the development of an individual vehicle's control law, and treated the AUV formation as a multi-body system with each vehicle modelled as a point mass with full actuation. Potential fields for formation path planning were constructed for the particular mission requirements, ocean environment and formation geometry. It should be noted that the primary disadvantage of the APF is the local minima problem. It is caused by the sum of the forces at a certain point equalling zero, which results in the vehicle becoming 'trapped' at that point. Many researchers have solved this problem by constructing new kinds of fields such as the harmonic potential field BIB001 BIB002 BIB005 , which is built from harmonic functions containing no local minima. Recently, another effective way of dealing with the local minima problem was reported in Garrido et al. BIB007 and Gomez et al.
BIB008 , which employed the Fast Marching Method (FMM) to construct the potential field. Differing from the conventional way of combining all fields to generate the total potential field, the FMM produces the potential field by simulating the propagation of an electromagnetic wave. A propagation index ranging from 0 to 1 is first calculated at each point to indicate the local speed of the wave; a value of 0 means the wave cannot pass and is hence assigned to obstacle areas. The wave is then emitted from the start point, propagates according to the index, and stops when the target point is reached. The generated potential field represents the local arrival time of the wave and has its only minimum at the start point, so it is free of local minima.
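As an illustration of how such fields act on a formation, the short numpy sketch below evaluates the combined force of the field types in Figure 15 for a group of point-mass vehicles. The quadratic attractive well, the bounded-range repulsive term and the charge-like inter-vehicle spring (echoing the electric-field analogy of Wang et al. BIB004 ) are illustrative assumptions rather than the published formulations.

```python
import numpy as np

def apf_forces(pos, goal, obstacles, d_des,
               k_att=1.0, k_rep=2.0, k_form=1.5, rho0=3.0):
    """One APF evaluation for a formation: F = -grad(U_att + U_rep + U_internal).

    pos: (N, 2) vehicle positions; goal: (2,) target; obstacles: (M, 2).
    d_des: (N, N) desired inter-vehicle distances. Field shapes are assumed.
    """
    N = len(pos)
    F = k_att * (goal - pos)                  # attractive: -grad of a quadratic well
    for o in obstacles:                       # repulsive: active only within radius rho0
        diff = pos - o
        rho = np.linalg.norm(diff, axis=1, keepdims=True)
        active = (rho < rho0).astype(float)
        F += active * k_rep * (1.0 / rho - 1.0 / rho0) * diff / rho**3
    for i in range(N):                        # internal 'charge-like' spring terms
        for j in range(N):
            if i == j:
                continue
            diff = pos[i] - pos[j]
            d = np.linalg.norm(diff)
            # Attracts when too far apart, repels when too close.
            F[i] += k_form * (d_des[i, j] - d) * diff / d
    return F

pos = np.array([[0., 0.], [0., 2.5]])
d_des = np.array([[0., 2.], [2., 0.]])
F = apf_forces(pos, np.array([10., 0.]), np.array([[5., 1.]]), d_des)
```

Following F at each step yields the paths; a point where the terms cancel to zero is exactly the local minimum discussed above, which is what the harmonic-field and FMM constructions avoid.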
A survey of formation control and motion planning of multiple unmanned vehicles <s> 79 <s> Motion planning for multiple mobile robots must ensure the optimality of the path of each and every robot, as well as overall path optimality, which requires cooperation amongst robots. The paper proposes a solution to the problem, considering different source and goal of each robot. Each robot uses a grammar based genetic programming for figuring the optimal path in a maze-like map, while a master evolutionary algorithm caters to the needs of overall path optimality. Co-operation amongst the individual robots' evolutionary algorithms ensures generation of overall optimal paths. The other feature of the algorithm includes local optimization using memory based lookup where optimal paths between various crosses in map are stored and regularly updated. Feature called wait for robot is used in place of conventionally used priority based techniques. Experiments are carried out with a number of maps, scenarios, and different robotic speeds. Experimental results confirm the usefulness of the algorithm in a variety of scenarios. <s> BIB001 </s> This paper presents a Co-evolutionary Improved Genetic Algorithm (CIGA) for global path planning of multiple mobile robots, which employs a co-evolution mechanism together with an improved genetic algorithm (GA). This improved GA presents an effective and accurate fitness function, improves genetic operators of conventional genetic algorithms and proposes a new genetic modification operator. Moreover, the improved GA, compared with conventional GAs, is better at avoiding the problem of local optimum and has an accelerated convergence rate. The use of a co-evolution mechanism takes into full account the cooperation between populations, which avoids collision between mobile robots and is conductive for each mobile robot to obtain an optimal or near-optimal collision-free path. Simulations are carried out to demonstrate the efficiency of the improved GA and the effectiveness of CIGA. <s> BIB002
The work in 79 proposed a coevolving and cooperating path planner for multiple UAVs based upon the GA. In order to make the generated paths practical, dynamic characteristic constraints such as the minimum path leg length, the minimum flying height and the maximum climbing angle were incorporated into the algorithm. However, the computation speed was not fast enough to make the algorithm applicable to real-time planning. Hence, Kala BIB001 and Qu et al. BIB002 improved it by introducing new evolution operators to increase the convergence speed of the algorithm.
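A minimal sketch of how such a GA fitness function might encode these dynamic constraints as penalty terms is given below; the waypoint encoding, limits and weights are assumptions for illustration and do not reproduce the cited formulations.

```python
import numpy as np

def fitness(waypoints, min_leg=5.0, min_alt=50.0, max_climb=np.deg2rad(20), w=100.0):
    """GA fitness for one candidate UAV path (lower is better).

    waypoints: (K, 3) array of x, y, z points. Path length is the base cost;
    violations of leg length, flying height and climb angle are penalised.
    """
    legs = np.diff(waypoints, axis=0)
    leg_len = np.linalg.norm(legs, axis=1)
    horiz = np.linalg.norm(legs[:, :2], axis=1)
    climb = np.arctan2(np.abs(legs[:, 2]), horiz)       # climb angle per leg
    penalty = (np.maximum(0.0, min_leg - leg_len).sum()         # short legs
               + np.maximum(0.0, min_alt - waypoints[:, 2]).sum()  # low altitude
               + np.maximum(0.0, climb - max_climb).sum())      # steep climbs
    return leg_len.sum() + w * penalty

path = np.array([[0, 0, 60], [10, 0, 62], [20, 5, 65], [30, 5, 64]], float)
print(fitness(path))
```

Selection, crossover and mutation would then operate on the waypoint arrays, with the penalty weight w steering the population towards feasible paths.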
A survey of formation control and motion planning of multiple unmanned vehicles <s> Optimal control method <s> This paper extends a recently developed approach to optimal path planning of autonomous vehicles, based on mixed integer linear programming (MILP), to account for safety. We consider the case of a single vehicle navigating through a cluttered environment which is only known within a certain detection radius around the vehicle. A receding horizon strategy is presented with hard terminal constraints that guarantee feasibility of the MILP problem at all future time steps. The trajectory computed at each iteration is constrained to end in a so called basis state, in which the vehicle can safely remain for an indefinite period of time. The principle is applied to the case of a UAV with limited turn rate and minimum speed requirements, for which safety conditions are derived in the form of loiter circles. The latter need not be known ahead of time and are implicitly computed online. An example scenario is presented that illustrates the necessity of these safety constraints when the knowledge of the environment is limited and/or hard real-time restrictions are given. <s> BIB001 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Optimal control method <s> The goal of adaptive sampling in the ocean is to predict the types and locations of additional ocean measurements that would be most useful to collect. Quantitatively, what is most useful is defined by an objective function and the goal is then to optimize this objective under the constraints of the available observing network. Examples of objectives are better oceanic understanding, to improve forecast quality, or to sample regions of high interest. This work provides a new path-planning scheme for the adaptive sampling problem. We define the path-planning problem in terms of an optimization framework and propose a method based on mixed integer linear programming (MILP). The mathematical goal is to find the vehicle path that maximizes the line integral of the uncertainty of field estimates along this path. Sampling this path can improve the accuracy of the field estimates the most. While achieving this objective, several constraints must be satisfied and are implemented. They relate to vehicle motion, intervehicle coordination, communication, collision avoidance, etc. The MILP formulation is quite powerful to handle different problem constraints and flexible enough to allow easy extensions of the problem. The formulation covers single- and multiple-vehicle cases as well as single- and multiple-day formulations. The need for a multiple-day formulation arises when the ocean sampling mission is optimized for several days ahead. We first introduce the details of the formulation, then elaborate on the objective function and constraints, and finally, present a varied set of examples to illustrate the applicability of the proposed method. <s> BIB002 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Optimal control method <s> Recently, there has been growing interest in developing unmanned aircraft systems (UAS) with advanced onboard autonomous capabilities. This paper describes the current state of the art in autonomous rotorcraft UAS (RUAS) and provides a detailed literature review of the last two decades of active research on RUAS. Three functional technology areas are identified as the core components of an autonomous RUAS. 
Guidance, navigation, and control (GNC) have received much attention from the research community, and have dominated the UAS literature from the nineties until now. This paper first presents the main research groups involved in the development of GNC systems for RUAS. Then it describes the development of a framework that provides standard definitions and metrics characterizing and measuring the autonomy level of a RUAS using GNC aspects. This framework is intended to facilitate the understanding and the organization of this survey paper, but it can also serve as a common reference for the UAS community. The main objective of this paper is to present a comprehensive survey of RUAS research that captures all seminal works and milestones in each GNC area, with a particular focus on practical methods and technologies that have been demonstrated in flight tests. These algorithms and systems have been classified into different categories and classes based on the autonomy level they provide and the algorithmic approach used. Finally, the paper discusses the RUAS literature in general and highlights challenges that need to be addressed in developing autonomous systems for unmanned rotorcraft. <s> BIB003 </s> This paper presents an efficient and feasible algorithm for the path planning problem of the multiple unmanned aerial vehicles (multi-UAVs) formation in a known and realistic environment. The artificial potential field method updated by the additional control force is used for establishing two models for the single UAV, which are the particle dynamic model and the path planning optimization model. The additional control force can be calculated by using the optimal control method. Furthermore, the multi-UAV path planning model is established by introducing “virtual velocity rigid body” and “virtual target point”. Then, the motion states of the lead plane and wingmen are obtained from the path planning model. Finally, the path following process based on the quadrotor helicopter PID controllers is introduced to verify the rationality of the path planning results. The simulation results show that the artificial potential method with the additional control force improved by the optimal control method has a good path planning ability for the single UAV and the all UAVs formation. At the same time, the path planning results are available and the UAVs can basically track the UAV formation. <s> BIB004
Using the optimal control method is another main approach for multiple vehicle cooperative path planning. This approach treats path planning as a numerical optimisation problem subject to a set of constraints BIB003 . It breaks the multiple vehicle path planning problem down into several single vehicle path planning processes that are coupled through cooperation constraints. A general form of using the optimal control method for multiple vehicle path planning was reported in Schouwenaars et al. BIB001 . The group consisted of $N$ vehicles, and for the $p$th vehicle in the group, a fuel-optimal cost function was first defined as:

$J_p = \sum_{i=0}^{T-1} \left( q_p^{\top} |s_{pi} - s_{pf}| + r_p^{\top} |u_{pi}| \right) + p_p^{\top} |s_{pT} - s_{pf}|$

where $s_{pi}$, $u_{pi}$ and $s_{pf}$ denote the state, input and final state of the vehicle, and $q_p$, $r_p$ and $p_p$ are the weighting factors. Constraints for the single vehicle optimisation included boundary conditions on the vehicle's state and control inputs, and position constraints to avoid static and moving obstacles. Then, a 'cooperation constraint' was defined, in this case to keep each pair of vehicles a certain distance apart to maintain safety:

$|x_{pi} - x_{qi}| \geq d_x \quad \text{or} \quad |y_{pi} - y_{qi}| \geq d_y$

where $(x_{pi}, y_{pi})$ and $(x_{qi}, y_{qi})$ are the coordinates of the $p$th and $q$th vehicles at time step $i$, and $d_x$ and $d_y$ are the two safety distances. Subject to all the constraint conditions, mixed integer linear programming (MILP) was used to find the optimal control input $u$ for each vehicle, from which a feasible path could finally be generated by substituting the inputs into the system dynamic functions. Yilmaz et al. BIB002 expanded the method to larger scale multiple vehicle cooperation, such as AUV-USV cooperation, AUV-shore station cooperation and AUV-AUV cooperation. To achieve this cooperation, constraints were developed to ensure sufficient distances were maintained to keep communication robust. However, even though MILP is powerful enough to handle the various constraints of the optimisation problem, high computational complexity is its main disadvantage and prevents its use for on-line planning. To improve upon MILP, Bemporad and Rocchi 85 applied receding horizon control (RHC) to solve optimisation problems for UAV formations. Unlike conventional methods, which seek the optimal result for the whole time period, the RHC uses an on-the-fly strategy that minimises the cost function only over a relatively short horizon at each time step and computes the corresponding control input, which can largely decrease the computation time. Based on such an online scheme, Chen et al. BIB004 designed a hybrid formation path planner by combining the RHC and APF methods for UAVs. An additional control force generated by the APF was added to the system control input to improve the collision avoidance capability of the formation.
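The disjunctive 'or' in the cooperation constraint is what introduces integer variables into the programme. Below is a minimal PuLP sketch of the standard big-M encoding for a single vehicle pair at a single time step; the fixed position of vehicle p, the variable bounds and the toy objective are assumptions for illustration rather than the formulation of Schouwenaars et al. BIB001 .

```python
import pulp

# Big-M encoding of the MILP cooperation constraint
# |x_p - x_q| >= d_x  or  |y_p - y_q| >= d_y  at one time step.
# A full planner repeats this for every vehicle pair and time step
# and couples the positions to the vehicle dynamics.
M, d_x, d_y = 100.0, 2.0, 2.0
prob = pulp.LpProblem("separation", pulp.LpMinimize)
x_q = pulp.LpVariable("x_q", 0, 50)
y_q = pulp.LpVariable("y_q", 0, 50)
b = [pulp.LpVariable(f"b{k}", cat="Binary") for k in range(4)]

x_p, y_p = 0.0, 0.0                  # vehicle p held fixed at the origin
prob += x_p - x_q >= d_x - M * b[0]  # each row is one side of an |.| split;
prob += x_q - x_p >= d_x - M * b[1]  # binary b[k] = 1 switches that row off
prob += y_p - y_q >= d_y - M * b[2]
prob += y_q - y_p >= d_y - M * b[3]
prob += pulp.lpSum(b) <= 3           # at least one separation row must hold

prob += x_q + y_q                    # toy 'fuel' objective: stay near p
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(x_q), pulp.value(y_q))  # ends exactly d_x (or d_y) away from p
```

The binaries b[k] are what make the problem mixed-integer, and their number grows with vehicle pairs and horizon length, which is the computational burden that motivates the receding horizon variant discussed above.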
A survey of formation control and motion planning of multiple unmanned vehicles <s> Discussion on formation path planning <s> This paper addresses path planning to consider a cost function defined over the configuration space. The proposed planner computes low-cost paths that follow valleys and saddle points of the configuration-space costmap. It combines the exploratory strength of the Rapidly exploring Random Tree (RRT) algorithm with transition tests used in stochastic optimization methods to accept or to reject new potential states. The planner is analyzed and shown to compute low-cost solutions with respect to a path-quality criterion based on the notion of mechanical work. A large set of experimental results is provided to demonstrate the effectiveness of the method. Current limitations and possible extensions are also discussed. <s> BIB001 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Discussion on formation path planning <s> In this paper we present LQG-MP (linear-quadratic Gaussian motion planning), a new approach to robot motion planning that takes into account the sensors and the controller that will be used during the execution of the robot’s path. LQG-MP is based on the linear-quadratic controller with Gaussian models of uncertainty, and explicitly characterizes in advance (i.e. before execution) the a priori probability distributions of the state of the robot along its path. These distributions can be used to assess the quality of the path, for instance by computing the probability of avoiding collisions. Many methods can be used to generate the required ensemble of candidate paths from which the best path is selected; in this paper we report results using rapidly exploring random trees (RRT). We study the performance of LQG-MP with simulation experiments in three scenarios: (A) a kinodynamic car-like robot, (B) multi-robot planning with differential-drive robots, and (C) a 6-DOF serial manipulator. We also present a method that applies Kalman smoothing to make paths Ck-continuous and apply LQG-MP to precomputed roadmaps using a variant of Dijkstra’s algorithm to efficiently find high-quality paths. <s> BIB002 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Discussion on formation path planning <s> This paper presents the application of the Voronoi Fast Marching (VFM) method to path planning of mobile formation robots. The VFM method uses the propagation of a wave (Fast Marching) operating on the world model to determine a motion plan over a viscosity map (similar to the refraction index in optics) extracted from the updated map model. The computational efficiency of the method allows the planner to operate at high rate sensor frequencies. This method allows us to maintain good response time and smooth and safe planned trajectories. The navigation function can be classified as a type of potential field, but it has no local minima, it is complete (it finds the solution path if it exists) and it has a complexity of order n(O(n)), where n is the number of cells in the environment map. The results presented in this paper show how the proposed method behaves with mobile robot formations and generates trajectories of good quality without problems of local minima when the formation encounters non-convex obstacles. 
<s> BIB003 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Discussion on formation path planning <s> In this paper, a novel method for robot navigation in dynamic environments, referred to as visibility binary tree algorithm, is introduced. To plan the path of the robot, the algorithm relies on the construction of the set of all complete paths between robot and target taking into account inner and outer visible tangents between robot and circular obstacles. The paths are then used to create a visibility binary tree on top of which an algorithm for shortest path is run. The proposed algorithm is implemented on two simulation scenarios, one of them involving global knowledge of the environment, and the other based on local knowledge of the environment. The performance are compared with three different algorithms for path planning. <s> BIB004 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Discussion on formation path planning <s> This paper presents a novel algorithm to solve the robot formation path planning problem working under uncertainty conditions such as errors the in robot's positions, errors when sensing obstacles or walls, etc. The proposed approach provides a solution based on a leader-followers architecture (real or virtual leaders) with a prescribed formation geometry that adapts dynamically to the environment. The algorithm described herein is able to provide safe, collision-free paths, avoiding obstacles and deforming the geometry of the formation when required by environmental conditions (e.g. narrow passages). To obtain a better approach to the problem of robot formation path planning the algorithm proposed includes uncertainties in obstacles' and robots' positions. The algorithm applies the Fast Marching Square (FM^2) method to the path planning of mobile robot formations, which has been proved to work quickly and efficiently. The FM^2 method is a path planning method with no local minima that provides smooth and safe trajectories to the robots creating a time function based on the properties of the propagation of the electromagnetic waves and depending on the environment conditions. This method allows to easily include the uncertainty reducing the computational cost significantly. The results presented here show that the proposed algorithm allows the formation to react to both static and dynamic obstacles with an easily changeable behavior. <s> BIB005 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Discussion on formation path planning <s> In this paper, a new path planning method for robots used in outdoor environments is presented. The proposed method applies Fast Marching to a 3D surface represented by a triangular mesh to calculate a smooth trajectory from one point to another. The method uses a triangular mesh instead of a square one since this kind of grid adapts better to 3D surfaces. The novelty of this approach is that, before running the algorithm, the method calculates a weight matrix W based on the information extracted from the 3D surface characteristics. In the presented experiments these features are the height, the spherical variance, and the gradient of the surface. 
This matrix can be viewed as a difficulty map situated over the 3D surface and is used to limit the propagation speed of the Fast Marching wave in order to find the best path depending on the task requirements, e.g., the least energy consumption path, the fastest path, or the flattest terrain. The algorithm also gives the speed for the robot, which depends on the wave front propagation speed. The results presented in this paper show how, by varying this matrix W, the paths obtained are different. Moreover, as shown in the experimental part, this algorithm is also useful for calculating paths for climbing robots in much more complex environments. Finally, at the end of the paper, it is shown that this algorithm can also be used for robot avoidance when two robots approach each other and know each other's position. <s> BIB006 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Discussion on formation path planning <s> An autonomous mobile robot in a human's living space should be able to realize not only collision-free motion, but also human-centered motion, i.e., motion giving priority to a moving human according to the situation. In this study, we propose a real-time obstacle avoidance method for an autonomous mobile robot that considers the robot's dynamic constraints, personal space, and the human's directional area using grid-based X-Y-T space path planning. The proposed method generates collision-free motion in which the robot can give way to humans. The relative position, velocity and avoidance motion with respect to the robot vary from person to person. To show the effectiveness of the proposed method for human-like motion, we verify the robot's motion under several assumed scenarios by changing the initial state of both the robot and the human. Moreover, we verify the robot's motion with respect to a simulated human based on the human-like behavior approach. Through these simulations, we confirm that the proposed method is able to generate safe human-centered motion under several assumed scenarios. Additionally, the effectiveness of the proposed method in practice is confirmed by experiments in which the human's position and velocity are estimated using a laser range finder. This paper presents safe human-centered navigation for a mobile robot. Our approach is X-Y-T space path planning considering the dynamic constraints. We provide collision-free motion in which the robot can give way to humans. The effectiveness of the proposed method is confirmed by simulations and experiments. <s> BIB007 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Discussion on formation path planning <s> An unmanned aerial vehicle (UAV) dynamic path planning method is proposed to avoid not only static threats but also mobile threats. The path of a UAV is planned or modified using the potential trajectory of the mobile threat, which is predicted from its current position, velocity, and direction angle, because the positions of the UAV and mobile threat are dynamically changing. In each UAV planning path, the UAV incurs some costs, including control costs to change the direction angle, route costs to bypass the threats, and threat costs reflecting the probability of being destroyed by threats. The model predictive control (MPC) algorithm is used to determine the optimal or sub-optimal path with minimum overall costs. The MPC algorithm is a rolling-optimization feedback algorithm.
It is used to plan the UAV path in several steps online, instead of all at once offline, to avoid sudden and mobile threats dynamically. Lastly, solution implementation is described along with several simulation results that demonstrate the effectiveness of the proposed method. <s> BIB008 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Discussion on formation path planning <s> Abstract The sampling-based motion planning algorithm known as Rapidly-exploring Random Trees (RRT) has gained the attention of many researchers due to its computational efficiency and effectiveness. Recently, a variant of RRT called RRT* has been proposed that ensures asymptotic optimality. Subsequently its bidirectional version has also been introduced in the literature, known as Bidirectional-RRT* (B-RRT*). We introduce a new variant called Intelligent Bidirectional-RRT* (IB-RRT*) which is an improved variant of the optimal RRT* and bidirectional version of RRT* (B-RRT*) algorithms and is specially designed for complex cluttered environments. IB-RRT* utilizes the bidirectional trees approach and introduces an intelligent sample insertion heuristic for fast convergence to the optimal path solution using uniform sampling heuristics. The proposed algorithm is evaluated theoretically and experimental results are presented that compare IB-RRT* with RRT* and B-RRT*. Moreover, experimental results demonstrate the superior efficiency of IB-RRT* in comparison with RRT* and B-RRT* in complex cluttered environments. <s> BIB009
Formation path planning, working as a command generator for the formation control system (referring to Figure 3), takes the description of the environment as the input and produces sets of waypoints as trajectories. The artificial potential field method, the evolutionary algorithm and the optimal control method are three mainstream approaches used for multi-vehicle path planning, and a comparison of these approaches is listed in Table II. Among them, the potential field and the evolutionary based methods are the most widely adopted approaches. These are significantly different from single vehicle path planning, where the grid based BIB007 or the road map based methods BIB004 BIB009 BIB001 BIB002 are preferred. A possible reason is that a multiple-vehicle system needs a path planning algorithm with fast computation speed, as a number of vehicles are involved; however, both the road map and the grid based methods need significant memory capacity to store the environment information, which has the potential to decrease the speed of the algorithm. More importantly, the trajectories generated by the potential field or the evolutionary algorithm are more practical than those of other methods. The potential field method can produce a smooth and continuous path, and the evolutionary algorithm is able to optimise the trajectory's costs for different mission requirements. However, the fast marching method (FMM) based potential field method may have more advantages than the evolutionary algorithm. First, in terms of algorithm completeness and consistency, the FMM performs well, whereas the evolutionary method lacks consistency and the conventional potential field method is not complete. Second, the FMM is able to achieve various cooperative behaviours. Generated trajectories can either be time cooperative, or time-and-position cooperative, and a 'deformable' formation shape can be easily established, which is difficult to achieve with the other methods. In Gomez et al. BIB005 and Garrido et al. BIB003, a generic formation path planning algorithm based upon the FMM has been proposed and employed for indoor robot formations. From the simulation results, the formation is able to adjust its shape to avoid complex obstacles such as a narrow pathway. Third, differing from the conventional potential field method, which only constructs attractive and repulsive fields, the FMM can also make use of other fields representing different costs. In Garrido et al. BIB006, a weighting matrix which addresses different path constraints was used and blended with the potential field to generate the path. The final trajectory was optimised in terms of the least energy consumption, the shortest distance and the flattest terrain (a minimal code sketch of this wavefront idea is given after the list below). However, some limitations of current multi-vehicle path planning also need to be taken into consideration:
- The collision avoidance strategies were not effective enough to deal with complex environments. Most publications used either rigid formations or dynamic formations to avoid the obstacles. However, this may not be the best solution, and in some cases the formation could be partially maintained to seek a more optimised result. For example, as shown in Figure 17, the split-merge strategy can be adopted when the formation encounters a small-sized obstacle. (Fig. 17: The split-merge formation collision avoidance strategy. When the formation encounters a small-sized obstacle that poses a collision risk only to the vehicle in red, the red vehicle takes avoidance manoeuvres while the others remain unaffected.)
- Formation path planning in an environment with true dynamic obstacles has not been studied. For simplicity, most of the dynamic obstacles moved only at slow or constant speeds. In reality, such obstacles normally have unpredictable movement patterns, which requires a path planning algorithm to be integrated with advanced sensors and a prediction algorithm. In Yao et al. BIB008, the Kalman filter was used to predict the path of a moving obstacle in the immediate future, so that the path planning algorithm could accordingly adjust the path to avoid the obstacle more effectively.
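To make the wavefront idea behind these FMM-based planners concrete, the following Python sketch is an illustration rather than the implementation of any cited work: it approximates the FMM arrival-time field with a Dijkstra-style wavefront on a 4-connected grid, blends a hypothetical difficulty map W into the propagation speed in the spirit of Garrido et al. BIB006, and extracts a path by descending the arrival-time field. All map dimensions, weights and coordinates are illustrative assumptions.

import heapq
import numpy as np

def wavefront_times(speed, source):
    # Dijkstra-style wavefront: cell-to-cell cost is 1 / local speed,
    # so slow (high-difficulty) cells delay the arriving wave.
    rows, cols = speed.shape
    times = np.full((rows, cols), np.inf)
    times[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > times[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and speed[nr, nc] > 0:
                nt = t + 1.0 / speed[nr, nc]
                if nt < times[nr, nc]:
                    times[nr, nc] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return times

def extract_path(times, start):
    # Descend the arrival-time field from the start cell back to the
    # source (the discrete analogue of following -grad T).
    path = [start]
    while times[path[-1]] > 0.0:
        r, c = path[-1]
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < times.shape[0] and 0 <= c + dc < times.shape[1]]
        nxt = min(nbrs, key=lambda p: times[p])
        if times[nxt] >= times[path[-1]]:
            break  # no downhill neighbour: start is unreachable
        path.append(nxt)
    return path

# Toy 20 x 20 map: a wall with gaps (speed 0 = obstacle) and a
# hypothetical difficulty region W that slows, and thus repels, the wave.
speed = np.ones((20, 20))
speed[5:15, 9] = 0.0
W = np.ones_like(speed)
W[0:5, :] = 4.0                  # illustrative difficulty weights
speed = speed / W                # blend the difficulty map into the speed
times = wavefront_times(speed, source=(18, 18))   # the goal acts as the source
path = extract_path(times, start=(1, 1))
print(len(path), "waypoints, ending at", path[-1])

Because the arrival-time field has its only minimum at the source, the descent cannot be trapped in local minima, which is exactly the completeness property the discussion above credits to the FMM.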
A survey of formation control and motion planning of multiple unmanned vehicles <s> Conclusion and future research areas <s> The paper presents a state-space perspective on the kinodynamic planning problem, and introduces a randomized path planning technique that computes collision-free kinodynamic trajectories for high degree-of-freedom problems. By using a state space formulation, the kinodynamic planning problem is treated as a 2n-dimensional nonholonomic planning problem, derived from an n-dimensional configuration space. The state space serves the same role as the configuration space for basic path planning. The basis for the approach is the construction of a tree that attempts to rapidly and uniformly explore the state space, offering benefits that are similar to those obtained by successful randomized planning methods, but applies to a much broader class of problems. Some preliminary results are discussed for an implementation that determines the kinodynamic trajectories for hovercrafts and satellites in cluttered environments, resulting in state spaces of up to twelve dimensions. <s> BIB001 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Conclusion and future research areas <s> Swarm robotics is a novel approach to the coordination of large numbers of relatively simple robots which takes its inspiration from social insects. This paper proposes a definition for this newly emerging approach by 1) describing the desirable properties of swarm robotic systems, as observed in the system-level functioning of social insects, 2) proposing a definition for the term swarm robotics, and putting forward a set of criteria that can be used to distinguish swarm robotics research from other multi-robot studies, 3) providing a review of some studies which can act as sources of inspiration, and a list of promising domains for the utilization of swarm robotic systems. <s> BIB002 </s> A survey of formation control and motion planning of multiple unmanned vehicles <s> Conclusion and future research areas <s> SUMMARY We present a review of recent activities in swarm robotic research, and analyse existing literature in the field to determine how to get closer to a practical swarm robotic system for real world applications. We begin with a discussion of the importance of swarm robotics by illustrating the wide applicability of robot swarms in various tasks. Then a brief overview of various robotic devices that can be incorporated into swarm robotic systems is presented. We identify and describe the challenges that should be resolved when designing swarm robotic systems for real world applications. Finally, we provide a summary of a series of issues that should be addressed to overcome these challenges, and propose directions for future swarm robotic research based on our extensive analysis of the reviewed literature. <s> BIB003
A review of the multiple unmanned vehicles formation system has been presented in this paper. The principal structure of the multi-vehicle system as well as the critical development technologies have been reviewed. In terms of the key research involved in the multi-vehicle formation system, both formation control and cooperative path planning are important. Even though they are two different research topics, a number of overlaps make them closely interconnected. For example, the problem of collision avoidance, which primarily resides in the path planning problem, has many solutions in the formation control literature, where the necessary control commands to manoeuvre the vehicle are designed and generated. In the meantime, the recent path planning trend is towards kinodynamic planning, for which velocity, acceleration and force/torque limitations must be satisfied. The paths generated by kinodynamic planning algorithms are physically compliant with the vehicle's dynamics, which facilitates tracking by the controllers, and are also able to avoid obstacles in the environment [97, 98]. Compared with single platform deployment, a relatively small number of multi-vehicle system deployments have been seen in recent decades. However, there is considerable potential for future development of such systems, as a multi-vehicle system is more effective and able to undertake complex missions that single vehicles are incapable of. It is without doubt that by fully implementing a formation control and navigation system into current unmanned system platforms, the autonomy and efficiency of unmanned vehicles can be successfully enhanced. To further push the boundary of the research of multi-vehicle systems, extensive work needs to be carried out from both control and path planning perspectives. First, as presented in this review, most of the work only focuses on single types of platforms, and the problem of formation control and path planning for multi-vehicle cross-platform systems has not been rigorously addressed. In the future, the dominant approach to deploying multi-vehicle systems may be to use various types of vehicles working cooperatively together to provide persistent autonomy. For example, a cross-platform system consisting of UAVs, USVs and AUVs can be deployed for search and rescue missions in post-disaster scenarios, where the UAV provides long-range detection capability, the USV works as a communication relay station and the AUV is responsible for underwater search and detection. To effectively operate such a combined system, new considerations must be given to the development of the associated control algorithms. In terms of formation control, as each type of vehicle has its unique dynamic characteristics, such a system becomes highly heterogeneous and consequently its formation control becomes more challenging. In addition, when deploying such cross-platform systems to conduct persistent missions, energy consumption will become a significant limitation and would need to be properly addressed by balancing the energy usage issue with other requirements. With respect to path planning, computation efficiency is the major issue that needs to be specifically taken into account, as a cross-platform system would normally be conducting missions in a 3D environment. The path planning algorithms reviewed in this paper normally belong to the class of grid-based path planning algorithms, which are powerful in dealing with 2D environments but lack effectiveness in 3D.
Therefore, sampling-based algorithms such as the rapidly exploring random tree (RRT) BIB001 can be modified and improved for this application (a minimal RRT sketch is given at the end of this section). Another important research area is the development of multi-vehicle systems towards the swarm concept. Because the number of vehicles involved in a swarm is far greater than that in a formation BIB002, the required algorithm for operating a swarm is different and more complex. This has led to the adoption of a large number of bio-inspired control methods, modelled for example on insect colonies and flocks of birds, as they are capable of providing solutions to complex problems that conventional approaches cannot address BIB003. In fact, when developing the algorithm for a swarm, due to the large number of vehicles, which provides a certain degree of redundancy, new functionality called obstacle enclosure can be considered as a potential research area. This would in fact be a new way of dealing with moving obstacles. For example, for a conventional formation system, generating safe evasive actions is always the priority when the formation encounters moving obstacles. However, for a swarm system, instead of avoiding the obstacles, part of the swarm can be used to enclose an obstacle to effectively block its trajectory and delay its movement, while the rest of the swarm continues to transit towards the target point. The mission might still be viewed as accomplished even if only part of the vehicles, rather than all of them, arrive at the target point. To successfully implement such a strategy, the choice of the obstacle enclosure time would be critical and should be calculated according to the movement of the obstacle. Also, internal collisions within the swarm when performing the enclosure are not negligible and must be addressed in the algorithm design.
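As a pointer for the sampling-based direction mentioned above, the following is a minimal 2-D RRT sketch rather than a 3-D kinodynamic planner; the point-robot model, circular obstacles, straight-line steering and all numeric parameters (step size, goal bias, workspace bounds) are illustrative assumptions only.

import math
import random

obstacles = [((5.0, 5.0), 1.5), ((7.5, 2.0), 1.0)]   # (centre, radius), illustrative

def collision_free(p):
    return all(math.dist(p, c) > r for c, r in obstacles)

def rrt(start, goal, step=0.4, goal_bias=0.1, max_iters=5000):
    nodes, parent = [start], {start: None}
    for _ in range(max_iters):
        # Sample the workspace, occasionally biased towards the goal.
        sample = goal if random.random() < goal_bias else \
            (random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        nearest = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d == 0.0:
            continue
        # Steer one fixed step from the nearest node towards the sample.
        new = (nearest[0] + step * (sample[0] - nearest[0]) / d,
               nearest[1] + step * (sample[1] - nearest[1]) / d)
        if not collision_free(new):
            continue
        nodes.append(new)
        parent[new] = nearest
        if math.dist(new, goal) < step:        # goal reached: trace back
            path, n = [goal], new
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
    return None                                 # iteration budget exhausted

path = rrt((0.5, 0.5), (9.0, 9.0))
if path:
    print("path with", len(path), "waypoints")
else:
    print("no path found within budget")

Extending such a sketch to 3D and to kinodynamic constraints mainly changes the sampling space and the steering function, which is why the sampling-based family scales to the cross-platform scenarios discussed above better than grid-based methods.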
Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> Vehicular networks are experiencing rapid growth and evolution under the increasing demand of vehicular traffic management and ubiquitous network connectivity. In particular, the amount of information to be downloaded from the roadside-deployed gateways is dramatically increasing. Affected by high mobility, intermittent connectivity, and unreliability of the wireless channel, it is challenging to satisfy the need for massive data transmission in vehicular networks. In this paper, we propose a novel protocol called vehicular cooperative media access control (VC-MAC), which utilizes the concept of cooperative communication tailored for vehicular networks, particularly for gateway-downloading scenarios. VC-MAC leverages the broadcast nature of the wireless medium to maximize the system throughput. Spatial diversity and user diversity are exploited by concurrent cooperative relaying to overcome the unreliability of the wireless channel in vehicular networks. We theoretically analyze the selection of an optimal relay set using a weighted independent set (WIS) model and then design a backoff mechanism to select the optimal relays in a distributed manner. We have carried out extensive simulations to demonstrate that VC-MAC effectively enhances cooperative information downloading and significantly increases the system throughput compared with existing strategies. <s> BIB001 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> Although there has been a growing literature on cooperative diversity, the current literature is mainly limited to the Rayleigh fading channel model, which typically assumes a wireless communication scenario with a stationary base station antenna above rooftop level and a mobile station at street level. In this paper, we investigate cooperative diversity for intervehicular communication based on cascaded Nakagami fading. This channel model provides a realistic description of an intervehicular channel where two or more independent Nakagami fading processes are assumed to be generated by independent groups of scatterers around the two mobile terminals. We investigate the performance of amplify-and-forward relaying for an intervehicular cooperative scheme assisted by either a roadside access point or another vehicle that acts as a relay. Our diversity analysis reveals that the cooperative scheme is able to extract the full distributed spatial diversity. We further formulate a power-allocation problem for the considered scheme to optimize the power allocated to the broadcasting and relaying phases. Performance gains up to 3 dB are obtained through optimum power allocation, depending on the relay location. <s> BIB002 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> The use of multiple antennas for wireless communication systems has gained overwhelming interest during the last decade - both in academia and industry. Multiple antennas can be utilized in order to accomplish a multiplexing gain, a diversity gain, or an antenna gain, thus enhancing the bit rate, the error performance, or the signal-to-noise-plus-interference ratio of wireless systems, respectively. With an enormous amount of yearly publications, the field of multiple-antenna systems, often called multiple-input multiple-output (MIMO) systems, has evolved rapidly. To date, there are numerous papers on the performance limits of MIMO systems, and an abundance of transmitter and receiver concepts has been proposed.
The objective of this literature survey is to provide non-specialists working in the general area of digital communications with a comprehensive overview of this exciting research field. To this end, the last ten years of research efforts are recapitulated, with focus on spatial multiplexing and spatial diversity techniques. In particular, topics such as transmitter and receiver structures, channel coding, MIMO techniques for frequency-selective fading channels, diversity reception and space-time coding techniques, differential and non-coherent schemes, beamforming techniques and closed-loop MIMO techniques, cooperative diversity schemes, as well as practical aspects influencing the performance of multiple-antenna systems are addressed. Although the list of references is certainly not intended to be exhaustive, the publications cited will serve as a good starting point for further reading. <s> BIB003 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> Vehicular networks have attracted considerable attention recently. With the rapid advance of information technology, it has become easy to support low-cost inter-vehicle communication. In particular, the demand for delay-sensitive applications, such as streaming media distribution, is increasing. However, due to the high mobility, links between roadside units and wireless nodes are intermittent, unreliable and inefficient. We propose cross-layer cooperative routing (CLCR) for vehicular networks to overcome the unreliability of the wireless channel and maximize the system throughput. Since different relay nodes may lead to different results, in this paper we propose a mechanism to choose appropriate relay nodes in vehicular networks. Furthermore, we extend the lifetime of the routing path to reduce the frequency of route rediscovery. Simulation results based on network simulator 2 (ns2) and the mobility model generator for vehicular networks (MOVE) show that the proposed CLCR has good performance. <s> BIB004 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> Diversity, i.e. transmitting multiple replicas of a signal, may mitigate fading in wireless networks. Among other diversity techniques, the space diversity of multi-antenna systems is particularly interesting since it can complement other forms of diversity. The recent cooperative diversity paradigm brings the advantages of multi-antenna space diversity to single antenna networked devices, which, through cooperation and antenna sharing, form virtual antenna arrays. However, cooperative diversity is a complex technique and research on this topic is still in its early stages. This paper aims at providing a general survey on the theoretical framework, and the physical and medium access control proposals in the literature. <s> BIB005 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> Throughput maximization is a key challenge for wireless applications in cognitive Vehicular Ad-hoc Networks (C-VANETs). As a potential solution, cooperative communications, which may increase link capacity by exploiting spatial diversity, has attracted a lot of attention in recent years. However, if link scheduling is considered, this transmission mode may perform worse than direct transmission in terms of end-to-end throughput. In this paper, we propose a cooperative communication aware link scheduling scheme and investigate the throughput maximization problem in C-VANETs.
Regarding the features of cooperative communications and the availability of licensed spectrum, we extend the links into cooperative links/general links, define extended link-band pairs, and form a 3-dimensional (3-D) cooperative conflict graph to characterize the conflict relationship among those pairs. Given all cooperative independent sets in this graph, we mathematically formulate an end-to-end throughput maximization problem and near-optimally solve it by linear programming. Due to the NP-completeness of finding all independent sets, we also develop a heuristic pruning algorithm for cooperative communication aware link scheduling. Our simulation results show that the proposed scheme is effective in increasing end-to-end throughput for the session in C-VANETs. <s> BIB006 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> Authentication is one of the essential frameworks to ensure safe and secure message dissemination in Vehicular Ad-hoc Networks (VANETs). However, an optimized authentication algorithm with reduced computational overhead is still a challenge. In this paper, we propose a novel classification of safety-critical messages and provide an adaptive algorithm for authentication in VANETs using the concept of the Merkle tree and the Elliptic Curve Digital Signature Algorithm (ECDSA). Here, the Merkle tree is constructed to store the hashed values of public keys at the leaf nodes. This algorithm addresses denial of service (DoS) attacks, man-in-the-middle attacks and phishing attacks. Experimental results show that the algorithm reduces the computational delay by 20 percent compared to existing schemes. <s> BIB007 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> Vehicle-to-vehicle (V2V) communications are considered to be a significant step toward a highly secure and efficient intelligent transportation system. In this paper, we propose the use of graph theory to formulate the problem of cooperative communications scheduling in vehicular networks. In lieu of an exhaustive search with intractable complexity for the maximum sum rate (MSR), we propose a bipartite-graph-based (BG) scheduling scheme to allocate the vehicle-to-infrastructure (V2I) and V2V links for both single-hop and dual-hop communications. The Kuhn–Munkres (KM) algorithm is adopted to solve the problem of maximum weighted matching (MWM) of the constructed BG. Simulation results indicate that the proposed scheme performs extremely close to the optimal scheme and results in better fairness among vehicle users with considerably lower computational complexity. Moreover, cooperative communications can improve both the throughput and spectral efficiency (SE) of vehicular networks. <s> BIB008 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> In vehicular ad hoc networks (VANETs), some distinct characteristics, such as high node mobility, introduce new non-trivial challenges to quality-of-service (QoS) provisioning. Although some excellent works have been done on QoS issues in VANETs, security issues are largely ignored in these works. However, it is known that security always comes at a price in terms of QoS performance degradation. In this article, we consider security and QoS issues jointly for VANETs with cooperative communications. We take an integrated approach of optimizing both security and QoS parameters, and study the tradeoffs between them in VANETs. Specifically, we use recent advances in cooperative communication to enhance the QoS performance of VANETs.
In addition, we present a prevention-based security technique that provides both hop-by-hop and end-to-end authentication and integrity protection. We derive the closed-form effective secure throughput considering both security and QoS provisioning in VANETs with cooperative communications. The system is formulated as a partially observable Markov decision process. Simulation results are presented to show that security schemes have significant impacts on the throughput QoS of VANETs, and that our proposed scheme can substantially improve the effective secure throughput of VANETs with cooperative communications. <s> BIB009 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> Cooperative communication is a promising and practical technique for realizing spatial diversity through a virtual antenna array formed by multiple antennas of different nodes. There has been a growing interest in designing and evaluating efficient cooperative medium access control (MAC) protocols in recent years. With the objective of translating a cooperative diversity gain at the physical layer to cooperative advantages at the MAC layer, an efficient cooperative MAC protocol should be able to accurately identify a beneficial cooperation opportunity, efficiently select the best relay(s), and coordinate the cooperative transmission at low cost and complexity. However, due to the randomness of channel dynamics, node mobility, and link interference, the design of an efficient cooperative MAC protocol is of great challenge, especially in a wireless multi-hop mobile network. In this article, we aim to provide a comprehensive overview of the existing cooperative MAC protocols according to their specific network scenarios and associated research problems. Three critical issues (i.e., when to cooperate, whom to cooperate with, and how to cooperate) are discussed in detail, which should be addressed in designing an efficient cooperative MAC protocol. Open research issues are identified for further research. <s> BIB010 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> BIB011 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> The rapid evolution of wireless communication capabilities and vehicular technology would allow traffic data to be disseminated by traveling vehicles in the near future. Vehicular Ad hoc Networks (VANETs) are self-organizing networks that can significantly improve traffic safety and travel comfort, without requiring fixed infrastructure or centralized administration. However, data dissemination in the VANET environment is a challenging task, mainly due to rapid changes in network topology and frequent fragmentation. In this paper, we survey existing data dissemination techniques and their performance modeling approaches in VANETs, along with optimization strategies under two basic models: the push model and the pull model. In addition, we present major research challenges. <s> BIB012 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> In vehicular ad hoc networks (VANETs), the network services and applications (e.g., safety messages) will require an exchange of vehicle and event location information. Effective lane changing and routing in vehicular ad hoc networks are challenging tasks.
This paper aims to propose a solution to ensure the safety of drivers while changing lanes on highways. Efficient and faster routing protocols could play a crucial role in the applications of VANETs, safeguarding both the drivers and the passengers and thus maintaining a safe on-road environment. In this paper we propose SBLS: a Speed Based Lane changing System in VANETs, for effective lane changing in the dynamic mobility model. In our approach we present a lane changing system based on speed and the minimum gap between vehicles in the VANET. A test bed is created for the techniques used in the proposed system, where the analysis takes place in the on-board embedded system designed for vehicle navigation. The designed system was tested on a 4-lane road at Neemrana in India. Successful simulations have been conducted along with real-time network parameters to maximize the QoS (quality of service) and performance using SUMO and NS-2. <s> BIB013 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> Efficient intersection control represents a major challenge in traffic management, as it can contribute to reducing traffic congestion and travel times. Communicating vehicles, for instance using VANETs, open up new opportunities for intersection control, providing fairness and throughput to transportation networks. In this paper, we are interested in the tradeoffs between fairness and throughput in intersection control. Our key contributions are (i) novel intersection control algorithms which consider both fairness and throughput, and (ii) a simulative evaluation which compares these algorithms with other solutions. We evaluate the algorithms in a comparative simulation study, using microscopic traffic simulation and considering different traffic demands. <s> BIB014 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> Compute-and-forward (CF) harnesses interference in a wireless network by allowing relays to compute combinations of source messages. The computed message combinations at relays are correlated, and so directly forwarding these combinations to a destination generally incurs information redundancy and spectrum inefficiency. To address this issue, we propose a novel relay strategy, termed compute-compress-and-forward (CCF). In CCF, source messages are encoded using nested lattice codes constructed on a chain of nested coding and shaping lattices. A key difference of CCF from CF is an extra compressing stage inserted in between the computing and forwarding stages of a relay, so as to reduce the forwarding information rate of the relay. The compressing stage at each relay consists of two operations: first to quantize the computed message combination on an appropriately chosen lattice (referred to as a quantization lattice), and then to take modulo on another lattice (referred to as a modulo lattice). We study the design of the quantization and modulo lattices and propose successive recovering algorithms to ensure the recoverability of source messages at the destination. Based on that, we formulate a sum-rate maximization problem that is in general an NP-hard mixed integer program. A low-complexity algorithm is proposed to give a suboptimal solution. Numerical results are presented to demonstrate the superiority of CCF over the existing CF schemes. <s> BIB015 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> In this paper, we focus our attention on the cooperative uplink transmissions of systems beyond the LTE-Advanced initiative.
We commence with a unified treatment of the principle of single-carrier frequency-division multiple access (FDMA), and the similarities and dissimilarities, advantages, and weaknesses of the localized FDMA, the interleaved FDMA, and the orthogonal FDMA systems are compared. Furthermore, the philosophy of both user cooperation and cooperative single-carrier FDMA is reviewed. They are investigated in the context of diverse topologies, transmission modes, resource allocation, and signal processing techniques applied at the relays. Benefits of relaying in LTE-Advanced are also reviewed. Our discussions demonstrate that these advanced techniques optimally exploit the resources in the context of the cooperative single-carrier FDMA system, which is a promising enabler for various uplink transmission scenarios. <s> BIB016 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> In current Intelligent Transportation Systems (ITS), traffic monitoring and incident detection are usually supported with mostly traditional and relatively slow-reacting technologies. In this paper we propose a new service, namely THOR (Traffic monitoring Hybrid ORiented service), able to combine two different wireless technologies and to provide real-time information about vehicular traffic monitoring and incident detection. THOR relies on LTE (Long Term Evolution) and Dedicated Short Range Communication based VANETs (Vehicular ad-hoc NETworks) in a hybrid approach, which is compliant with ITS standards. This hybrid networking approach can be deployed today and can be ready for tomorrow's VANET technology. We test THOR by simulations in a scenario with vehicle flows synthesized from real measured vehicular traffic traces. We provide an LTE load analysis and an assessment of incident detection capabilities. Our results are promising in terms of reactivity, precision and network traffic load sustainability. <s> BIB017 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> For relay terminals in wireless communication systems, the difference in the power consumed for relaying signals means unfairness, which may reduce the network lifetime when the system is energy-constrained. Classic opportunistic relay selection always causes unequal power consumption among all relays. In this study, the authors propose a novel distributed relay selection strategy, named the fair opportunistic relay selection (FORS) strategy, for amplify-and-forward (AF) opportunistic cooperative systems. The FORS strategy is designed based on physical-layer fairness, which means all available relays cumulatively consume equal power. They use a set of weight coefficients to adjust the channel fading coefficients effectively and then change the selection probabilities for all relays on the basis of proportional fair scheduling. Considering that the 'optimal' relay can be selected proactively in quasi-static Rayleigh fading channels based on local channel state information, the overhead of the proposed scheme is small. Then, they analyse the performance of the FORS strategy and provide an exact analytical expression for the outage probability (P_out) and the average symbol error probability. Numerical simulation results validate their analysis. The results show that the FORS strategy approximately achieves the upper bound of physical-layer fairness in the AF relaying system. <s> BIB018 </s> Cooperative Vehicular Networking: A Survey <s> I.
INTRODUCTION <s> In wireless distributed networks, multisource multirelay cooperative techniques can be used to exploit the spatial and temporal diversity gains to increase the performance or reduce the transmission energy consumption, which is very useful for intelligent transport system (ITS) networks. In this paper, we propose a power allocation method to optimize the hybrid decode–amplify–forward cooperative transmission for multisource multirelay ITS networks as a means to reduce the total power consumption while minimizing the outage probability. Specifically, we derive closed-form outage probability expressions and present an energy-efficient relay selection method to form an optimal relay set. It is proven that the proposed methods can solve the joint power allocation and relay selection problem under an outage probability constraint. Our performance analysis is supplemented by numerical simulation results to illustrate the significant energy savings of the proposed optimal power allocation and relay selection methods. <s> BIB019 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> Cooperative communication (CC) has been introduced as an effective technique to combat the detrimental effects of channel fading by exploiting spatial diversity gain, resulting in improved throughput and network performance. CC provides an opportunity for single-antenna nodes to share their resources and construct a virtual antenna array at a lower cost. As a result, CC is considered an efficient solution for mobile nodes where difficulties in terms of physical size and energy consumption arise from implanting multiple antennas. However, since CC is a new technology it brings new challenges that should be adequately addressed to render it a viable solution for wireless communication. Parameters such as link reliability, energy efficiency, overall throughput, and network performance are all affected by cooperative transmission. Besides, the nodes' operation in the physical layer should be coordinated with higher layers, especially with medium access control (MAC), for reliable operation in time-varying channels. Accordingly, designing a cooperative MAC protocol that supports node coordination, error recovery and dynamic link optimization is important. In this paper, the most well-known cooperative MAC protocols are classified based on their channel access strategy into two groups: 1) contention-based and 2) contention-free schemes. At first, the preliminaries, constraints, and requirements for designing effective cooperative MAC protocols are illustrated. Then the current state-of-the-art cooperative MAC protocols are surveyed by benchmarking their scheduling schemes, characteristics, benefits, and drawbacks, in line with the suggested taxonomy. The cooperative MAC protocols are classified and analyzed based on their application and the network utilized into five subsections, including vehicular networks, cognitive networks, multi-hop protocols, cross-layer protocols, and network coding-based protocols. Finally, challenges, open issues, and solutions are considered, which may be used in improving the available schemes or designing more reliable and effective cooperative MAC protocols in the future. <s> BIB020 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> Intermittently connected vehicular networks (ICVNs) consist of stationary roadside units (RSUs) deployed along the highway and mobile vehicles.
ICVNs are generally infrastructure-constrained with a long inter-RSU distance, leading to large dark areas and transmission outage. In this paper, we propose a novel cooperative store–carry–forward (CSCF) scheme to reduce the transmission outage time of vehicles in the dark areas. The CSCF scheme utilizes bidirectional vehicle streams and selects two vehicles in both directions to serve as relays successively for the target vehicle via inter-RSU cooperation. Compared with the existing schemes, simulation results demonstrate that the proposed CSCF scheme has a great advantage in reducing transmission outage time. <s> BIB021 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> The concept of cooperative communication appears as a beneficial method that can address key challenges faced by wireless networks. Cooperative techniques in IEEE 802.11 MAC protocols have thus received significant attention in both theoretical and practical aspects. In this survey article, we provide an overview of existing research on cooperative MAC protocols in the IEEE 802.11 standard. We especially focus on protocol behavior and propose a novel architectural model for cooperation. We present a classification of cooperative relay based MAC protocols along the desired categories of the model, and review representative cooperative protocols for 802.11. We further evaluate the operational issues of cooperative protocols in terms of architecture, compatibility and complexity. <s> BIB022 </s> Cooperative Vehicular Networking: A Survey <s> I. INTRODUCTION <s> Cooperative intelligent transport system (C-ITS) is an emerging technology that enables secure and safe road travel using wireless communications. Vehicles regularly share their mobility information with the neighborhood road traffic and infrastructure-based road side units using cooperative awareness messages (CAMs) to develop a local dynamic map for safety applications. Since the applications provided by C-ITS are related to human safety, reliable as well as secure communications are required. To protect the vehicular network from untrusted data of malicious users that could cause network congestion and reduce vehicle safety, ITS standards have proposed various security procedures. In this paper, we analyze the interrelation between security, quality of service (QoS), and safety awareness of vehicles in C-ITS. To formulate an accurate measure of vehicle safety awareness, we first propose novel vehicle- and infrastructure-centric metrics that use the number of received CAMs, their accuracy, safety importance, and vehicle heading. We then implement the standard CAM signature and verification procedure in the ITS standard. Using simulation results and our proposed metrics, we show the impact of security signature and verification speed on the level of vehicle awareness and, hence, QoS in different road traffic conditions. <s> BIB023
With the convergence of computer, vehicular infrastructure, communication, and automobile technologies, research in the area of vehicular networks has reached new horizons in its development. These remarkable advancements have enabled researchers and engineers to predict the future of driverless cars that will be based not only on in-car sensors, but also on communication between vehicles. The experts at the Institute of Electrical and Electronics Engineers (IEEE) predict that autonomous cars will comprise 75% of total traffic on the road by the year 2040 BIB007. The emergence of such vehicles and their networks will impose new requirements for applications and services, such as safety messaging BIB007, traffic monitoring BIB017, lane changing BIB013, and intersection management BIB014. Some of the important challenges facing vehicular networking are due to the high-mobility nature of vehicular communications, randomness in channel dynamics, and link interference. In this context, researchers have shown interest in employing cooperative communications within vehicular networks to alleviate the impact of these challenges and improve reliability by enabling nodes to cooperate with each other. In cooperative networking, neighboring nodes can cooperate with each other by transmitting the overheard messages towards the intended destination. Indeed, over the past few decades, researchers have extensively investigated the potential of cooperative communication in designing protocols that involve the physical (PHY), medium access control (MAC) and network layers. For example, PHY protocols employ different strategies for cooperation, such as amplify-and-forward BIB018, compress-and-forward BIB015, store-and-forward BIB021, and decode-and-forward. Cooperation at the PHY layer imposes complex and manual requirements for operators and end-users BIB005. This sparks a need to design intelligent cooperation functionality at the MAC layer to enable nodes to automatically manage the physical layer cooperation BIB022. For instance, when a relay node is required to assist communication between transmitter and receiver, an exchange of extra control messages may be required for relay selection at the MAC layer BIB001 - BIB011 (a toy relay-selection sketch is given below). In addition, routing protocols can further benefit from cooperation between the MAC and PHY layers in selecting a suitable path from source to destination - BIB004. Also, a great deal of research has been carried out with respect to power allocation BIB019, BIB002, link scheduling BIB006 - BIB008, and security BIB009 - BIB023. These cooperative strategies highlight a few examples of the wide-ranging research activities covering routing protocols, MAC protocols, traffic management, beaconing protocols, and mobility models, which are built upon a few decades of research progress in the general area of vehicular communications and cooperative networking. While this is the first survey paper of its kind that primarily focuses on cooperation in vehicular networks, there do exist a number of survey papers that cover cooperative networking BIB022, BIB020 - BIB010 or vehicular communication BIB012 - in general. Figure 1 shows the classification of related surveys and highlights the research gaps with respect to this survey. In addition, there are other survey papers that are mainly concerned with the physical layer aspects of cooperative communications. Interested readers are referred to BIB005 and BIB016 - BIB003, where physical layer cooperative communications are reviewed in detail.
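As a toy illustration of the relay-selection decision mentioned above, the Python sketch below implements the well-known max-min (opportunistic) selection rule: each candidate two-hop link is rated by its weaker hop, and the relay with the strongest bottleneck is chosen. This is one standard policy from the cooperative-communications literature, not the specific mechanism of any protocol surveyed here; the unit-mean Rayleigh channel draws are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
num_relays = 5
# Squared channel magnitudes for each hop; Rayleigh fading => exponential.
g_sr = rng.exponential(1.0, num_relays)   # source -> relay
g_rd = rng.exponential(1.0, num_relays)   # relay -> destination

bottleneck = np.minimum(g_sr, g_rd)       # a two-hop link is limited by its weaker hop
best = int(np.argmax(bottleneck))
print(f"relay {best} selected, bottleneck gain {bottleneck[best]:.3f}")

In a distributed MAC realization, each relay would typically map its bottleneck gain to a backoff timer so that the best relay answers first, which is one way the control-message exchange noted above can be kept small.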
Considering the theme of our survey paper, we have selected a set of research articles that address issues specific to cooperation among nodes in vehicular networks. The remainder of the survey is organized into six sections. Section II briefly discusses vehicular networks, cooperative communication in traditional wireless networks, and the concept of CVN. Section III presents recent advances in cooperative vehicular networks and further investigates the similarities and differences in recent research works in the domain of CVN. A taxonomy of cooperative vehicular networks is derived from the literature and presented in Section IV. Section V discusses the key requirements that should be fulfilled to enable CVN. Section VI highlights the open research challenges in realizing the vision of CVN. Section VII concludes the paper.
Cooperative Vehicular Networking: A Survey <s> A. Vehicular Networks <s> Recently, vehicular communication has become an important application in wireless communications. The Long Term Evolution (LTE) system is considered one candidate to realize vehicular communication for many reasons, such as: 1. The LTE system provides extensive coverage. 2. The LTE system provides low transmission latency. 3. In Release 12, the LTE system supports the ProSe device-to-device (D2D) service, which can be extended to support vehicular communication. <s> BIB001 </s> Cooperative Vehicular Networking: A Survey <s> A. Vehicular Networks <s> Vehicular localization plays an important role in enabling safety and traffic flow control in intelligent transportation systems (ITS). The global positioning system (GPS) is generally used to localize a vehicle, but it is challenging to obtain an accurate location when the GPS signal is blocked from the satellites or corrupted by multipath propagation. In this work, we present a GPS-less localization system using dedicated short range communication (DSRC). The primary objective is to accurately localize a vehicle without GPS. To this end, we propose a localization scheme using infrastructure-to-vehicle (I2V) and vehicle-to-vehicle (V2V) communications. <s> BIB002 </s> Cooperative Vehicular Networking: A Survey <s> A. Vehicular Networks <s> The IEEE 802.11p standard was drafted to support wireless access in vehicular environments (WAVE), or vehicular ad hoc networks (VANETs), and uses the Enhanced Distributed Channel Access (EDCA) mechanism for contention-based prioritized Quality of Service (QoS) at the MAC layer. The EDCA mechanism defines four access categories (ACs). Each AC queue works as an independent DCF station (STA) with an Enhanced Distributed Channel Access Function (EDCAF) to contend for Transmission Opportunities (TXOPs) using its own EDCA parameters. This paper provides an analytical model to compute the performance of the IEEE 802.11p Enhanced Distributed Channel Access Function (EDCAF) for vehicular networks. To develop the model, the four access categories (ACs) and all the major factors that could affect the performance are considered. The relationship among the IEEE 802.11p EDCA parameters and performance metrics is derived through Markov chain based theoretical analysis. Moreover, the derived performance model is verified by simulation. <s> BIB003
Vehicular networks have emerged as a result of advancements in wireless technologies, ad-hoc networking, and the automobile industry. These networks are formed among moving vehicles, road side units (RSUs), and pedestrians that carry communication devices. Vehicular networks can be deployed in rural, urban, and highway environments. There are three main scenarios for vehicular communication: vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-pedestrian (V2P) BIB001. The commonly used technologies are dedicated short-range communications (DSRC) BIB002 /IEEE 802.11p BIB003, the IEEE 1609 family of standards, and Long Term Evolution (LTE). Some of the key technologies that shape the modern automobile industry and vehicular networks are described in and respectively. With the advancements in communication technologies, a number of promising applications are emerging for vehicular networks. These are mainly related to infotainment, active road safety, and traffic management. These applications impose
Cooperative Vehicular Networking: A Survey <s> B. Cooperative Communication <s> In this study, the authors present a novel design framework aimed at developing 'cooperative diversity' in 802.11-based wireless sensor networks. The proposed scheme is a combination of a time-reversed space-time block code scheme at the physical layer and a cooperative routing protocol at the network layer. The core feature of this architecture is that the multiple routes are capable of assisting each other's transmissions, hence the reliability of all the wireless links is enhanced simultaneously by cooperative diversity. This involves the design of physical layer transmission schemes, medium access protocols and routing strategies. For the latter in particular, the authors present a cooperative routing protocol that is capable of exploiting full transmit diversity in wireless sensor networks. The authors restrict themselves to imposing as few modifications to existing schemes as possible, so that integration with the existing infrastructure will be straightforward. Comprehensive simulations have been carried out to demonstrate the end-to-end performance of the proposed scheme. It is shown that a substantial gain can be achieved by cooperative diversity using a virtual multiple input multiple output system architecture. <s> BIB001 </s> Cooperative Vehicular Networking: A Survey <s> B. Cooperative Communication <s> Cooperative beamforming is an efficient way to make use of the cooperative diversity in cognitive radio networks. In this paper, we consider a multi-user cognitive radio network where multiple relay nodes are selected to perform cooperative beamforming. Aiming to satisfy multi-Quality of Service (multi-QoS) requirements involving delay, bandwidth and BER, we propose a system transmission scheme which achieves the satisfaction of bandwidth requirements and delay minimization by dynamic channel allocation with queuing; meanwhile, a mean channel gain based multi-relay selection is proposed so as to guarantee the end-to-end BER as well as keep complexity low. The novelty of these strategies is that they satisfy the multi-QoS requirements at the two-hop level. Simulation results indicate that the proposed scheme can achieve better performance, with all QoS requirements satisfied. <s> BIB002 </s> Cooperative Vehicular Networking: A Survey <s> B. Cooperative Communication <s> Introducing cooperative communication into Cognitive Radio Networks (CRNs) can greatly improve system performance, but it also increases the complexity of the CRNs. As is well known, the average throughput of the CRNs will decrease as the number of relay nodes increases. In order to solve this problem, we used a selection amplify-and-forward scheme for relaying. In order to maximize the system throughput of the CRNs, based on maximizing the received SNR, we proposed an optimal power allocation algorithm combined with best relay selection and cooperative communication. First, by the best relay selection algorithm, we can find the best relay link. Then, an optimal power allocation algorithm was proposed to maximize the system capacity of the CRNs under the constraints of limited interference to the primary users and limited total transmission power of the secondary users. Using convex optimization theory and Kuhn–Tucker conditions, we obtain closed-form expressions for the optimal solution.
Finally, Matlab-based simulation experiments are used to verify the superiority of the proposed algorithms. They show that these algorithms can effectively reduce power consumption, fully use the spatial diversity gain, increase system capacity, and improve QoS and system reliability. <s> BIB003 </s> Cooperative Vehicular Networking: A Survey <s> B. Cooperative Communication <s> Incorporation of orthogonal frequency division multiplexing (OFDM) in cooperative cognitive radio networks facilitates subcarrier sharing to achieve spatial diversity with opportunistic spectrum access. In addition, adaptive modulation has been adopted widely in wireless communication to improve spectral efficiency. The use of adaptive modulation for cooperative cognitive relaying transmission to maximize throughput under a bit error rate (BER) constraint is an open issue. In this paper, we propose an adaptive subcarrier sharing scheme for an OFDM-based cooperative cognitive radio system, wherein the cognitive (secondary) system helps the primary system to achieve its target rate of communication in exchange for opportunistic spectrum sharing. The secondary transmitter uses an adaptive mode of transmission to relay the primary signal with higher throughput while maintaining the BER constraint of the primary system. At the primary receiver, a BER-based selection combining scheme is employed to combine the signals received in the two phases. Closed-form analytical expressions for the BER and outage probability of the primary and secondary systems for a Rayleigh flat fading channel have been derived. Results show that the outage probability with the proposed scheme (for dissimilar modulation) outperforms direct transmission and the conventional maximal ratio combining scheme (for similar modulation). <s> BIB004 </s> Cooperative Vehicular Networking: A Survey <s> B. Cooperative Communication <s> Cooperative communication techniques have gained considerable attention in recent times as a means to improve the quality of service (QoS) of ad hoc networks. Cooperative communication significantly improves link capacity through physical layer techniques, and spatial diversity gain is achieved by using neighboring nodes to retransmit the overheard information to the intended destination node. However, upper layer protocols are not elegantly designed to adequately exploit the spatial diversity to improve overall network performance in ad hoc networks. Limiting the cooperation to one network layer may not be the best solution. Thus, in this paper, we intend to achieve multilayer functionality from the physical layer to the routing layer to provide cooperative communication. An adaptive cross-layered cooperative routing algorithm (ACCR) is proposed to analyze the channel state variations and selectively choose the cooperative MAC scheme on demand by exploiting spatial diversity. The algorithm dynamically selects the best relay candidates based on a QoS metric, contention delay and node energy fairness. The network layer then chooses an optimized path from source to destination through the selected relay nodes. We validate the algorithm with extensive simulations. The results clearly show that the cooperative cross-layer design approach effectively improves the average throughput and average delay for each packet transmission.
Cooperative communication is an emerging technology that enables efficient spectrum use by exploiting the wireless broadcast advantage, i.e., surrounding nodes can overhear the signal transmitted from a source to a destination. By one common definition, "cooperative communication refers to the processing of overheard information at the surrounding nodes and retransmission towards the destination to create spatial diversity." More specifically, cooperative communication can help achieve higher spatial diversity BIB001, lower transmission delay BIB002, higher throughput BIB004, adaptability to network conditions BIB005, and reduced interference BIB003. Given these features, cooperative communication technology can play an important role in improving the overall performance of vehicular networks.
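As a rough, self-contained illustration of the diversity benefit, the following Python sketch estimates the outage probability of direct transmission versus simple selection decode-and-forward relaying over Rayleigh fading. The model (unit-mean fading on every link, a 1 bit/s/Hz rate threshold, and no half-duplex rate penalty) is our own simplification for illustration, not a setup taken from any of the cited works.

```python
# Minimal sketch of the diversity benefit of cooperative relaying:
# outage of direct transmission vs. selection decode-and-forward,
# assuming unit-mean Rayleigh fading on every link (illustrative only;
# the half-duplex rate loss of relaying is ignored for simplicity).
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200_000
snr_db = 10.0                      # assumed average SNR on each link
snr = 10 ** (snr_db / 10)
rate_threshold = 1.0               # outage if log2(1 + SNR*|h|^2) < 1 bit/s/Hz

def channel_gain(size):
    # |h|^2 under Rayleigh fading is exponentially distributed with unit mean
    return rng.exponential(1.0, size)

g_sd = channel_gain(n_trials)      # source -> destination
g_sr = channel_gain(n_trials)      # source -> relay
g_rd = channel_gain(n_trials)      # relay -> destination

def in_outage(gain):
    return np.log2(1 + snr * gain) < rate_threshold

# Direct transmission: outage depends only on the S-D link.
out_direct = in_outage(g_sd)

# Selection decode-and-forward: the relay retransmits only if it decoded
# the overheard packet; the destination then keeps the better copy.
relay_decoded = ~in_outage(g_sr)
best_gain = np.where(relay_decoded, np.maximum(g_sd, g_rd), g_sd)
out_coop = in_outage(best_gain)

print(f"direct outage:      {out_direct.mean():.4f}")
print(f"cooperative outage: {out_coop.mean():.4f}")
```

Running this sketch shows the cooperative outage probability falling well below the direct one at the same average SNR, which is exactly the spatial-diversity effect the definition above describes.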
Cooperative Vehicular Networking: A Survey <s> C. Cooperative Vehicular Networking (CVN) <s> Vehicular communication technologies have recently been recognized as staples of modern societies. When a cooperative approach is employed in a vehicular communication system, it performs more effectively in avoiding accidents and traffic congestion. In vehicular networks, cooperative Multi-Input Multi-Output (MIMO) and cooperative relay techniques increase the performance as well as reduce the transmission energy consumption by exploiting the spatial and temporal diversity gain. The energy efficiency characteristics of cooperative techniques in vehicular networks are of prime importance, as the energy consumption of wireless nodes embedded on road infrastructure is constrained. In this paper, applications of cooperative communication techniques in vehicular networks are proposed. Also, we compare the performance and the energy consumption of cooperative techniques with the traditional multi-hop technique over a Rayleigh channel, considering MQAM as a design example. The optimal cooperative strategy for energy-constrained road infrastructure networks in ITS applications can be selected using this performance optimization in cooperative communication. <s> BIB001 </s> Cooperative Vehicular Networking: A Survey <s> C. Cooperative Vehicular Networking (CVN) <s> Cooperative transmission is an effective approach for vehicular communications to improve the wireless transmission capacity and reliability in fifth generation (5G) small cell networks. Based on distances between the vehicle and cooperative small cell BSs, the cooperative probability and the coverage probability have been derived for 5G cooperative small cell networks where small cell base stations (BSs) follow Poisson point process distributions. Furthermore, the vehicular handoff rate and the vehicular overhead ratio have been proposed to evaluate the vehicular mobility performance in 5G cooperative small cell networks. To balance the vehicular communication capacity and the vehicular handoff ratio, an optimal vehicular overhead ratio can be achieved by adjusting the cooperative threshold of 5G cooperative small cell networks. <s> BIB002 </s> Cooperative Vehicular Networking: A Survey <s> C. Cooperative Vehicular Networking (CVN) <s> This paper investigates information services in vehicular networks via cooperative infrastructure-to-vehicle (I2V) and vehicle-to-vehicle (V2V) communications. In particular, we consider the cooperation among multiple roadside units (RSUs) in a bidirectional roadway scenario in providing data services. The primary objective is to best exploit the channel efficiency of I2V/V2V communications to maximize the system performance. Specifically, we formulate the problem and propose a Maximum Service (MS) algorithm that combines the following three approaches. First, a hybrid I2V/V2V data dissemination scheduling policy is proposed to enable data services in the RSU's coverage. Second, a cooperative V2V data sharing mechanism outside the RSUs' coverage is proposed by assigning server-vehicles (SVs) to offload RSUs' workload. Third, a data dissemination policy for SVs is proposed to further enhance overall system performance. Finally, we build the simulation model and give a comprehensive performance evaluation to demonstrate the superiority of the proposed solution. <s> BIB003
Similar to other wireless networks, cooperative communication in vehicular networks has also been leveraged to offer various improvements, namely higher spectral efficiency, increased transmission reliability, and reduced transmission delay BIB002, BIB003. CVN enables neighboring vehicles to cooperate with each other by sharing information at different layers of the network, so that a sender has multiple transmission alternatives for robust communication. Vehicles can cooperate with each other either directly or through roadside infrastructure. Usually, the vehicular node that helps the sender node to transmit its data is called a helper node or relay node. Please note that, for the sake of consistency, we use the term "relay node" instead of "helper node" throughout this paper. The relay node can operate in different transmission modes, such as amplify-and-forward, decode-and-forward, compress-and-forward, and store-carry-and-forward; a minimal sketch contrasting two of these modes is given after this paragraph. A summary of various strategies for cooperative communication in vehicular networks is presented in BIB001. Figure 2 shows a simple illustration of CVN where cooperation is performed in different ways. For example, a vehicle can provide assistance to other vehicles with failed direct transmissions, as illustrated in Figure 2a. Similarly, a vehicle can assist an RSU in relaying its packets to other vehicles that are out of the RSU transmission range (Figure 2b). Figure 2c shows a scenario where both an RSU and a vehicle node are involved in relaying a failed packet transmission. For instance, when a source RSU fails to successfully transmit a packet to the targeted destination, it forwards the failed packet to the next RSU along the path using the backhaul wired connection. The new RSU relays the received packet to a vehicle moving towards the targeted destination, which carries the relayed packet and transmits it once it is within transmission range of the targeted destination.
Fig. 3. Illustrations of cooperative diversity.
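To make the relay modes above concrete, the following minimal Python sketch contrasts amplify-and-forward and decode-and-forward processing of a single overheard BPSK symbol. The function names, fading model, and numeric values are our own illustrative assumptions, not code from any cited protocol.

```python
# Hypothetical sketch contrasting two relay modes on one complex baseband
# sample (BPSK, unit transmit-power constraint at the relay).
import numpy as np

rng = np.random.default_rng(1)

def amplify_and_forward(y_sr, h_sr, noise_var, p_relay=1.0):
    # Scale the noisy observation so the retransmission meets the relay's
    # power budget; note that the noise is amplified along with the signal.
    beta = np.sqrt(p_relay / (np.abs(h_sr) ** 2 + noise_var))
    return beta * y_sr

def decode_and_forward(y_sr, h_sr):
    # Equalize, make a hard BPSK decision, and retransmit a clean symbol;
    # here decision errors at the relay, not noise, are what propagates.
    return np.sign(np.real(y_sr / h_sr))

# One overheard BPSK symbol on the source -> relay link (illustrative values).
x = 1.0
h_sr = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)   # Rayleigh fading
noise = 0.1 * (rng.normal() + 1j * rng.normal())         # variance 0.02
y_sr = h_sr * x + noise

print("AF relays:", amplify_and_forward(y_sr, h_sr, noise_var=0.02))
print("DF relays:", decode_and_forward(y_sr, h_sr))
```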
Cooperative Vehicular Networking: A Survey <s> A. Physical Layer Cooperation in CVN <s> In wireless distributed networks, cooperative relay and cooperative multiple-input-multiple-output (MIMO) techniques can be used to exploit the spatial and temporal diversity gains to increase the performance or reduce the transmission energy consumption. The energy efficiency of cooperative MIMO and relay techniques is then very useful for the infrastructure-to-vehicle (I2V) and infrastructure-to-infrastructure (I2I) communications in intelligent transport system (ITS) networks, where the energy consumption of wireless nodes embedded on road infrastructure is constrained. In this paper, applications of cooperation between nodes to ITS networks are proposed, and the performance and the energy consumption of cooperative relay and cooperative MIMO are investigated and compared with the traditional multihop technique. The comparison between these cooperative techniques helps us choose the optimal cooperative strategy in terms of energy consumption for energy-constrained road infrastructure networks in ITS applications. <s> BIB001 </s> Cooperative Vehicular Networking: A Survey <s> A. Physical Layer Cooperation in CVN <s> The articles in this special issue focus on the technology and applications supported by virtual multiple antennas, or VMIMOs. The impetus for this has been spurred by the strong desire to understand VMIMO, which is a rapidly growing research area. VMIMO is believed to be a key technology for beyond 4th generation mobile communications technologies (B4G). It enables one to make use of all the neighboring terminals and amortize the cost of multiple antennas; hence, a large MIMO channel can be created to increase capacity significantly as well as improve error rate performance. Nevertheless, fundamental roadblocks need to be addressed in order to take full advantage of VMIMO. <s> BIB002 </s> Cooperative Vehicular Networking: A Survey <s> A. Physical Layer Cooperation in CVN <s> Cooperative communication has been recently applied to vehicular networks to enable coverage extension and enhance link reliability through distributed spatial diversity. In this paper, we investigate the performance of cooperative vehicular relaying over a doubly-selective (i.e., frequency-selective and time-selective) fading channel for an LTE-Advanced downlink session. Using Amplify-and-Forward (AF) relaying with orthogonal cooperation protocol and Multiple-Input Multiple-Output (MIMO) deployment at the source and destination, we derive a pairwise error probability (PEP) expression and demonstrate the achievable diversity gains. Space-Time Block Coding (STBC) is used to ensure the orthogonality of the transmitted-received signals. Our results demonstrate that, via proper linear precoding constellation, the proposed cooperative-MIMO vehicular relaying is capable of extracting the maximum available diversity in frequency (through multipath diversity), time (through Doppler diversity) and space (through cooperative diversity as well as the MIMO deployment) dimensions. We further conduct numerical simulations to confirm the analytical derivations and present the error rate performance of the cooperative relaying vehicular scheme under consideration. <s> BIB003 </s> Cooperative Vehicular Networking: A Survey <s> A. Physical Layer Cooperation in CVN <s> Recent years have seen a lot of work in moving distributed MIMO from theory to practice. 
While this prior work demonstrates the feasibility of synchronizing multiple transmitters in time, frequency, and phase, none of these systems delivers a full-fledged PHY capable of supporting distributed MIMO in real-time. Further, none of them can address dynamic environments or mobile clients. Addressing these challenges requires new solutions for low-overhead and fast tracking of wireless channels, which are the key parameters of any distributed MIMO system. It also requires a software-hardware architecture that can deliver distributed MIMO within a full-fledged 802.11 PHY, while still meeting the tight timing constraints of the 802.11 protocol. This architecture also needs to perform coordinated power control across distributed MIMO nodes, as opposed to simply letting each node perform power control as if it were operating alone. This paper describes the design and implementation of MegaMIMO 2.0, a system that achieves these goals and delivers the first real-time fully distributed 802.11 MIMO system. <s> BIB004
In wireless networks, exploiting spatial diversity is one mechanism for enhancing the reliability of a message: the message is transmitted over two or more different communication channels. Spatial diversity is commonly achieved by using multiple antennas at both the transmitter and the receiver; conventional MIMO systems are an example of this approach. In some cases, however, it is infeasible or costly to equip a node with multiple antennas. In such scenarios, spatial diversity is achieved by enabling cooperation among multiple nodes to obtain benefits similar to those of conventional MIMO systems. Such spatial diversity is called cooperative diversity; Figure 3 provides an illustration. One example of cooperative diversity is cooperative MIMO (also known as distributed MIMO BIB004 or virtual MIMO BIB002). The performance of cooperative vehicular relaying is analyzed by Feteiha and Hassanein BIB003 in LTE-Advanced MIMO downlink channels for coded transmission. The data transmission considered in the analysis involves two main phases, a broadcasting phase and a relaying phase, each further divided into two levels. During the first level of the broadcasting phase, the source node sends two precoded blocks from two different antennas; another version of the precoded blocks is transmitted during the second level from the same antennas. Similarly, the relaying phase is divided into two levels; in each level, the relay first amplifies the received signal and then transmits the resultant signal to the destination. To investigate the achievable diversity gain in these phases, pairwise error probability expressions are derived. The investigation reveals that significant diversity gain is achieved through MIMO deployment and encoded transmission; an illustrative simulation of this kind of two-phase amplify-and-forward relaying is sketched below. In another work, Nguyen et al. BIB001 proposed cooperative strategies to enable energy-efficient transmission in I2V and I2I communication scenarios. These strategies rely on cooperative relay, multihop, and cooperative MIMO techniques; the cooperative relay and cooperative MIMO techniques are found to be more energy efficient than multihop transmission. Further, for a given transmission distance, an optimal cooperative MIMO scheme selection is proposed to choose the best antenna configuration.
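The following Monte Carlo sketch, a deliberately simplified single-antenna model rather than the precoded MIMO setup of BIB003, illustrates the diversity gain of two-phase amplify-and-forward relaying with maximal-ratio combining (MRC) at the destination, compared with direct transmission. All parameters are illustrative assumptions.

```python
# Illustrative BER comparison (not the authors' setup): direct BPSK vs. a
# two-phase amplify-and-forward relay with MRC at the destination; all links
# experience unit-mean Rayleigh fading plus AWGN.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
snr_db = 12.0
n0 = 10 ** (-snr_db / 10)          # noise power; unit transmit power assumed

def rayleigh(size):
    return (rng.normal(size=size) + 1j * rng.normal(size=size)) / np.sqrt(2)

def awgn(size):
    return np.sqrt(n0 / 2) * (rng.normal(size=size) + 1j * rng.normal(size=size))

bits = rng.integers(0, 2, n)
x = 2.0 * bits - 1.0               # BPSK mapping 0 -> -1, 1 -> +1

h_sd, h_sr, h_rd = rayleigh(n), rayleigh(n), rayleigh(n)
y_sd = h_sd * x + awgn(n)          # phase 1: source broadcasts
y_sr = h_sr * x + awgn(n)

beta = np.sqrt(1.0 / (np.abs(h_sr) ** 2 + n0))   # AF power normalization
y_rd = h_rd * beta * y_sr + awgn(n)              # phase 2: relay forwards

# MRC: weight each branch by its effective channel conjugate over its noise
# power. The relayed branch carries amplified relay noise plus its own AWGN.
h_eq = h_rd * beta * h_sr                        # effective relayed channel
n_eq = n0 * (np.abs(h_rd * beta) ** 2 + 1.0)     # relayed-branch noise power
z = np.conj(h_sd) * y_sd / n0 + np.conj(h_eq) * y_rd / n_eq

bits_direct = (np.real(np.conj(h_sd) * y_sd) > 0).astype(int)
bits_coop = (np.real(z) > 0).astype(int)
print(f"BER direct:      {np.mean(bits_direct != bits):.5f}")
print(f"BER cooperative: {np.mean(bits_coop != bits):.5f}")
```

At the same average SNR, the cooperative BER curve falls much faster with SNR than the direct one, reflecting the extra diversity branch contributed by the relay.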
Cooperative Vehicular Networking: A Survey <s> B. MAC Protocols for CVN <s> Due to the rapid advancement in the wireless communication technology and automotive industries, the paradigm of vehicular ad-hoc networks (VANETs) emerges as a promising approach to provide road safety, vehicle traffic management, and infotainment applications. Cooperative communication, on the other hand, can enhance the reliability of communication links in VANETs, thus mitigating wireless channel impairments due to the user mobility. In this paper, we present a cooperative scheme for medium access control (MAC) in VANETs, referred to as Cooperative ADHOC MAC (CAH-MAC). In CAH-MAC, neighboring nodes cooperate by utilizing unreserved time slots, for retransmission of a packet which failed to reach the target receiver due to a poor channel condition. Through mathematical analysis and simulation, we show that our scheme increases the probability of successful packet transmission and hence the network throughput in various networking scenarios. <s> BIB001 </s> Cooperative Vehicular Networking: A Survey <s> B. MAC Protocols for CVN <s> Cooperative medium access control (MAC) protocols have been proposed for improving communication reliability and throughput in wireless networks. In a recent study, a cooperative MAC scheme called Cooperative ADHOC MAC (CAH-MAC) has been proposed to increase the network throughput by reducing the wastage of time slots under a static network scenario. Particularly, neighbor nodes cooperate to increase the transmission reliability by utilizing unreserved time slots for retransmission of failed packets. In this paper, we focus on a mobile networking scenario and study the effects of time slot reservation on the performance of CAH-MAC under highly dynamic vehicular environments. We find out that the introduction of time slot reservation results in cooperation collisions, degrading the system performance. To tackle this challenge, we present an enhanced CAH-MAC (eCAH-MAC) that is able to avoid cooperation collisions and thus efficiently utilize a time slot. In eCAH-MAC, the cooperative relay transmission phase is delayed, so that cooperation collisions can be avoided and time slots can be efficiently reserved. Through extensive simulations, we demonstrate that eCAH-MAC uses time slot more efficiently than CAH-MAC in direct and/or cooperative transmissions and in reserving time slots in the presence of relative mobility among nearby nodes. <s> BIB002 </s> Cooperative Vehicular Networking: A Survey <s> B. MAC Protocols for CVN <s> Owing to the advancement of wireless communication technologies, the vehicular ad-hoc network (VANET) has experienced a rapid development in recent years. However, it is challenging to design a reliable and efficient medium access control (MAC) protocol for safety messages with strict quality of service demands, owing to unreliable wireless links and frequent changes of topology. On the other hand, cooperative communication can enhance the reliability of wireless links by exploiting the spatial diversity. The authors present here a cooperative clustering-based MAC (CCB-MAC) protocol for VANETs, in order to improve the transmission reliability of safety messages. In CCB-MAC, the selected helpers relay the safety message to the nodes that have failed in reception during the broadcast period. In addition, cooperation is conducted in idle slots, without interrupting the normal transmission. 
Both mathematical analysis and numerical results demonstrate that CCB-MAC increases the successful reception rate of safety messages significantly. <s> BIB003 </s> Cooperative Vehicular Networking: A Survey <s> B. MAC Protocols for CVN <s> In a VANET (Vehicular Ad Hoc Network), the collisions caused by vehicles' mobility lead to poor network performance, especially in high-density networks. In order to increase the flexibility of communication, this paper presents a communication protocol with dynamic relay node selection and automatic cooperative communication called VC-TDMA (Vehicular Cooperative TDMA). The nodes can choose the multi-hop relay nodes rationally, and other idle nodes can provide cooperative communication automatically. Considering batch-arrival data traffic, the buffer queue state of nodes in the network is numerically analyzed using a Markov chain. The simulation results show that VC-TDMA can improve the throughput of the network compared with the conventional TDMA protocol. <s> BIB004 </s> Cooperative Vehicular Networking: A Survey <s> B. MAC Protocols for CVN <s> Cooperative medium access control (MAC) protocols have been proposed for improving communication reliability and throughput in wireless networks. In our previous work, a cooperative MAC scheme called Cooperative ADHOC MAC (CAH-MAC) has been proposed to increase the network throughput under a static networking scenario for vehicular communications. In this paper, we study the effects of relative mobility among nodes and channel fading on the performance of CAH-MAC. In a dynamic networking environment, system performance degrades due to cooperation collisions. To tackle this challenge, we present an enhanced CAH-MAC (eCAH-MAC) scheme, which avoids cooperation collisions and efficiently utilizes cooperation opportunities without disrupting the time-slot reservation operations. Through mathematical analysis and computer simulations, we show that eCAH-MAC increases the effectiveness of node cooperation by increasing utilization of an unreserved time slot. Furthermore, we perform extensive simulations for realistic networking scenarios to investigate the probability of successful cooperative relay transmission and usage of unreserved time slots in eCAH-MAC, in comparison with existing approaches. <s> BIB005
Similar to traditional wireless networks, the design of MAC layer protocols in vehicular networks is vital for improving network performance. Generally, MAC layer protocols can be divided into three major categories: contention-free, contention-based, and hybrid. Contention-free MAC approaches utilize Time Division Multiple Access (TDMA) and synchronization, whereas contention-based approaches rely on backoff mechanisms. Hybrid MAC protocols combine the advantages of both. In the following, we discuss research works that focus on cooperation at the MAC layer of CVN. 1) Contention-Free Cooperative MAC Protocols: Contention-free MAC protocols rely on a scheduler to regulate participants by defining which nodes may use the channel and at what time. TDMA is a contention-free channel access mechanism that divides time into multiple slots, which are assigned to vehicular nodes for communication; the number of time slots assigned to a node depends on the specific protocol design. A cooperative ad-hoc MAC (CAH-MAC) for VANETs, based on distributed TDMA, is proposed by Bharati and Zhuang BIB001. Cooperation is offered by a relay node only if the following conditions are satisfied: a) the direct transmission fails, b) the relay node receives the packet, c) the destination is reachable from the relay, and d) an unreserved time slot is available (a condensed sketch of this rule is given below). If there are multiple potential relay nodes, the one that first announces its intention to relay the packet becomes the relay, while the remaining nodes do not participate. Bear in mind that cooperation is performed by a relay node during an unused time slot to relay the packet whose direct transmission failed; therefore, cooperation does not affect regular communication, and the use of unused time slots for cooperative transmission improves the throughput of the VANET. However, CAH-MAC is suitable only for scenarios where the relative mobility is negligible; otherwise, the protocol faces slot reservation collisions. Even in the case of no collision, relay nodes consume available unreserved time slots for cooperative transmission, which lessens the opportunities of other nodes to find an unreserved time slot. The impact of time slot reservation for cooperative transmission on the performance of CAH-MAC is investigated by Bharati et al. BIB002. They observed that reservation of a time slot leads to cooperation collisions that degrade network performance. To deal with this issue, the authors further extend CAH-MAC and propose an enhanced version, the eCAH-MAC protocol BIB005. In eCAH-MAC, a relay node suspends cooperative transmission, in order to avoid reservation slot collisions, if any one-hop neighbor of the relay node and/or the destination node attempts to transmit; the relay node performs cooperative transmission only if no potential communication is detected in its own one-hop neighborhood and that of the destination. Although the proposed collision avoidance scheme in eCAH-MAC enhances unreserved slot utilization, switching between sending and receiving modes on both nodes (relay and destination) is required within a time slot, which increases system complexity.
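As referenced above, the basic CAH-MAC cooperation rule can be condensed into the following Python sketch. The class and field names are hypothetical, and the logic is our reading of conditions a) through d), not code from the protocol's authors.

```python
# Sketch of the CAH-MAC cooperation rule: a neighbor relays a failed packet
# in an unreserved slot only when all four stated conditions hold.
from dataclasses import dataclass, field

@dataclass
class SlotFrame:
    n_slots: int
    reserved: set = field(default_factory=set)   # indices of reserved slots

    def first_unreserved(self):
        for s in range(self.n_slots):
            if s not in self.reserved:
                return s
        return None

@dataclass
class RelayCandidate:
    overheard_packet: bool      # b) candidate received the source packet
    can_reach_dest: bool        # c) destination inside candidate's range

def cah_mac_cooperate(direct_failed, candidate, frame):
    """Return the slot to use for cooperative retransmission, or None."""
    if not direct_failed:                 # a) direct transmission must fail
        return None
    if not candidate.overheard_packet:    # b)
        return None
    if not candidate.can_reach_dest:      # c)
        return None
    return frame.first_unreserved()       # d) an unreserved slot must exist

frame = SlotFrame(n_slots=8, reserved={0, 1, 2, 5})
helper = RelayCandidate(overheard_packet=True, can_reach_dest=True)
print(cah_mac_cooperate(direct_failed=True, candidate=helper, frame=frame))  # -> 3
```

Because the retransmission is confined to a slot nobody has reserved, regular traffic is untouched; the downside, as noted above, is that cooperation competes with other nodes for the same pool of unreserved slots.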
A cooperative clustering-based MAC (CCB-MAC) protocol is proposed in BIB003 to improve the reliability of safety message broadcasts in VANETs. In CCB-MAC, cluster formation mainly involves the joining process, cluster-head election, the leaving process, and cluster merging. The entire cooperation process includes three key tasks: transmission failure identification, appropriate relay selection, and collision avoidance among potential relays during packet retransmission. To offer a reliable broadcast service, CCB-MAC introduces an ACK message that cluster members (destination nodes) send back to the cluster head on successful reception of a broadcast message. If the neighboring nodes of a destination do not overhear its ACK message, they consider the transmission to that destination unsuccessful and regard themselves as potential relays. To avoid possible collisions, the cluster head assigns a time slot to each potential relay node for retransmission; when one relay transmits the failed packet to the destination node, the other relay nodes suspend their transmissions after overhearing it (a toy sketch of this coordination is given at the end of this subsection). Although the proposed MAC enhances the successful reception rate of safety messages, the exchange of an ACK message for every broadcast message imposes significant communication overhead on CCB-MAC and increases interference. CCB-MAC also does not consider node mobility, a critical parameter for vehicular networks; this causes substantial overhead as a result of frequent cluster-head re-election. The above-mentioned TDMA MAC protocols require idle slots to offer cooperative communication; however, in dense VANETs, a sufficient number of idle slots may not be available for cooperation. A vehicular cooperative TDMA-based (VC-TDMA) MAC protocol is proposed by Zhang and Zhu BIB004, which opportunistically exploits the reserved time slots of a cooperative node to improve throughput. Usually, VANET communication has to rely on multi-hop relays if the distance between the source and destination is larger than the one-hop transmission range. However, the selection of a relay node is critical because of vehicle mobility: if the selected relay node has a long queue of packets ahead of the packet that needs to be relayed, the destination may move out of the relay node's transmission range while the packet waits for transmission. In this case, the authors suggest using a neighbor of the relay node as a cooperative node to forward the packet if the neighbor's own buffer is empty. When the relay node receives the packet from the cooperative neighbor, it deletes the packet from its own buffer. However, a cooperative node offers cooperation to a relay based only on its own empty buffer, without considering channel conditions; hence, VC-TDMA may not provide a significant advantage under varying channel conditions and node speeds. Although contention-free MAC protocols provide deterministic delay, time synchronization is required for each participant. Time slots are reserved for the nodes, and the channel can be accessed without any contention. However, such schemes usually suffer from dynamic transmission delay in dense networks and under topology changes. Scalability, non-periodic data, and assigning time slots to nodes with diverse data rates are some of the other main concerns in implementing contention-free MAC protocols.
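As referenced in the CCB-MAC discussion above, the following toy sketch captures the overhear-and-suspend relay coordination: the cluster head identifies failed receivers from missing ACKs, assigns each potential relay its own idle slot, and later relays stay silent once one retransmission succeeds. All names, the slot numbers, and the success predicate are illustrative assumptions, not part of the protocol specification.

```python
# Toy model of CCB-MAC-style relay coordination (names are illustrative).

def detect_failures(members, acked):
    """Members whose ACK the cluster head did not receive."""
    return [m for m in members if m not in acked]

def assign_relay_slots(potential_relays, idle_slots):
    """Cluster head maps each potential relay to its own idle slot."""
    return dict(zip(potential_relays, idle_slots))

def run_cooperation_phase(schedule, delivery_success):
    """Relays transmit in slot order; later relays suspend after
    overhearing a successful retransmission (collision avoidance)."""
    for relay, slot in sorted(schedule.items(), key=lambda kv: kv[1]):
        if delivery_success(relay):
            return relay, slot        # remaining relays stay silent
    return None, None

members = ["v1", "v2", "v3"]
acks_received = {"v1", "v3"}          # v2 missed the broadcast
failed = detect_failures(members, acks_received)
schedule = assign_relay_slots(["r1", "r2"], idle_slots=[6, 7])
winner, slot = run_cooperation_phase(schedule, delivery_success=lambda r: r == "r1")
print(f"failed receivers: {failed}; relayed by {winner} in slot {slot}")
```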