GPU Virtualization and Scheduling Methods: A Comprehensive Survey

INTRODUCTION

Cited works for this section:
BIB001 — Instruction-by-instruction simulation of one computer system on another, a long-established technique for developing software for hardware that is not yet available; the simulator reproduces the target machine's processor, memory, and I/O devices and shields the host machine's resources from the simulated programs.
BIB002 — An extension of the OpenStack cloud computing stack to support heterogeneous architectures and accelerators, moving beyond the traditional assumption of homogeneous commodity hardware and reporting experiences on a heterogeneous cluster test bed.
BIB003 — Cloud and heterogeneous computing solutions for the emerging big data problems in biology.
BIB004 — An architecture and scheduling scheme for data analytics clusters in the cloud that accounts for the heterogeneity of the underlying platforms and workloads while providing fairness among jobs sharing the cluster.
Since the early 2000s, high-performance computing (HPC) programmers and researchers have adopted a computing paradigm that combines two architectures: multi-core processors, which offer a small number of powerful general-purpose cores, and many-core accelerators, the leading example of which is the graphics processing unit (GPU), which offers a massive number of simple cores to accelerate algorithms with a high degree of data parallelism. Despite an increasing number of cores, multi-core processor designs still aim at reducing the latency of sequential programs by using sophisticated control logic and large cache memories. Conversely, GPUs seek to boost the execution throughput of parallel applications with thousands of simple cores and a high-bandwidth memory architecture. Heterogeneous systems that combine multi-core processors and GPUs can meet the diverse requirements of a wide range of HPC applications with both control-intensive and highly data-parallel components. The success of heterogeneous computing systems with GPUs is evident in the latest Top500 list, where more than 19% of supercomputers adopt both CPUs and GPUs.

Cloud computing platforms can leverage heterogeneous compute nodes to reduce the total cost of ownership and achieve higher performance and energy efficiency BIB002 BIB003. A cloud with heterogeneous compute nodes allows users to deploy computationally intensive applications without acquiring and maintaining large-scale clusters. In addition, heterogeneous computing can offer better performance within the same power budget than systems based on homogeneous processors, as computational tasks can be placed on either conventional processors or GPUs depending on their degree of parallelism. These combined benefits have motivated cloud service providers to equip their offerings with GPUs and heterogeneous programming environments BIB004. A number of HPC applications can benefit from execution in heterogeneous cloud environments, including particle simulation, molecular dynamics simulation [Glaser et al. 2015], and computational finance, as well as two-dimensional (2D) and 3D graphics acceleration workloads, which exhibit high efficiency when exploiting GPUs.

System virtualization is a key enabling technology for the Cloud. The virtualization software creates an elastic virtual computing environment, which is essential for improving resource utilization and reducing the cost of ownership. Virtualization systems are invariably underpinned by methods for multiplexing system resources. Most system resources, including processors and peripheral devices, can be completely virtualized today, and there is ample research in this field dating back to the early 1960s BIB001. However, virtualizing GPUs is a relatively new area of study and remains a challenging endeavor. A key barrier has been the implementation of GPU drivers, which are not open for modification for intellectual property reasons. Furthermore, GPU architectures are not standardized, and GPU vendors have offered architectures with vastly different levels of support for virtualization. For these reasons, conventional virtualization techniques are not directly applicable to GPUs.
GPU Architecture

Cited works for this section:
BIB001 — Kirk and Hwu, Programming Massively Parallel Processors: A Hands-on Approach (ISBN 978-0-12-381472-2), a text covering modern GPU architectures, the CUDA memory and threading models, strategies for reducing global memory traffic, latency-hiding techniques, data-parallel programming patterns, and application case studies; it serves both beginning and experienced parallel programmers targeting CUDA-capable (e.g., Tesla) GPUs.
BIB002 — Haswell, Intel's fourth-generation core processor architecture, built on an optimized 22-nm process and delivering improvements in power-performance efficiency, power management, form factor and cost, core and uncore microarchitecture, and the core's instruction set.
GPUs adopt a fundamentally different design for executing parallel applications than conventional multi-core processors BIB001. GPUs are based on a throughput-oriented design and offer thousands of simple cores and a high-bandwidth memory architecture. This design maximizes the execution throughput of applications with a high degree of data parallelism, which are expected to be decomposable into a large number of threads operating on different points in the program data space. In this design, when some threads are waiting for the completion of arithmetic operations or memory accesses with long latency, other threads can be scheduled by the hardware scheduler to hide the latency. This mechanism may lengthen the execution time of individual threads but improves total execution throughput. In contrast, the design of conventional processors is optimized for reducing the execution time of sequential code on each core, which adds complexity to each core at the cost of offering fewer cores per processor package. Conventional processors typically use sophisticated control logic and large cache memories to deal efficiently with conditional branches, pipeline stalls, and poor data locality. Modern GPUs also handle complex control flows, have large SRAM-based local memories, and adopt some additional features of conventional processors, but they preserve the fundamental properties of offering a higher degree of thread-level parallelism and higher memory bandwidth.

Figure 1 shows the architecture of a traditional heterogeneous system equipped with a discrete GPU. The GPU side is based on NVIDIA's Fermi architecture, but the description is not limited to NVIDIA, as recent GPUs adopt a similar high-level design. A GPU has several streaming multiprocessors (SMs), each of which has 32 computing cores. Each SM also has an L1 data cache and a low-latency shared memory. Each core has local registers, an integer arithmetic logic unit (ALU), a floating point unit (FPU), and several special function units (SFUs) that execute transcendental instructions such as sine and cosine operations. A GPU memory management unit (MMU) provides virtual address spaces for GPU applications. A GPU memory reference by an application is resolved into a physical address by the MMU using the application's own page table, so memory accesses from one application cannot refer to other applications' address spaces.

The host connects to the discrete GPU through the PCI Express (PCIe) interface. The CPU in the host interacts with the GPU via memory-mapped input/output (MMIO): the GPU registers and device memory can be accessed by the CPU through the MMIO interface. The MMIO region is configured at boot time based on the PCI base address registers (BARs), which are memory windows that the host can use for communication. GPU operations issued by an application are submitted into a ring buffer associated with the application's command submission channel, a GPU hardware unit visible to the CPU via MMIO. Large data can be transferred between the host memory and the GPU device memory by the direct memory access (DMA) engine. The discrete GPU architecture shown in Figure 1 can cause data transfer overhead over the PCIe interface, because the maximum bandwidth that current PCIe offers (e.g., 16GB/s) is low compared to the internal memory bandwidth of the GPU (hundreds of GB/s).
Furthermore, the architecture demands considerable programming effort to manage data that is manipulated by both the CPU and the GPU. To address these issues, GPUs have been integrated into the CPU chip. Intel's GPU architecture BIB002 and AMD's HSA architecture integrate the two processors on the same bus with shared system memory. These architectures enable a unified virtual address space and eliminate data copying between the devices. They also reduce the programmer's burden of managing separate address spaces.
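To make the data-management contrast concrete, the following sketch (an illustrative example, not code from any surveyed system; the array size and kernel are placeholders) contrasts explicit staging of data across PCIe with CUDA's managed (unified) memory, which integrated and unified-address designs aim to make the default programming model.

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    __global__ void scale(float *x, int n) {               // trivial placeholder kernel
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= 2.0f;
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        // Discrete-GPU style: data is staged explicitly across PCIe.
        float *h = (float *)malloc(bytes), *d = NULL;
        for (int i = 0; i < n; ++i) h[i] = 1.0f;
        cudaMalloc(&d, bytes);
        cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);    // host -> device over PCIe
        scale<<<(n + 255) / 256, 256>>>(d, n);
        cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);    // device -> host over PCIe
        cudaFree(d); free(h);

        // Unified-address style: one allocation visible to both CPU and GPU.
        float *u = NULL;
        cudaMallocManaged(&u, bytes);                       // single pointer, no explicit copies
        for (int i = 0; i < n; ++i) u[i] = 1.0f;
        scale<<<(n + 255) / 256, 256>>>(u, n);
        cudaDeviceSynchronize();                            // wait before the CPU touches u again
        printf("u[0] = %f\n", u[0]);
        cudaFree(u);
        return 0;
    }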
GPU Applications

Cited works for this section:
BIB001 — Rodinia, a benchmark suite for heterogeneous computing whose applications and kernels target multi-core CPU and GPU platforms; the suite is inspired by Berkeley's dwarf taxonomy and covers a wide range of parallel communication patterns, synchronization techniques, and power behavior.
BIB002 — SHOC, the Scalable HeterOgeneous Computing benchmark suite, which tests the performance and stability of systems containing GPUs and multi-core processors using both OpenCL and CUDA implementations, from architectural microbenchmarks up to application kernels that exercise intranode and internode communication.
BIB003 — Parboil, a set of throughput computing applications drawn from fields such as image processing, biomolecular simulation, fluid dynamics, and astronomy; each benchmark ships in several implementations, from readable base versions to heavily optimized CPU- and GPU-specific versions, to support compiler and architecture research on raising the performance of simple implementations.
BIB004 — VADI, a Virtualized Automotive DIsplay system that virtualizes a GPU and its attached display so that automotive control software and in-vehicle infotainment (IVI) software can share a digital cluster while remaining isolated; it sustains the 30 fps minimum mandated by ISO safety standards even when the IVI software fails.
GPU programs fall into two categories: graphics acceleration and general-purpose computing workloads. The former includes 2D and 3D graphics workloads; the latter covers a wide range of general-purpose data-parallel computations.

Graphics acceleration workloads: 3DMark is a GPU benchmark application developed by Futuremark Corporation for measuring the performance of 3D graphics rendering. 3DMark evaluates various Direct3D features including tessellation, compute shaders, and multi-threading. The Phoronix Test Suite (PTS) BIB004 is a set of open-source benchmark applications developed by Phoronix Media that performs comprehensive evaluation of computing system performance. GPU software developers usually use Phoronix to test the performance of OpenGL games such as Doom 3, Nexuiz, and Enemy Territory.

General-purpose computing workloads: Rodinia BIB001 is a benchmark suite focusing on the performance evaluation of compute-intensive applications implemented in CUDA, OpenMP, and OpenCL. Each application or kernel covers a different type of compute-intensive behavior, and the suite broadly covers the Berkeley Seven Dwarfs [Asanovic et al. 2006]. The Scalable HeterOgeneous Computing (SHOC) benchmark suite BIB002 is a set of benchmark programs evaluating the performance and stability of GPGPU computing systems using CUDA and OpenCL applications. The suite supports the evaluation of both cluster-level parallelism with the Message Passing Interface (MPI) and node-level parallelism using multiple GPUs in a single node. The application scope of SHOC includes the Fast Fourier Transform (FFT), linear algebra, and molecular dynamics, among others. Parboil BIB003 is a collection of compute-intensive applications implemented in CUDA, OpenMP, and OpenCL to measure the throughput of CPU and GPU architectures. Parboil collects benchmark applications from diverse scientific and commercial fields, including bio-molecular simulation, fluid dynamics, image processing, and astronomy. The CUDA SDK benchmark suite is released as part of the CUDA Toolkit. It covers a diverse range of GPGPU applications performing data-parallel algorithms used in linear algebra, computational fluid dynamics (CFD), image convolution, and Black-Scholes and binomial option pricing.
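As a concrete illustration of the kind of data-parallel kernel these suites exercise, the sketch below shows a SAXPY-style computation in CUDA. It is a generic illustration rather than code taken from any of the benchmark suites; the problem size and launch configuration are placeholders.

    #include <cuda_runtime.h>
    #include <cstdio>

    // y[i] = a * x[i] + y[i]: one thread per element, the pattern that
    // throughput-oriented benchmarks scale to millions of elements.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));   // managed memory keeps the example short
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f (expected 4.0)\n", y[0]);
        cudaFree(x); cudaFree(y);
        return 0;
    }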
System Virtualization

Cited works for this section:
BIB001 — Xen, an x86 virtual machine monitor that allows multiple commodity operating systems (Linux, BSD, Windows XP) to share conventional hardware safely and with resource management, hosting up to 100 virtual machine instances with a performance overhead of at most a few percent relative to the unvirtualized case.
BIB002 — Intel virtualization technology, which provides hardware support for processor virtualization; the VMM arbitrates access to the physical platform's resources and presents each guest OS with a virtual machine, and hardware support simplifies VMM software while maintaining high performance.
BIB003 — Microsoft Virtualization with Hyper-V, a practitioner's guide to deploying Microsoft's hypervisor-based server virtualization, covering installation, System Center Virtual Machine Manager 2008, virtual machine creation and management, backup and restore, monitoring, security, and the virtual desktop infrastructure.
BIB004 — A methodology for uncovering the interactions within black-box GPU stacks (libraries, drivers, and devices), producing a state machine of semantically meaningful states that lets the OS kernel intercept the initiation and completion of arbitrary GPU requests and thereby control scheduling and resource management.
System virtualization allows several operating systems (OSs) to run simultaneously on a single physical machine, thus achieving effective sharing of system resources in personal and shared (e.g., cloud) computing platforms. The software for system virtualization includes a hypervisor, also known as a virtual machine monitor (VMM), and virtual machines (VMs). A hypervisor virtualizes physical resources in the system such as the CPU, memory, and I/O devices. A VM is composed of these virtualized resources and is provided to a guest OS, which can run on the VM as though it were a real physical machine. Popular hypervisors widely used for personal and cloud computing include VMware ESXi, KVM, Hyper-V BIB003, and Xen BIB001.

System virtualization can be categorized into three major classes: full, para, and hardware-supported virtualization. Full virtualization completely emulates the CPU, memory, and I/O devices to provide a guest OS with an environment identical to the underlying hardware. Privileged instructions of a guest OS that modify the system state are trapped into the hypervisor by a binary translation technique that automatically inserts trapping operations into the binary code of the guest OS. The advantage of this approach is that guest OSs run in the virtualized environment without modification. However, full virtualization usually exhibits high performance penalties due to the cost of emulating the underlying hardware. Para virtualization addresses the performance limitations of full virtualization by modifying the guest OS code to support more efficient virtualization. Privileged instructions of a guest OS are replaced with hypercalls, which provide a communication channel between the guest OS and the hypervisor. This optimization eliminates the need for binary translation. Para virtualization offers a guest OS an environment similar, but not identical, to the underlying hardware. The advantage of this approach is that it has lower virtualization overhead than full virtualization; the limitation is that it requires modifications to guest OSs, which can be tedious to maintain when new versions of an OS kernel or device driver are released. Hardware-supported virtualization requires hardware capabilities such as Intel VT-x BIB002 to trap privileged instructions from guest OSs. These capabilities typically introduce two operating modes for virtualization: guest (for an OS) and root (for the hypervisor). When a guest OS executes a privileged instruction, the processor intervenes and transfers control to the hypervisor executing in root mode. The hypervisor then emulates the privileged instruction and returns control to guest mode. The mode change from guest to root is called a VM Exit; the reverse transition is called a VM Entry. The advantage of this approach is that it does not require modifying the guest OS and exhibits higher performance than full virtualization.

• Full and para virtualization at the GPU driver level: full virtualization uses a custom GPU driver based on the available documentation BIB004 to realize GPU virtualization at the driver level, while para virtualization slightly modifies the custom driver in the guest to deliver sensitive operations directly to the host.
• GPU scheduling: indicates whether a solution provides GPU scheduling for fair or SLA-based sharing of GPUs. Details about GPU scheduling are discussed in Section 7.
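To make the VM Exit/VM Entry mechanism of hardware-supported virtualization concrete, the sketch below shows how a minimal user-space monitor built on Linux's KVM interface (which relies on hardware support such as Intel VT-x) observes exits. It is a simplified illustration, not a working hypervisor: guest memory and register setup are omitted, and the handled exit reasons are only examples.

    // Minimal sketch of hardware-supported virtualization via Linux KVM:
    // the loop runs the vCPU until the hardware forces a VM Exit, handles the
    // exit reason, and re-enters the guest (VM Entry) by calling KVM_RUN again.
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/kvm.h>
    #include <stdio.h>

    int main() {
        int kvm  = open("/dev/kvm", O_RDWR);                 // access the in-kernel hypervisor
        int vm   = ioctl(kvm, KVM_CREATE_VM, 0);             // create an empty virtual machine
        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);            // one virtual CPU

        // The kernel shares the vCPU's run state (including the exit reason) via mmap.
        int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
        struct kvm_run *run = (struct kvm_run *)mmap(NULL, (size_t)run_size,
                                                     PROT_READ | PROT_WRITE,
                                                     MAP_SHARED, vcpu, 0);

        // Guest memory regions and initial register state are omitted in this sketch.
        for (;;) {
            ioctl(vcpu, KVM_RUN, 0);                         // VM Entry: run guest code
            switch (run->exit_reason) {                      // VM Exit: hardware returned control
            case KVM_EXIT_IO:                                // guest touched an emulated I/O port
                /* emulate the port access here */
                break;
            case KVM_EXIT_MMIO:                              // guest touched an emulated MMIO region
                /* emulate the memory-mapped device access here */
                break;
            case KVM_EXIT_HLT:                               // guest halted; stop the loop
                return 0;
            default:
                fprintf(stderr, "unhandled exit %d\n", run->exit_reason);
                return 1;
            }
        }
    }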
API REMOTING

Cited works for this section:
BIB001 — A taxonomy of strategies for GPU virtualization and the GPU virtualization architecture of VMware's hosted products (VMware Workstation and VMware Fusion); the virtual GPU runs graphics-intensive games and applications at interactive frame rates (86% down to 12% of native rates) while preserving virtual machine portability.
BIB002 — Disengaged scheduling, in which the OS kernel intercedes between applications and the accelerator only infrequently, monitoring the use of accelerator cycles and deciding which applications may access the device over the next interval; the proposed Disengaged Timeslice and Disengaged Fair Queueing schedulers ensure fair GPU sharing with roughly 4% average overhead.
Virtualizing GPUs has been regarded as more difficult than virtualizing I/O devices such as network cards or disks. Several factors add complexity to multiplexing and sharing GPU resources between VMs. First, GPU vendors tend not to reveal the source code and implementation details of their GPU drivers for commercial reasons, yet such technical specifications are essential for virtualizing GPUs at the driver level. Second, even when driver implementations are unveiled, for example by reverse engineering BIB002, GPU vendors still introduce significant changes with every new generation of GPUs to improve performance; as a consequence, specifications revealed by reverse engineering quickly become unusable. Finally, some OS vendors provide proprietary GPU drivers for virtualization, but these proprietary drivers cannot be used across all OSs. In summary, there are no standard interfaces for accessing GPUs, which would be required for virtualizing these devices.

The API remoting approach overcomes these limitations and is now the most prevalent approach to GPU virtualization. The premise of API remoting is to provide a guest OS with a wrapper library that has the same API as the original GPU library. The wrapper library intercepts GPU calls (e.g., OpenGL, Direct3D, CUDA, and OpenCL calls) from an application before the calls reach the GPU driver in the guest OS. The intercepted calls are redirected either to the host OS on the same machine, through shared memory, or to a remote machine with available GPUs. The redirected calls are processed remotely, and only the results are delivered to the application through the wrapper library. The API remoting approach can thus emulate a GPU execution environment without exposing physical GPU devices to the guest OS.

Figure 2 illustrates an example of a system that adopts the API remoting approach and forwards GPU calls from the guest to the host on the same machine. The architecture adopts a split device model in which the frontend and backend drivers are placed in the guest and host OSs, respectively. The wrapper library installed in the guest OS intercepts a GPU call from the application and delivers it to the frontend driver. The frontend packs the GPU operation and its parameters into a transferable message and sends the message to the backend in the host OS via shared memory. In the host OS, the backend driver parses the message and converts it into the original GPU call. The call handler executes the requested operation on the GPU through the GPU driver and returns the result to the application via the reverse path.

The key advantage of this approach is that it can support applications using GPUs without recompilation in most cases, as the wrapper library can be dynamically linked to existing applications at runtime. In addition, it incurs negligible virtualization overhead because the virtualization architecture is simple and bypasses the hypervisor layer. Finally, as the virtualization layer is usually implemented in user space, this approach can be agnostic to the underlying hypervisor, particularly if it does not use hypervisor-specific inter-VM communication methods. The limitation is that keeping the wrapper libraries up to date can be a daunting task as new functions are gradually added to vendor GPU libraries BIB002. In addition, as GPU requests bypass the hypervisor, it is difficult to implement basic virtualization features such as execution checkpointing, live migration, and fault tolerance BIB001.
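A common way to build such a wrapper library on Linux is symbol interposition: the wrapper is preloaded (e.g., via LD_PRELOAD), exports the same symbols as the vendor runtime, and sees every call before the real library does. The sketch below is a simplified illustration, not code from any surveyed system; the compile command, include path, and library name are assumptions, it intercepts only cudaMalloc, and it assumes the application links the shared CUDA runtime. A real API-remoting frontend would serialize the call into a message for the backend instead of invoking the local runtime.

    // wrap.cpp — an interposer library; one possible build and use:
    //   g++ -fPIC -shared -I/usr/local/cuda/include wrap.cpp -o libwrap.so -ldl
    //   LD_PRELOAD=./libwrap.so ./some_cuda_app
    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE
    #endif
    #include <dlfcn.h>
    #include <stdio.h>
    #include <cuda_runtime.h>

    // Same name and signature as the vendor runtime's cudaMalloc, so the dynamic
    // linker resolves the application's calls to this wrapper first.
    extern "C" cudaError_t cudaMalloc(void **devPtr, size_t size) {
        // Locate the "real" implementation further down the link chain.
        static cudaError_t (*real_cudaMalloc)(void **, size_t) = NULL;
        if (!real_cudaMalloc)
            real_cudaMalloc = (cudaError_t (*)(void **, size_t))
                                  dlsym(RTLD_NEXT, "cudaMalloc");

        fprintf(stderr, "[wrapper] cudaMalloc(%zu bytes) intercepted\n", size);

        // An API-remoting frontend would pack the call and its parameters into a
        // message for the backend here; this sketch simply forwards the call to
        // the library that would otherwise have received it.
        return real_cudaMalloc(devPtr, size);
    }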
Methods for Graphics Acceleration

Cited works for this section:
BIB001 — Blink, a display system that safely multiplexes graphical content from multiple untrusted virtual machines onto a single GPU; clients program a virtual processor abstraction rather than the GPU directly, and Blink executes their programs on the GPU using just-in-time compilation and program inspection, providing a stored-procedure abstraction and fast emulation of legacy OpenGL programs.
BIB002 — An approach that merges Trusted Computing Group concepts (Mobile Trusted Modules) with ARM TrustZone hardware security mechanisms to build an open Linux-based embedded trusted computing platform.
Chromium is an early example of API remoting. In the past, graphics processors could not be fully utilized by multiple applications in the same machine because hosts used slow serial interfaces to the graphics cards. The goal of Chromium is to aggregate GPU calls from different machines and process them in a powerful cluster rendering system with multiple graphics accelerators. For this purpose, Chromium provides four OpenGL wrapper libraries that encapsulate frequently used operations: stream packing, stream unpacking, point-to-point connection-based networking abstractions, and a complete OpenGL state tracker. These libraries intercept OpenGL operations and transfer them to a rendering cluster.

VMGL implements the API remoting approach for accelerating OpenGL applications on recent hypervisors including Xen and VMware. It provides hardware-accelerated rendering to OpenGL applications in each VM. VMGL consists of three modules: the VMGL library, the VMGL stub, and the VMGL X server extension. The VMGL library is an OpenGL wrapper library that replaces the standard implementation. A VMGL stub is created in the host for each VMGL library instance to receive and process GPU requests from the library. OpenGL commands are delivered over a network transport, which makes VMGL agnostic to the underlying hypervisor. The VMGL X server extension runs on the guest OS side and registers the size and visibility of OpenGL-enabled windows. Using the registered information, the VMGL stub processes only the region that is visible on the guest OS's desktop. VMGL additionally supports suspend and resume by keeping track of the entire OpenGL state in the guest and restoring the state in a new stub.

Blink BIB001 offers accelerated OpenGL rendering to applications inside a VM, similarly to VMGL, but focuses more on performance optimization. Blink provides the BlinkGL wrapper library, a superset of OpenGL, for guest OSs. The wrapper library provides stored procedures, each of which is a sequence of serialized BlinkGL commands, to eliminate the overhead of repeatedly (de)serializing GL command streams during communication between the guest and the host. The Blink Server in the host interprets a transferred stored procedure using a Just-In-Time (JIT) compiler. The host and the guest OSs communicate through shared memory to avoid the overhead of a network transport for large texture or frame buffer objects.

Parallels Desktop offers a proprietary GPU driver for guest OSs to offload their OpenGL and Direct3D operations onto remote devices with GPUs. The proprietary GPU driver can be installed only on Parallels products such as Parallels Desktop and Parallels Workstation. The server module in the remote device receives access requests from a number of remote VMs and chooses the next VM to use the GPU. The module then sends a token to the selected guest and allows it to occupy the GPU for a specific time interval. The guest OS and the remote OS use a remote procedure call (RPC) protocol for delivering GPU commands and the corresponding results.

VADI [Lee et al. 2016] implements GPU virtualization for vehicles by multiplexing the GPU device used by the digital cluster of a car. VADI works on a proprietary hypervisor for vehicles called the Secure Automotive Software Platform (SASP). This hypervisor exploits the ARM TrustZone technology BIB002, which can accommodate two guest OSs in the secure and normal "worlds," respectively. VADI implements the GL wrapper library and the V-Bridge-normal in the normal world, and the GL stub and the V-Bridge-secure in the secure world. GPU commands executed in the normal world are intercepted by the wrapper library and processed in the secure world by the GL stub. The V-Bridges are connected by shared memory and are responsible for communication between the two worlds.
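Several of these designs (Blink's shared-memory transport, VADI's V-Bridges) come down to a command channel placed in memory shared between a frontend and a backend. The sketch below shows a minimal single-producer/single-consumer ring of fixed-size command records; it is a generic illustration, the record layout and names are invented here, and a real system would place the ring in hypervisor-provided shared memory (e.g., Xen grant pages or a TrustZone world-shared region) rather than ordinary process memory.

    #include <atomic>
    #include <cstdint>

    // One fixed-size record per forwarded GPU call (layout invented for illustration).
    struct CmdRecord {
        uint32_t opcode;      // which API call (e.g., 1 = glDrawArrays; values are placeholders)
        uint32_t arg_len;     // number of valid bytes in args
        uint8_t  args[240];   // serialized parameters
    };

    // Single-producer (guest frontend) / single-consumer (host backend) ring buffer.
    // In a real split-driver design this struct lives in memory shared across the
    // VM boundary, not in one process's heap.
    struct CmdRing {
        static const uint32_t N = 256;          // power of two
        std::atomic<uint32_t> head{0};          // written by the producer
        std::atomic<uint32_t> tail{0};          // written by the consumer
        CmdRecord slots[N];

        bool push(const CmdRecord &r) {         // frontend side
            uint32_t h = head.load(std::memory_order_relaxed);
            if (h - tail.load(std::memory_order_acquire) == N) return false;  // full
            slots[h % N] = r;
            head.store(h + 1, std::memory_order_release);   // publish the record
            return true;
        }

        bool pop(CmdRecord &r) {                // backend side (polling thread)
            uint32_t t = tail.load(std::memory_order_relaxed);
            if (t == head.load(std::memory_order_acquire)) return false;      // empty
            r = slots[t % N];
            tail.store(t + 1, std::memory_order_release);   // free the slot
            return true;
        }
    };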
Methods for GPGPU Computing

Cited works for this section:
BIB001 — Xen, an x86 virtual machine monitor that lets multiple commodity operating systems share conventional hardware safely, with resource management and negligible performance overhead.
BIB002 — A data-sharing mechanism for Xen-based embedded systems that allows processes running in different guest operating systems (e.g., a real-time OS and a general-purpose OS) to exchange data through facilities provided by Xen.
BIB003 — XenLoop, a transparent, high-performance inter-VM network loopback channel for Xen that intercepts outgoing packets below the network layer and routes those destined for co-resident VMs through a high-speed shared-memory channel, reducing inter-VM round-trip latency by up to a factor of 5 and increasing bandwidth by up to a factor of 6.
BIB004 — VMRPC, a lightweight RPC framework for virtual machines that leverages heap and stack sharing to avoid unnecessary data copying and serialization/deserialization, achieving an order-of-magnitude improvement over traditional RPC systems.
BIB005 — A discussion of the challenges and requirements of inter-VM communication, examining shared-memory, library, and API-based solutions from academia and industry and considering how the evolution of virtualization and modern CPUs poses new challenges.
BIB006 — GVirtuS, a hypervisor-independent GPU virtualization service with CUDA Runtime, CUDA Driver, and OpenCL plugins that gives virtual machines transparent access to GPUs with almost irrelevant virtualization overhead.
BIB007 — A characterization of virtualization in the computer architecture domain as sharing system resources among co-hosted virtual machines while isolation keeps the guest operating systems unaware of each other's presence.
Following NVIDIA's launch of CUDA in 2006, general-purpose computing on GPUs (GPGPU) became more popular and practical. CUDA conceals the underlying graphics hardware architecture from developers and allows programmers to write scalable programs without learning new programming languages. Since the introduction of CUDA, research on GPU virtualization has focused more on GPGPU computing to accelerate compute-intensive applications running in the Cloud.

GViM implements GPU virtualization at the CUDA API level on the Xen hypervisor BIB001. To enable a guest VM to access the GPU located in the host, GViM implements the Interposer CUDA Library for guest OSs, and frontend and backend drivers for communication between the host and the guest. GViM focuses on efficient sharing of large data volumes between the guest and the host when a GPU application is data intensive. For this purpose, GViM provides shared memory allocated through Xenstore BIB002 between the frontend and the backend, instead of using a network transport. It further develops a one-copy mechanism that maps the shared memory into the address space of the GPU application, which removes data copying from user space to kernel space in the guest OS and improves communication performance.

vCUDA also implements GPU virtualization on the Xen hypervisor. vCUDA provides a CUDA wrapper library and virtual GPUs (vGPUs) in the guest, and the vCUDA stub in the host. The wrapper library intercepts and redirects API calls from the guest to the host. vGPUs are created per application by the wrapper library and give applications a complete view of the underlying GPUs. The vCUDA stub creates an execution context for each guest OS and executes remote GPU requests. For communication between VMs, vCUDA adopts XML-RPC [Cerami 2002], which supports high-level communication between the guest and the host. In the latest version, vCUDA is ported to KVM using VMRPC BIB004 with VMCHANNEL BIB007. VMRPC utilizes a shared memory zone between the host OS and the guest OS to avoid the overhead of XML-RPC transmission over TCP/IP, and VMCHANNEL enables an asynchronous notification mechanism in KVM to reduce the latency of inter-VM communication. vCUDA also introduces Lazy RPC, which batches specific CUDA calls that can be delayed (e.g., cudaConfigureCall()). This avoids the frequent context switches between the guest OS and the hypervisor caused by repeated RPCs and improves communication performance.

rCUDA focuses on remote GPU-based acceleration, which offloads CUDA computations onto GPUs located in a remote host. rCUDA observes that prior virtualization research based on emulating local devices is not appropriate for HPC applications because of unacceptable virtualization overhead. Instead of device emulation, rCUDA implements virtual CUDA-compatible devices by adopting remote GPU-based acceleration without the hypervisor layer. More concretely, rCUDA provides a CUDA API wrapper library on the client side, which intercepts GPU calls and forwards them from the client to the GPU server, and a server daemon on the server side, which receives and executes the remote GPU calls. The client and the server communicate with each other using a TCP/IP socket. rCUDA points out that network performance becomes a bottleneck when several clients concurrently access the remote GPU cluster; to overcome this issue, rCUDA provides a customized application-level communication protocol (a generic sketch of this kind of call forwarding appears below).
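The sketch below illustrates the general shape of client-side call forwarding over TCP: a fixed header identifying the call, followed by its serialized arguments, followed by a status reply. The message layout, opcode values, port, address, and helper names are invented for illustration and are not rCUDA's actual protocol; byte-order conversion and error recovery are omitted.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>
    #include <cstdint>

    // Illustrative wire format: a small header followed by serialized arguments.
    struct CallHeader {
        uint32_t opcode;      // e.g., 1 = cudaMalloc, 2 = cudaMemcpy (values invented)
        uint32_t payload_len; // bytes of argument data that follow
    };

    // Forward one intercepted call to the remote GPU server and read its status.
    static int forward_call(int sock, uint32_t opcode, const void *args, uint32_t len) {
        CallHeader h = {opcode, len};              // byte-order handling omitted in this sketch
        if (send(sock, &h, sizeof(h), 0) != (ssize_t)sizeof(h)) return -1;
        if (len && send(sock, args, len, 0) != (ssize_t)len) return -1;

        int32_t status = -1;                       // the server replies with a status code
        if (recv(sock, &status, sizeof(status), MSG_WAITALL) != (ssize_t)sizeof(status)) return -1;
        return status;
    }

    int main() {
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in srv{};
        srv.sin_family = AF_INET;
        srv.sin_port = htons(9999);                          // placeholder port
        inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);     // placeholder GPU-server address
        if (connect(sock, (sockaddr *)&srv, sizeof(srv)) != 0) return 1;

        uint64_t alloc_size = 1 << 20;             // "allocate 1 MiB" as raw argument bytes
        int status = forward_call(sock, /*opcode=*/1, &alloc_size, sizeof(alloc_size));
        close(sock);
        return status == 0 ? 0 : 1;
    }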
Current rCUDA supports EDR 100G InfiniBand using Mellanox adapters for providing higher network bandwidth . GVirtuS implements a CUDA wrapper library, the frontend and backend drivers, and communicators supporting various hypervisors including KVM, Xen, and VMware. The frontend and backend drivers are placed in the guest and the host, respectively. The two drivers communicate with each other by a communicator specific to each hypervisor. GVirtuS identifies that the performance of GPU virtualization depends on communication throughput between the frontend and the backend. To address this issue, GVirtuS implements pluggable communication components that utilize high performance communication channels provided by the hypervisors. The communicators for KVM, Xen, and VMware employ VMSocket, XenLoop BIB003 , and the VMware Communicator Interface (VMCI) BIB005 as communication channels, respectively. In the latest version, the VMShm communicator BIB006 , which leverages shared memory, was introduced for better communication performance. It allocates a POSIX shared memory chunk on the host OS and allows both the backend and frontend to map the memory for communication. GVirtuS also provides a TCP/IP-based communicator for remote GPU-based acceleration. GVM ] is based on a model that predicts the performance of GPU applications. GVM validates this model by introducing its own virtualization infrastructure, which consists of the user process APIs, the GPU Virtualization Manager (GVM), and the virtual shared memory. The user process APIs expose virtual GPU resources to programmers. The source code then needs to be modified to contain the APIs to utilize the virtual GPUs. GVM runs in the host OS and is responsible for initializing the virtual GPUs, receiving requests from guest OSs, and passing them to physical GPUs. The virtual shared memory is implemented as POSIX shared memory by which the guest and host OSs can communicate with each other. Pegasus ] advances its predecessor, GViM ], by managing virtualized accelerators as first class schedulable and shareable entities. For this purpose, Pegasus introduces the notion of an accelerator virtual CPU (aVCPU), which embodies the state of a VM executing GPU calls on accelerators, similarly to the concept of virtual CPUs (VCPUs). In Pegasus, an aVCPU is a basic schedulable entity and consists of a shared call buffer per domain, a polling thread in the host OS, and the CUDA runtime context. GPU requests from a guest OS are stored in the call buffer shared between the frontend and backend drivers. A polling thread selected by the GPU scheduler then fetches the GPU requests from the buffer and passes them to the actual CUDA runtime in the host OS. The scheduling methods Pegasus adopts will be introduced in Section 7.2.2. Shadowfax ] enhances its predecessor, Pegasus . Shadowfax tackles the problem that under Pegasus applications requiring significant GPU computational power are limited to using only local GPUs, although remote nodes may boast additional GPUs. To address this issue, Shadowfax presents the concept of GPGPU assemblies, which can configure diverse virtual platforms based on application demands. This virtual platform concept allows applications to run across node boundaries. For local GPU execution, Shadowfax adopts the GPU virtualization architecture of Pegasus. 
For remote execution, Shadowfax implements a remote server thread that creates a fake guest VM environment, which consists of a call buffer and a polling thread per VM in the remote machine. To reduce remote execution overhead, Shadowfax additionally batches GPU calls and their data. VOCL ] presents a GPU virtualization solution for OpenCL applications. Similarly to rCUDA , VOCL adopts remote GPU-based acceleration to provide virtual devices supporting OpenCL. VOCL provides an OpenCL wrapper library on the client side and a VOCL proxy process on the server side. The proxy process receives inputs from the library and executes them on remote GPUs. The wrapper library and the proxy process communicate via MPI . The authors claim that, compared to other transport methods, MPI provides a rich communication interface and can establish communication channels dynamically. DS-CUDA ] provides a remote GPU virtualization platform similar to rCUDA . It is composed of a compiler, which translates CUDA API calls to the respective wrapper functions, and a server, which receives GPU calls and their data via InfiniBand verbs or an RPC socket. Compared to other similar solutions, DS-CUDA implements redundant calculation to improve reliability: two different GPUs in the cluster perform the same computation to ensure that the result returned from the cluster is correct.
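The systems above all follow the same API-remoting pattern: a wrapper library exports the CUDA (or OpenCL) entry points, marshals each intercepted call into a request, and ships it to a backend that executes the call on the real runtime. The sketch below illustrates this pattern for a single call; the request layout, opcode, and in-process stub backend are invented for illustration and do not correspond to the actual wire format of GViM, vCUDA, rCUDA, GVirtuS, VOCL, or DS-CUDA.

```c
/* Minimal sketch of API remoting: a wrapper exports the runtime's symbol,
 * packs the intercepted call into a request, and hands it to a transport
 * (shared memory, RPC, MPI, or a TCP socket) whose backend runs the real
 * call. The stub backend here keeps the example self-contained. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef int cudaError_t;                 /* simplified return type           */
enum { REMOTE_CUDA_MALLOC = 1 };         /* hypothetical opcode              */

struct remote_call {
    uint32_t opcode;                     /* which API call is forwarded      */
    uint64_t size;                       /* argument: allocation size        */
    uint64_t ret_handle;                 /* result: opaque device pointer    */
    int32_t  ret_code;                   /* result: error code               */
};

/* Stub standing in for the host-side component (daemon, driver-domain
 * thread, ...) that would invoke the real CUDA runtime. */
static void backend_execute(struct remote_call *c)
{
    if (c->opcode == REMOTE_CUDA_MALLOC) {
        c->ret_handle = (uint64_t)(uintptr_t)malloc(c->size); /* fake GPU memory */
        c->ret_code = 0;
    }
}

/* Exported with the CUDA runtime's symbol name so an unmodified guest
 * application calls the wrapper instead of the real library. */
cudaError_t cudaMalloc(void **devPtr, size_t size)
{
    struct remote_call c = { REMOTE_CUDA_MALLOC, size, 0, 0 };
    backend_execute(&c);                 /* in reality: send and await reply */
    *devPtr = (void *)(uintptr_t)c.ret_handle;
    return c.ret_code;
}

int main(void)
{
    void *dev = NULL;
    cudaError_t err = cudaMalloc(&dev, 1 << 20);  /* request 1MiB of "device" memory */
    printf("cudaMalloc returned %d, handle %p\n", err, dev);
    return 0;
}
```

Batching optimizations such as vCUDA's Lazy RPC essentially defer the send step for calls whose results are not needed immediately, so several requests can be flushed to the backend at once.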
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Para Virtualization <s> Modern graphics co-processors (GPUs) can produce high fidelity images several orders of magnitude faster than general purpose CPUs, and this performance expectation is rapidly becoming ubiquitous in personal computers. Despite this, GPU virtualization is a nascent field of research. This paper introduces a taxonomy of strategies for GPU virtualization and describes in detail the specific GPU virtualization architecture developed for VMware's hosted products (VMware Workstation and VMware Fusion). ::: ::: We analyze the performance of our GPU virtualization with a combination of applications and micro bench-marks. We also compare against software rendering, the GPU virtualization in Parallels Desktop 3.0, and the native GPU. We find that taking advantage of hardware acceleration significantly closes the gap between pure emulation and native, but that different implementations and host graphics stacks show distinct variation. The micro bench-marks show that our architecture amplifies the overheads in the traditional graphics API bottlenecks: draw calls, downloading buffers, and batch sizes. ::: ::: Our virtual GPU architecture runs modern graphics-intensive games and applications at interactive frame rates while preserving virtual machine portability. The applications we tested achieve from 86% to 12% of native rates and 43 to 18 frames per second with VMware Fusion 2.0. <s> BIB001
VMware SVGA II BIB001 ] is a GPU virtualization approach provided by VMware's hosted products including VMware Workstation and VMware Fusion. The authors point out that API remoting is straightforward to implement but completely surrenders interposition, which allows the hypervisor to arbitrate hardware access between a VM and the physical hardware. This makes it difficult for the hypervisor to implement basic virtualization features such as suspend-to-disk and live migration. VMware provides the VMware SVGA Driver built based on the open documentation of AMD GPUs [AMD 2009 ]. This driver replaces the original GPU driver in the guest. The SVGA Driver is given access to a virtual GPU called VMware SVGA II created by the hypervisor. This is not a physical graphics card but acts like a physical one by providing three virtual hardware resources: registers, Guest Memory Regions (GMRs), and a FIFO command queue. The registers are used for hardware management such as mode switching and IRQ acknowledgment. The GMRs emulate physical GPU VRAM and are allocated in the host memory. The FIFO command queue, which adopts a lock-free data structure to eliminate synchronization overhead, receives GPU commands from the guest. The backend in the host, called Mouse-Keyboard-Screen (MKS), then fetches and issues the GPU requests asynchronously from the FIFO queue. VMware SVGA II focuses on supporting graphics acceleration rather than GPGPU computing. LoGV ] implements para virtualization in KVM using a modified PathScale GPU driver (pscnv) [PathScale 2012] in the guest OS. This is an open source driver implemented based on reverse engineering of NVIDIA drivers. The key point of LoGV's virtualization is to partition the GPU device memory into several pieces and to grant a guest OS direct access to its own portion. Modern GPUs have their own memory management unit (MMU) to map the partitioned GPU memory into the GPU application's address space. By managing and configuring each GPU page table referred to by the GPU MMU, LoGV allows each VM to access the mapped region without involvement from the hypervisor. LoGV only mediates memory allocation or mapping operations to ensure that no VM establishes mappings to other VMs' address spaces. The GPU driver in the guest OS is modified for this purpose to send these sensitive operations to the hypervisor. The virtual device in the hypervisor is then responsible for validation of these requests. After any necessary checks, the virtual device delivers the allocation and mapping requests to the GPU driver in the host, which performs the actual allocation. A command submission channel is virtualized in the same way; GPU applications can send commands to GPUs without intervention from the hypervisor after a request for creating a virtual command channel is validated. VGRIS ] adopts the para virtualization technique implemented in VMware SVGA II BIB001 ] for gaming applications in virtualization environments. Using VMware Player 4.0, VGRIS further develops an agent for each VM in the host, which is responsible for monitoring the performance of each individual VM and sending this information to the GPU scheduler. The scheduling method will be explained in Section 7.2.2. A further study developed a para virtualization solution for the Heterogeneous System Architecture (HSA) ] proposed by AMD. HSA combines a CPU and a GPU on the same silicon die to relieve the communication overhead between them. The authors try to address this architectural change in KVM-based virtualization.
First, HSA realizes shared virtual memory, where the CPU and the GPU share the same virtual address space. To virtualize this concept, the authors assign the page tables used by the CPU MMU to the GPU IOMMU, reusing them as GPU shadow page tables. Second, HSA generates an interrupt when a GPU memory access causes a page fault; triggered by this interrupt, the CPU can modify the corresponding page table for the GPU. To virtualize this feature, the authors modify the shadow page table on the interrupt and also notify the corresponding guest OS to modify its guest page table. The shadow page tables are actually used for address translation, but the guest page table modification is also required for maintaining system integrity. The GPU driver in the guest OS is modified to deal with this notification. Finally, HSA allows the CPU and the GPU to share a buffer in user space that stores GPU commands. To virtualize this feature, the authors expose the address of the shared buffer in the guest to the GPU so the GPU can fetch GPU commands directly from the guest.
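Several of the designs above rely on a command buffer shared between the command producer (the guest driver or application) and the consumer (the host backend or the GPU itself), for example VMware SVGA II's FIFO command queue and HSA's user-space command buffers. The sketch below shows a minimal single-producer/single-consumer ring of this kind; the layout and command values are illustrative, and a real ring shared across cores or with a device would also need memory barriers.

```c
/* Single-producer/single-consumer command ring: the producer advances the
 * tail when it enqueues a command, the consumer advances the head when it
 * issues one, so no lock is needed for exactly one producer and consumer. */
#include <stdint.h>
#include <stdio.h>

#define FIFO_CAPACITY 8

struct cmd_fifo {
    uint32_t commands[FIFO_CAPACITY];
    volatile uint32_t head;   /* next slot the consumer will read   */
    volatile uint32_t tail;   /* next slot the producer will write  */
};

static int fifo_push(struct cmd_fifo *f, uint32_t cmd)      /* guest side */
{
    uint32_t next = (f->tail + 1) % FIFO_CAPACITY;
    if (next == f->head) return -1;          /* full: producer must wait   */
    f->commands[f->tail] = cmd;
    f->tail = next;                          /* publish the new entry      */
    return 0;
}

static int fifo_pop(struct cmd_fifo *f, uint32_t *cmd)       /* host side  */
{
    if (f->head == f->tail) return -1;       /* empty: nothing to issue    */
    *cmd = f->commands[f->head];
    f->head = (f->head + 1) % FIFO_CAPACITY;
    return 0;
}

int main(void)
{
    struct cmd_fifo f = { .head = 0, .tail = 0 };
    fifo_push(&f, 0x1001);                   /* e.g., a state-setting command */
    fifo_push(&f, 0x1002);                   /* e.g., a draw or dispatch      */

    uint32_t cmd;
    while (fifo_pop(&f, &cmd) == 0)
        printf("consumer issues command 0x%x to the GPU\n", (unsigned)cmd);
    return 0;
}
```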
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Full Virtualization <s> Numerous systems have been designed which use virtualization to subdivide the ample resources of a modern computer. Some require specialized hardware, or cannot support commodity operating systems. Some target 100% binary compatibility at the expense of performance. Others sacrifice security or functionality for speed. Few offer resource isolation or performance guarantees; most provide only best-effort provisioning, risking denial of service.This paper presents Xen, an x86 virtual machine monitor which allows multiple commodity operating systems to share conventional hardware in a safe and resource managed fashion, but without sacrificing either performance or functionality. This is achieved by providing an idealized virtual machine abstraction to which operating systems such as Linux, BSD and Windows XP, can be ported with minimal effort.Our design is targeted at hosting up to 100 virtual machine instances simultaneously on a modern server. The virtualization approach taken by Xen is extremely efficient: we allow operating systems such as Linux and Windows XP to be hosted simultaneously for a negligible performance overhead --- at most a few percent compared with the unvirtualized case. We considerably outperform competing commercial and freely available solutions in a range of microbenchmarks and system-wide tests. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Full Virtualization <s> The most popular I/O virtualization method today is paravirtual I/O. Its popularity stems from its reasonable performance levels while allowing the host to interpose, i.e., inspect or control, the guest's I/O activity. ::: ::: We show that paravirtual I/O performance still significantly lags behind that of state-of-the-art noninterposing I/O virtualization, SRIOV. Moreover, we show that in the existing paravirtual I/O model, both latency and throughput significantly degrade with increasing number of guests. This scenario is becoming increasingly important, as the current trend of multi-core systems is towards an increasing number of guests per host. ::: ::: We present an efficient and scalable virtual I/O system that provides all of the benefits of paravirtual I/O. Running host functionality on separate cores dedicated to serving multiple guest's I/O combined with a fine-grained I/O scheduling and exitless notifications our I/O virtualization system provides performance which is 1.2×-3× better than the baseline, approaching and in some cases exceeding non-interposing I/O virtualization performance. <s> BIB002
GPUvm ] implements both full and para virtualization in the Xen hypervisor by using a Nouveau driver [X.OrgFoundation 2011] on the guest OS side. To isolate multiple VMs on a GPU in full virtualization, GPUvm partitions both the physical GPU memory and the MMIO region into several pieces and assigns each portion to an individual VM. A GPU shadow page table per VM enables access to the partitioned memory by translating the virtual GPU addresses to the physical GPU addresses of the partitioned memory. Each shadow page table is updated on TLB flush. In CPU virtualization, the hypervisor updates shadow page tables when page faults occur. However, GPUvm cannot deal with page faults from the GPU because of a limitation of the current NVIDIA GPU design. Therefore, GPUvm must scan the entire page table on every TLB flush. The partitioned MMIO region is configured as read-only so every GPU access from a guest can generate a page fault. GPUvm then intercepts and emulates the access in the driver domain of Xen. Because the number of command submission channels (explained in Section 2.1) is limited in hardware, GPUvm also virtualizes them by creating shadow channels and mapping a virtual channel to a shadow channel. This full virtualization technique shows poor performance for two reasons: (1) the interception of every GPU access and (2) the scanning of the entire page table on every TLB flush. GPUvm addresses the first limitation with BAR Remap, which only intercepts GPU calls related to accesses to GPU channel descriptors. A possible isolation issue caused by passing through other GPU accesses is addressed by utilizing shadow page tables, which isolate BAR area accesses among VMs. For the second limitation, GPUvm suggests a para virtualization technique. Similarly to Xen BIB001 ], GPUvm constructs guest GPU page tables and allows VMs to use these page tables directly instead of shadow page tables. The guest driver issues hypercalls to GPUvm when its GPU page table needs to be updated. GPUvm then validates these requests for isolation between VMs. gVirt ] is based on its previous work, XenGT , and implements full GPU virtualization for Intel on-chip GPUs in the Xen hypervisor. It focuses on graphics acceleration rather than GPGPU computing. gVirt asserts that the frame and command buffers are the most performance-critical resources in GPUs. It allows each VM to access the two buffers directly (pass-through) without intervention from the hypervisor. For this purpose, the graphics memory resource is partitioned by the gVirt Mediator so each VM can have its own frame and command buffers in the partitioned memory. At the same time, privileged GPU instructions are trapped and emulated by the gVirt Mediator in the driver domain of Xen. This enables secure isolation among multiple VMs without significant performance loss. The whole process is called mediated pass-through. KVMGT ] is a ported version of gVirt for KVM and has been integrated into the mainline Linux kernel since version 4.10. gHyvi points out that its predecessor, gVirt, suffers from severe performance degradation when a GPU application in a VM performs frequent updates on guest GPU page tables. These frequent updates cause excessive VM exits BIB002 , which are expensive operations in hardware-based virtualization. This is also known as the massive update issue. gHyvi introduces relaxed page table shadowing, which removes the write-protection of the page tables to avoid excessive trapping.
The technique instead rebuilds the affected shadow page tables lazily, at a later point in time when reconstruction is required. This lazy reconstruction is possible because modifications to the guest page tables will not take effect before the relevant GPU operations are submitted to the GPU command buffer. gScale ] solves gVirt's scalability limitation. gVirt partitions the global graphics memory (2GB) into several fixed-size regions and allocates them to vGPUs. Due to the recommended memory allocation for each vGPU (e.g., 448MB in Linux), gVirt limits the total number of vGPUs to 4. gScale overcomes this limitation by making the GPU memory shareable. For the high graphics memory in Intel GPUs, gScale allows each vGPU to maintain its own private shadow graphics translation table (GTT). Each private GTT translates the vGPU's logical graphics addresses to any physical address in the high memory. On context switching, gScale copies the next vGPU's private GTT to the physical GTT to activate that vGPU's graphics address space. For the low memory, which is also accessible by the CPU, gScale introduces Ladder mapping combined with private shadow GTTs. As virtual CPUs and GPUs are scheduled asynchronously, a virtual CPU may access illegal memory if it refers to the current graphics address space. Ladder mapping modifies the Extended Page Table (EPT) used by the virtual CPU so it can bypass the graphics memory space. With these schemes, gScale can host up to 15 vGPUs for Linux VMs and 12 for Windows VMs. The scheduling method that gScale adopts is discussed in Section 7.2.2.
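A simplified sketch of the private-shadow-GTT idea in gScale is given below: each vGPU owns a private translation table, and a context switch copies it onto the single hardware-visible table. The table size, entry type, and switch routine are placeholders rather than gScale's real data structures, and the low-memory Ladder mapping is omitted.

```c
/* Illustrative sketch of per-vGPU private shadow GTTs: a vGPU context switch
 * installs the incoming vGPU's private translations into the physical GTT so
 * that its graphics address space becomes active. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define GTT_ENTRIES 1024                         /* real tables are far larger */

static uint64_t physical_gtt[GTT_ENTRIES];       /* hardware-visible table     */

struct vgpu {
    int id;
    uint64_t private_gtt[GTT_ENTRIES];           /* per-vGPU shadow table      */
};

static void vgpu_context_switch(const struct vgpu *next)
{
    /* Activate the next vGPU's graphics address space by copying its
     * private translations onto the physical GTT. */
    memcpy(physical_gtt, next->private_gtt, sizeof(physical_gtt));
    printf("vGPU %d's graphics address space is now active\n", next->id);
}

int main(void)
{
    struct vgpu a = { .id = 0 }, b = { .id = 1 };
    a.private_gtt[0] = 0x100000;                 /* fake physical page, vGPU 0 */
    b.private_gtt[0] = 0x200000;                 /* fake physical page, vGPU 1 */

    vgpu_context_switch(&a);
    vgpu_context_switch(&b);
    return 0;
}
```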
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods Supporting a Single VM <s> SUMMARY This paper describes the LINPACK Benchmark and some of its variations commonly used to assess the performance of computer systems. Aside from the LINPACK Benchmark suite, the TOP500 and the HPL codes are presented. The latter is frequently used to obtained results for TOP500 submissions. Information is also given on how to interpret the results of the benchmark and how the results fit into the performance evaluation process. Copyright c � 2003 John Wiley & Sons, Ltd. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods Supporting a Single VM <s> PURPOSE ::: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). ::: ::: ::: METHODS ::: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDATM programming model (NVIDIA Corporation, Santa Clara, CA). ::: ::: ::: RESULTS ::: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. ::: ::: ::: CONCLUSIONS ::: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future. <s> BIB002 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods Supporting a Single VM <s> Virtualization poses new challenges to I/O performance. The single-root I/O virtualization (SR-IOV) standard allows an I/O device to be shared by multiple Virtual Machines (VMs), without losing performance. We propose a generic virtualization architecture for SR-IOV-capable devices, which can be implemented on multiple Virtual Machine Monitors (VMMs). With the support of our architecture, the SR-IOV-capable device driver is highly portable and agnostic of the underlying VMM. Because the Virtual Function (VF) driver with SR-IOV architecture sticks to hardware and poses a challenge to VM migration, we also propose a dynamic network interface switching (DNIS) scheme to address the migration challenge. Based on our first implementation of the network device driver, we deployed several optimizations to reduce virtualization overhead. Then, we conducted comprehensive experiments to evaluate SR-IOV performance. The results show that SR-IOV can achieve a line rate throughput (9.48 Gbps) and scale network up to 60 VMs, at the cost of only 1.76% additional CPU overhead per VM, without sacrificing throughput and migration. <s> BIB003 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods Supporting a Single VM <s> The Westmere processor is implemented on a high-к metal-gate 32nm process technology [1] as a compaction of the Nehalem processor family [2]. Figure 5.1.1 shows the 6-core dual-socket server processor and the 2-core single-socket processor for mainstream client. 
This paper focuses on innovations and circuit optimizations made to the 6-core processor. The 6-core design has 1.17B transistors including the 12MB shared L3 Cache and fits in approximately the same die area as its 45nm 4-core 8MB-L3-cache Nehalem counterpart. The core supports new instructions for accelerating encryption/decryption algorithms, speeds up performance under virtualized environments, and contains a host of other targeted performance features. <s> BIB004 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods Supporting a Single VM <s> The usage and adoption of General Purpose GPUs (GPGPU) in HPC systems is increasing due to the unparalleled performance advantage of the GPUs and the ability to fulfill the ever-increasing demands for floating points operations. While the GPU can offload many of the application parallel computations, the system architecture of a GPU-CPU-InfiniBand server does require the CPU to initiate and manage memory transfers between remote GPUs via the high speed InfiniBand network. In this paper we introduce for the first time a new innovative technology--GPUDirect that enables Tesla GPUs to transfer data via InfiniBand without the involvement of the CPU or buffer copies, hence dramatically reducing the GPU communication time and increasing overall system performance and efficiency. We also explore for the first time the performance benefits of GPUDirect using Amber and LAMMPS applications. <s> BIB005 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Methods Supporting a Single VM <s> This paper presents a GPU-assisted version of the LIBSVM library for Support Vector Machines. SVMs are particularly popular for classification procedures among the research community, but for large training data the processing time becomes unrealistic. The modification that is proposed is porting the computation of the kernel matrix elements to the GPU, to significantly decrease the processing time for SVM training without altering the classification results compared to the original LIBSVM. The experimental evaluation of the proposed approach highlights how the GPU-accelerated version of LIBSVM enables the more efficient handling of large problems, such as large-scale concept detection in video. <s> BIB006
Amazon Elastic Compute Cloud (Amazon EC2) ] is the first cloud hosting service that supports GPUs for cloud tenants by using the Intel VT-d pass-through technology . In 2010, Amazon EC2 introduced Cluster GPU Instances (CGIs), which provide two NVIDIA Tesla GPUs per VM . CGIs can support HPC applications requiring massive parallel processing power by exposing native GPUs to each guest OS directly. One study explored the performance of a cluster of 32 CGIs in Amazon EC2. They tested the SHOC and Rodinia benchmark suites as synthetic kernels, NAMD and MC-GPU BIB002 as real-world applications in science and engineering, and the HPL benchmark ] as a widely used implementation of Linpack BIB001 ]. They measured the performance both in virtualization using Amazon EC2 CGIs and in a native environment using their own cluster. The authors show that computationally intensive programs can generally take full advantage of GPUs in the cloud setting. However, memory-intensive applications can experience a small penalty because Amazon EC2 CGIs enable ECC memory error check features, which can limit memory bandwidth. Also, network-intensive GPU applications may suffer from virtualized network access, which reduces scalability. Another study implemented a GPU pass-through system using Xen and KVM and performed a performance analysis of CUDA applications. The authors explain how to enable GPU pass-through technically in both hypervisors and evaluate the performance of the CUDA SDK benchmark suite. They claim that GPU performance with the Intel pass-through technology in both hypervisors is similar to the performance in a native environment. Shea and Liu [2013] explored the performance of cloud gaming in a GPU pass-through environment. They found that some gaming applications perform poorly when they are deployed in a VM using a dedicated GPU. This is because the virtualized environment cannot secure enough memory bandwidth while transferring data between the host and the GPU compared with a native environment. The authors identify that the performance in KVM is less than 59% of that of their bare-metal system. Detailed profiling shows that some gaming applications generate frequent context switches between the VM and the hypervisor to process memory access requests during memory transfers, which brings memory bandwidth utilization down. Other researchers evaluated the performance of a Xen VM infrastructure using PCI pass-through technology and the SHOC benchmark. They found that there is only a 1.2% performance penalty in the worst case in the Kepler K20m GPU-enabled VM, whereas the API remoting approach incurs a performance overhead of up to 40%. In more recent work, they evaluated HPC workloads in a virtualized cluster using PCI pass-through with SR-IOV BIB003 and GPUDirect BIB005 . SR-IOV is a hardware-assisted network virtualization technology that provides near-native bandwidth on 10-Gigabit connectivity within VMs. GPUDirect reduces the overhead of data transfers across GPUs by supporting direct RDMA between GPUs on an InfiniBand interconnect. For evaluation, they used two molecular dynamics (MD) applications: the large-scale atomic/molecular massively parallel simulator (LAMMPS) ] and highly optimized object-oriented molecular dynamics (HOOMD) . The authors observe that the MD applications using MPI and CUDA can run at near-native performance with only 1.9% and 1.5% overheads for LAMMPS and HOOMD, respectively, compared to their execution in a non-virtualized environment.
One study characterized the performance of VMware ESXi, KVM, Xen, and Linux Containers (LXC) using the PCI pass-through mode. They tested the CUDA SHOC and OpenCL SDK benchmark suites as microbenchmarks and the LAMMPS molecular dynamics simulator ], GPU-LIBSVM BIB006 , and the LULESH shock hydrodynamics simulator as application benchmarks. The authors observe that KVM consistently yields near-native performance in all benchmark programs. VMware ESXi performs well on the Sandy Bridge microarchitecture ], but not on the Westmere microarchitecture BIB004 . The authors speculate that VMware ESXi is optimized for more recent microarchitectures. Xen consistently shows average performance among the hypervisors. Finally, LXC performs closest to the native environment because LXC guests share a single Linux kernel. Another study tried to overcome the inability of GPU pass-through to support sharing of a single GPU between multiple VMs. In GPU pass-through environments, a GPU can be dedicated only to a single VM when the VM boots. The GPU cannot be deallocated until the VM is shut down. The authors implemented coarse-grained sharing by utilizing the hot plug functionality of PCIe channels, which can install or remove GPU devices dynamically. To realize this implementation, a CUDA wrapper library is provided to VMs to monitor the activity of GPU applications. If an application requires a GPU, the wrapper library sends a GPU allocation request to Virtual Machine 0 (VM0), which is the host OS in KVM or domain 0 in Xen. The GPU-Admin in VM0 mediates this request and attaches an available GPU managed by the GPU pool to the VM. When the application finishes its execution, the wrapper library sends a de-allocation request to the GPU-Admin. The GPU-Admin then returns the GPU to the GPU pool.
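The coarse-grained sharing scheme just described can be summarized with a small sketch of the GPU-Admin's pool bookkeeping; the function names and in-process state are illustrative only, and the actual PCIe hot-plug attach/detach operations are abstracted behind comments.

```c
/* Sketch of coarse-grained GPU sharing: a GPU-Admin keeps a pool of physical
 * GPUs, attaches one to a VM when the VM's wrapper library reports that an
 * application needs a GPU, and returns it to the pool on release. */
#include <stdio.h>

#define POOL_SIZE 2

static int gpu_in_use[POOL_SIZE];           /* 0 = free, 1 = attached        */

static int gpu_admin_allocate(int vm_id)
{
    for (int g = 0; g < POOL_SIZE; g++) {
        if (!gpu_in_use[g]) {
            gpu_in_use[g] = 1;
            /* Real system: trigger a PCIe hot-plug attach of GPU g to vm_id. */
            printf("attach GPU %d to VM %d\n", g, vm_id);
            return g;
        }
    }
    return -1;                              /* no GPU available right now    */
}

static void gpu_admin_release(int vm_id, int gpu)
{
    /* Real system: hot-unplug the device from vm_id before reuse.           */
    printf("detach GPU %d from VM %d\n", gpu, vm_id);
    gpu_in_use[gpu] = 0;
}

int main(void)
{
    int g0 = gpu_admin_allocate(7);         /* VM 7's application starts      */
    int g1 = gpu_admin_allocate(9);         /* VM 9's application starts      */
    gpu_admin_release(7, g0);               /* VM 7's application finishes    */
    gpu_admin_release(9, g1);
    return 0;
}
```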
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> SCHEDULING METHODS <s> The inexorable demand for computing power has lead to increasing interest in accelerator-based designs. An accelerator is specialized hardware unit that can perform a set of tasks with much higher performance or power efficiency than a general-purpose CPU. They may be embedded in the pipeline as a functional unit, as in SIMD instructions, or attached to the system as a separate device, as in a cryptographic co-processor. Current operating systems provide little support for accelerators: whether integrated into a processor or attached as a device, they are treated as CPU or a device and given no additional consideration. However, future processors may have designs that require more management by the operating system. For example, heterogeneous processors may only provision some cores with accelerators, and IBM's wire-speed processor allows user-mode code to launch computations on a shared accelerator without kernel involvement. In such systems, the OS can improve performance by allocating accelerator resources and scheduling access to the accelerator as it does for memory and CPU time. In this paper, we discuss the challenges presented by adopting accelerators as an execution resource managed by the operating system. We also present the initial design of our system, which provides flexible control over where and when code executes and can apply power and performance policies. It presents a simple software interface that can leverage new hardware interfaces as well as sharing of specialized units in a heterogeneous system. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> SCHEDULING METHODS <s> Today's operating systems treat GPUs and other computational accelerators as if they were simple devices, with bounded and predictable response times. With accelerators assuming an increasing share of the workload on modern machines, this strategy is already problematic, and likely to become untenable soon. If the operating system is to enforce fair sharing of the machine, it must assume responsibility for accelerator scheduling and resource management. Fair, safe scheduling is a particular challenge on fast accelerators, which allow applications to avoid kernel-crossing overhead by interacting directly with the device. We propose a disengaged scheduling strategy in which the kernel intercedes between applications and the accelerator on an infrequent basis, to monitor their use of accelerator cycles and to determine which applications should be granted access over the next time interval. Our strategy assumes a well defined, narrow interface exported by the accelerator. We build upon such an interface, systematically inferred for the latest Nvidia GPUs. We construct several example schedulers, including Disengaged Timeslice with overuse control that guarantees fairness and Disengaged Fair Queueing that is effective in limiting resource idleness, but probabilistic. Both schedulers ensure fair sharing of the GPU, even among uncooperative or adversarial applications; Disengaged Fair Queueing incurs a 4% overhead on average (max 18%) compared to direct device access across our evaluation scenarios. <s> BIB002 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> SCHEDULING METHODS <s> GPUs are being increasingly adopted as compute accelerators in many domains, spanning environments from mobile systems to cloud computing. 
These systems are usually running multiple applications, from one or several users. However GPUs do not provide the support for resource sharing traditionally expected in these scenarios. Thus, such systems are unable to provide key multiprogrammed workload requirements, such as responsiveness, fairness or quality of service. In this paper, we propose a set of hardware extensions that allow GPUs to efficiently support multiprogrammed GPU workloads. We argue for preemptive multitasking and design two preemption mechanisms that can be used to implement GPU scheduling policies. We extend the architecture to allow concurrent execution of GPU kernels from different user processes and implement a scheduling policy that dynamically distributes the GPU cores among concurrently running kernels, according to their priorities. We extend the NVIDIA GK110 (Kepler) like GPU architecture with our proposals and evaluate them on a set of multiprogrammed workloads with up to eight concurrent processes. Our proposals improve execution time of high-priority processes by 15.6x, the average application turnaround time between 1.5x to 2x, and system fairness up to 3.4x <s> BIB003
GPU scheduling methods are required to fairly and effectively distribute GPU resources between tenants in a shared computing environment. However, GPU virtualization software faces several challenges in applying GPU scheduling policies, for the following reasons. First, GPUs normally do not provide information about how long a GPU request occupies the GPU, which creates a task accounting problem [Dwarakinath BIB002 . Second, system software often regards GPUs as I/O devices rather than full processors and hides the methods of multiplexing the GPU in the device driver. This prevents GPU virtualization software from directly imposing certain scheduling policies on GPUs BIB001 . Finally, GPUs were non-preemptive until recently, which means a long-running GPU kernel cannot be preempted by software until it finishes. This causes unfairness between multiple kernels and severely degrades the responsiveness of latency-critical kernels BIB003 . Recently, a new GPU architecture that supports GPU kernel preemption has emerged in the market [NVIDIA 2016a ], but it is expected that existing GPUs will continue to suffer from this issue. In this section, we introduce representative GPU scheduling policies and mechanisms proposed in the literature to address these challenges. Table III shows a comparison of representative GPU scheduling methods in the literature. We classify the methods in terms of the scheduling discipline, support for load balancing, and software platform. The scheduling discipline is the algorithm used to distribute GPU resources among processes or virtual GPUs (vGPUs). We classify the GPU scheduling methods based on a commonly used classification [Silberschatz et al. 1998 ] as follows:
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Classification of GPU Scheduling Methods <s> This paper proposes a simple rate-based scheduling algorithm for packet-switched networks. Using a set of counters to keep track of the credits accumulated by each traffic flow, the bandwidth share allocated to each flow, and the size of the head-of-line (HOL) packets of the different flows, the algorithm decides which flow to serve next. Our proposed algorithm requires on average a smaller complexity than the most interesting alternative ones while guaranteeing comparable fairness, delay, and delay jitter bounds. To further reduce the complexity, a simplified version (CBFQ-F) of the general algorithm is also proposed for networks with fixed packet lengths, such as ATM, by relaxing the fairness bound by a negligibly small amount. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Classification of GPU Scheduling Methods <s> Cloud computing has been considered as a solution for solving enterprise application distribution and configuration challenges in the traditional software sales model. Migrating from traditional software to Cloud enables on-going revenue for software providers. However, in order to deliver hosted services to customers, SaaS companies have to either maintain their own hardware or rent it from infrastructure providers. This requirement means that SaaS providers will incur extra costs. In order to minimize the cost of resources, it is also important to satisfy a minimum service level to customers. Therefore, this paper proposes resource allocation algorithms for SaaS providers who want to minimize infrastructure cost and SLA violations. Our proposed algorithms are designed in a way to ensure that Saas providers are able to manage the dynamic change of customers, mapping customer requests to infrastructure level parameters and handling heterogeneity of Virtual Machines. We take into account the customers' Quality of Service parameters such as response time, and infrastructure level parameters such as service initiation time. This paper also presents an extensive evaluation study to analyze and demonstrate that our proposed algorithms minimize the SaaS provider's cost and the number of SLA violations in a dynamic resource sharing Cloud environment. <s> BIB002 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Classification of GPU Scheduling Methods <s> Flash-based solid-state drives (SSDs) have the potential to eliminate the I/O bottlenecks in data-intensive applications. However, the large performance discrepancy between Flash reads and writes introduces challenges for fair resource usage. Further, existing fair queueing and quanta-based I/O schedulers poorly manage the I/O anticipation for Flash I/O fairness and efficiency. Some also suppress the I/O parallelism which causes substantial performance degradation on Flash. This paper develops FIOS, a new Flash I/O scheduler that attains fairness and high efficiency at the same time. FIOS employs a fair I/O timeslice management with mechanisms for read preference, parallelism, and fairness-oriented I/O anticipation. Evaluation demonstrates that FIOS achieves substantially better fairness and efficiency compared to the Linux CFQ scheduler, the SFQ(D) fair queueing scheduler, and the Argon quanta-based scheduler on several Flash-based storage devices (including a CompactFlash card in a low-power wimpy node). 
In particular, FIOS reduces the worst-case slowdown by a factor of 2.3 or more when the read-only SPECweb workload runs together with the write-intensive TPC-C. <s> BIB003
• FCFS: First-come, first-served (FCFS) serves processes or vGPUs in the order in which they arrive.
• Round-robin: Round-robin is similar to FCFS but assigns a fixed time unit per process or vGPU, referred to as a time quantum, and then cycles through processes or vGPUs.
• Priority-based: Priority-based scheduling assigns a priority rank to every process or vGPU, and the scheduler executes processes or vGPUs in order of their priority.
• Fair queuing: Fair queuing is common in network and disk scheduling to attain fairness when sharing a limited resource BIB003 . Fair queuing assigns start tags to processes or vGPUs and schedules them in increasing order of start tags. A start tag denotes the accumulated usage time of a GPU (a minimal sketch of this discipline is given after this list).
• Credit-based: Credit-based scheduling is a computationally efficient substitute for fair queuing BIB001 . The scheduler periodically distributes credits to every process or vGPU, and each process or vGPU consumes credits when it is served on the CPU for exploiting the GPU. The scheduler selects a process or vGPU with a positive credit value.
• Affinity-based: This scheduling algorithm produces affinity scores for a process or vGPU to predict the performance impact when it is scheduled on a certain resource.
• Service Level Agreement (SLA)-based: An SLA is a contract between a cloud service provider and a tenant regarding Quality of Service (QoS) and the price. SLA-based scheduling tries to meet the service requirement when distributing GPU resources BIB002 .
Load balancing indicates whether the scheduling method supports the distribution of workloads across multiple processing units. The software platform denotes whether the scheduling method is developed in a single OS or hypervisor environment. We include GPU scheduling research performed in a single OS environment because the same research can also be applicable to virtualized environments without significant modifications to the system software. The GPU scheduling methods in Table III will be discussed in depth in the following section.
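To make the fair-queuing discipline above concrete, the following sketch maintains a start tag per schedulable entity and always dispatches the one with the smallest tag; the weights and per-request costs are invented values, and real schedulers discussed later in this section must additionally estimate the cost of each GPU request.

```c
/* Minimal sketch of fair queuing: each process or vGPU carries a start tag
 * equal to its accumulated (weighted) GPU usage, the scheduler dispatches
 * the entity with the smallest tag, and the tag is advanced by the measured
 * or estimated cost of the dispatched work. */
#include <stdio.h>

#define N 3

struct entity { int id; double start_tag; double weight; };

static struct entity *pick_smallest_tag(struct entity *e, int n)
{
    struct entity *best = &e[0];
    for (int i = 1; i < n; i++)
        if (e[i].start_tag < best->start_tag) best = &e[i];
    return best;
}

int main(void)
{
    struct entity e[N] = { {0, 0.0, 1.0}, {1, 0.0, 1.0}, {2, 0.0, 2.0} };

    for (int step = 0; step < 6; step++) {
        struct entity *next = pick_smallest_tag(e, N);
        double cost = 10.0;                     /* GPU time this request used */
        next->start_tag += cost / next->weight; /* higher weight grows slower */
        printf("dispatch entity %d (start tag now %.1f)\n",
               next->id, next->start_tag);
    }
    return 0;
}
```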
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> Numerous systems have been designed which use virtualization to subdivide the ample resources of a modern computer. Some require specialized hardware, or cannot support commodity operating systems. Some target 100% binary compatibility at the expense of performance. Others sacrifice security or functionality for speed. Few offer resource isolation or performance guarantees; most provide only best-effort provisioning, risking denial of service.This paper presents Xen, an x86 virtual machine monitor which allows multiple commodity operating systems to share conventional hardware in a safe and resource managed fashion, but without sacrificing either performance or functionality. This is achieved by providing an idealized virtual machine abstraction to which operating systems such as Linux, BSD and Windows XP, can be ported with minimal effort.Our design is targeted at hosting up to 100 virtual machine instances simultaneously on a modern server. The virtualization approach taken by Xen is extremely efficient: we allow operating systems such as Linux and Windows XP to be hosted simultaneously for a negligible performance overhead --- at most a few percent compared with the unvirtualized case. We considerably outperform competing commercial and freely available solutions in a range of microbenchmarks and system-wide tests. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> Stony Brook University Libraries. ::: SBU Graduate School in Computer Science. ::: Lawrence Martin (Dean of Graduate School), Professor Tzi-cker Chiueh, Thesis Advisor ::: Computer Science Department, Professor Jennifer L.Wong ::: Computer Science Department. <s> BIB002 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> The graphics processing unit (GPU) is becoming a very powerful platform to accelerate graphics and data-paralle l compute-intensive applications. It significantly outperforms traditional multi-core processors in performance and ener gy effi ciency. Its application domains also range widely from embedded systems to high-performance computing systems. However, operating systems support is not adequate, lackin g models, designs, and implementation efforts of GPU resource management for multi-tasking environments. This paper identifies a GPU resource management model to provide a basis for operating systems research using GPU technology. In particular, we present design concepts for G PU resource management. A list of operating systems challenge s is also provided to highlight future directions of this rese arch domain, including specific ideas of GPU scheduling for realtime systems. Our preliminary evaluation demonstrates tha t the performance of open-source software is competitive wit h that of proprietary software, and hence operating systems r esearch can start investigating GPU resource management. <s> BIB003 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> The Graphics Processing Unit (GPU) is now commonly used for graphics and data-parallel computing. 
As more and more applications tend to accelerate on the GPU in multi-tasking environments where multiple tasks access the GPU concurrently, operating systems must provide prioritization and isolation capabilities in GPU resource management, particularly in real-time setups. ::: ::: We present TimeGraph, a real-time GPU scheduler at the device-driver level for protecting important GPU workloads from performance interference. TimeGraph adopts a new event-driven model that synchronizes the GPU with the CPU to monitor GPU commands issued from the user space and control GPU resource usage in a responsive manner. TimeGraph supports two priority-based scheduling policies in order to address the tradeoff between response times and throughput introduced by the asynchronous and non-preemptive nature of GPU processing. Resource reservation mechanisms are also employed to account and enforce GPU resource usage, which prevent misbehaving tasks from exhausting GPU resources. Prediction of GPU command execution costs is further provided to enhance isolation. ::: ::: Our experiments using OpenGL graphics benchmarks demonstrate that TimeGraph maintains the frame-rates of primary GPU tasks at the desired level even in the face of extreme GPU workloads, whereas these tasks become nearly unresponsive without TimeGraph support. Our findings also include that the performance overhead imposed on TimeGraph can be limited to 4-10%, and its event-driven scheduler improves throughput by about 30 times over the existing tick-driven scheduler. <s> BIB004 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> General-purpose computing on graphics processing units, also known as GPGPU, is a burgeoning technique to enhance the computation of parallel programs. Applying this technique to real-time applications, however, requires additional support for timeliness of execution. In particular, the non-preemptive nature of GPGPU, associated with copying data to/from the device memory and launching code onto the device, needs to be managed in a timely manner. In this paper, we present a responsive GPGPU execution model (RGEM), which is a user-space runtime solution to protect the response times of high-priority GPGPU tasks from competing workload. RGEM splits a memory-copy transaction into multiple chunks so that preemption points appear at chunk boundaries. It also ensures that only the highest-priority GPGPU task launches code onto the device at any given time, to avoid performance interference caused by concurrent launches. A prototype implementation of an RGEM-based CUDA runtime engine is provided to evaluate the real-world impact of RGEM. Our experiments demonstrate that the response times of high-priority GPGPU tasks can be protected under RGEM, whereas their response times increase in an unbounded fashion without RGEM support, as the data sizes of competing workload increase. <s> BIB005 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> Recent windowing systems allow graphics applications to directly access the graphics processing unit (GPU) for fast rendering. However, application tasks that render frames on the GPU contend heavily with the windowing server that also accesses the GPU to blit the rendered frames to the screen. This resource-sharing nature of direct rendering introduces core challenges of priority inversion and temporal isolation in multi-tasking environments. 
In this paper, we identify and address resource-sharing problems raised in GPU-accelerated windowing systems. Specifically, we propose two protocols that enable application tasks to efficiently share the GPU resource in the X Window System. The Priority Inheritance with X server (PIX) protocol eliminates priority inversion caused in accessing the GPU, and the Reserve Inheritance with X server (RIX) protocol addresses the same problem for resource-reservation systems. Our design and implementation of these protocols highlight the fact that neither the X server nor user applications need modifications to use our solutions. Our evaluation demonstrates that multiple GPU-accelerated graphics applications running concurrently in the X Window System can be correctly prioritized and isolated by the PIX and the RIX protocols. <s> BIB006 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> GPGPU (General-purpose computing on graphics processing units) has several difficulties when used in cloud environment, such as narrow bandwidth, higher cost, and lower security, compared with computation using only CPUs. Most high performance computing applications require huge communication between nodes, and do not fit a cloud environment, since network topology and its bandwidth are not fixed and they affect the performance of the application program. However, there are some applications for which little communication is needed, such as molecular dynamics (MD) simulation with the replica exchange method (REM). For such applications, we propose DS-CUDA (Distributed-shared compute unified device architecture), a middleware to use many GPUs in a cloud environment with lower cost and higher security. It virtualizes GPUs in a cloud such that they appear to be locally installed GPUs in a client machine. Its redundant mechanism ensures reliable calculation with consumer GPUs, which reduce the cost greatly. It also enhances the security level since no data except command and data for GPUs are stored in the cloud side. REM-MD simulation with 64 GPUs showed 58 and 36 times more speed than a locally-installed GPU via InfiniBand and the Internet, respectively. <s> BIB007 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> GPGPUs (General Purpose Graphic Processing Units) provide massive computational power. However, applying GPGPU technology to real-time computing is challenging due to the non-preemptive nature of GPGPUs. Especially, a job running in a GPGPU or a data copy between a GPGPU and CPU is non-preemptive. As a result, a high priority job arriving in the middle of a low priority job execution or memory copy suffers from priority inversion. To address the problem, we present a new lightweight approach to supporting preemptive memory copies and job executions in GPGPUs. Moreover, in our approach, a GPGPU job and memory copy between a GPGPU and the hosting CPU are run concurrently to enhance the responsiveness. To show the feasibility of our approach, we have implemented a prototype system for preemptive job executions and data copies in a GPGPU. The experimental results show that our approach can bound the response times in a reliable manner. 
In addition, the response time of our approach is significantly shorter than those of the unmodified GPGPU runtime system that supports no preemption and an advanced GPGPU model designed to support prioritization and performance isolation via preemptive data copies. <s> BIB008 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> Today's operating systems treat GPUs and other computational accelerators as if they were simple devices, with bounded and predictable response times. With accelerators assuming an increasing share of the workload on modern machines, this strategy is already problematic, and likely to become untenable soon. If the operating system is to enforce fair sharing of the machine, it must assume responsibility for accelerator scheduling and resource management. Fair, safe scheduling is a particular challenge on fast accelerators, which allow applications to avoid kernel-crossing overhead by interacting directly with the device. We propose a disengaged scheduling strategy in which the kernel intercedes between applications and the accelerator on an infrequent basis, to monitor their use of accelerator cycles and to determine which applications should be granted access over the next time interval. Our strategy assumes a well defined, narrow interface exported by the accelerator. We build upon such an interface, systematically inferred for the latest Nvidia GPUs. We construct several example schedulers, including Disengaged Timeslice with overuse control that guarantees fairness and Disengaged Fair Queueing that is effective in limiting resource idleness, but probabilistic. Both schedulers ensure fair sharing of the GPU, even among uncooperative or adversarial applications; Disengaged Fair Queueing incurs a 4% overhead on average (max 18%) compared to direct device access across our evaluation scenarios. <s> BIB009 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> Graphics processing units (GPUs) are being widely used as co-processors in many application domains to accelerate general-purpose workloads that are computationally intensive, known as GPGPU computing. Real-time multi-tasking support is a critical requirement for many emerging GPGPU computing domains. However, due to the asynchronous and non-preemptive nature of GPU processing, in multi-tasking environments, tasks with higher priority may be blocked by lower priority tasks for a lengthy duration. This severely harms the system’s timing predictability and is a serious impediment limiting the applicability of GPGPU in many real-time and embedded systems. In this paper, we present an efficient GPGPU preemptive execution system (GPES), which combines user-level and driverlevel runtime engines to reduce the pending time of high-priority GPGPU tasks that may be blocked by long-freezing low-priority competing workloads. GPES automatically slices a long-running kernel execution into multiple subkernel launches and splits data transaction into multiple chunks at user-level, then inserts preemption points between subkernel launches and memorycopy operations at driver-level. We implement a prototype of GPES, and use real-world benchmarks and case studies for evaluation. Experimental results demonstrate that GPES is able to reduce the pending time of high-priority tasks in a multitasking environment by up to 90% over the existing GPU driver solutions, while introducing small overheads. 
<s> BIB010 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Algorithms for Scheduling a Single GPU <s> Accelerators, such as Graphic Processing Units (GPUs), are popular components of modern parallel systems. Their energy-efficient performance make them attractive components for modern data center nodes. However, they lack control for fair resource sharing amongst multiple users. This paper presents a runtime and Just In Time compiler that enable resource sharing control and software managed scheduling on accelerators. It is portable and transparent, requiring no modification or recompilation of existing systems or user applications. We provide an extensive evaluation of our scheme with over 40,000 different workloads on 2 platforms and we deliver fairness improvements ranging from 6.8x to 13.66x. In addition, we also deliver system throughput speedups ranging from 1.13x to 1.31x. <s> BIB011
7.2.1. Single OS Environment. GERM BIB002 ] is a GPU scheduling policy that utilizes Deficit Round Robin fair queuing , which is a network scheduling algorithm for switching packets from multiple flows. GERM maintains per-process queues for GPU commands and allows each queue to send commands to the GPU during a predefined time quantum. A queue's deficit or surplus relative to the time quantum is compensated in the next round. This scheme is suitable for non-preemptive GPUs, where a GPU request cannot be preempted and the size of each request can vary significantly. Regarding the accounting of each request, GERM cannot measure the request size exactly because GPUs generally do not interrupt the CPU after a request is processed. Therefore, it adopts heuristics to estimate how long a group of commands will occupy the GPU on average. GERM injects a special GPU command that increments a scratch register containing the number of processed requests in the GPU. By reading this register periodically, GERM infers how much time a GPU command takes. TimeGraph BIB003 BIB004 focuses on GPU scheduling for soft real-time multi-tasking environments. It provides two scheduling policies: Predictable-Response-Time (PRT) and High-Throughput (HT). The PRT policy schedules GPU applications based on their priorities, so important tasks can expect predictable response times. When a group of GPU commands is issued by a process, the group is buffered in the wait queue, which resides in kernel space. TimeGraph configures the GPU to generate an interrupt to the CPU after each group's execution is completed. This is enabled by using pscnv [PathScale 2012] , an open source NVIDIA GPU driver. The PRT scheduler is triggered by each interrupt and fetches the highest-priority group from the wait queue. As the scheduler is invoked every time a group of GPU commands finishes its execution, it incurs non-negligible overhead. The HT scheduler addresses this issue by allowing the current task occupying the GPU to execute its following groups without buffering them in the wait queue when no other higher-priority groups are waiting. RGEM BIB005 ] develops a responsive GPGPU execution model for GPGPU tasks in real-time multi-tasking environments, similar to TimeGraph BIB004 . RGEM introduces two scheduling methods: Memory-Copy Transaction scheduling and Kernel Launch scheduling. The former policy splits a large memory copy operation into several small pieces and inserts preemption points between the separate pieces. This prevents a long-running memory copy operation from occupying the GPU indefinitely, which would block the execution of high-priority tasks. The latter policy follows the scheduling algorithm of the PRT scheduler in TimeGraph, except that Kernel Launch scheduling is implemented in user space. PIX BIB006 ] applies TimeGraph BIB004 ] to GPU-accelerated X Window systems. When employing the PRT scheduler in TimeGraph, PIX solves a form of the priority inversion problem where the X server task (X) with low priority can be preempted by a medium-priority task (A) on the GPU while rendering the frames of a high-priority task (B) (i.e., P_B > P_A > P_X). The high-priority task is then blocked for a long time while the X server task deals with the frames of the medium-priority task. PIX suggests a priority inheritance protocol in which the X server inherits the priority of the task whose frames it is currently rendering.
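The priority-inheritance idea used by PIX can be illustrated with the toy model below, in which the X server temporarily adopts the priority of the client whose frames it renders; the names, priority values, and scheduler interface are invented for illustration and are not PIX's actual implementation.

```c
/* Toy model of priority inheritance for a GPU-accelerated windowing system:
 * while the X server blits frames on behalf of a client, it runs at that
 * client's priority, so a medium-priority task can no longer delay a
 * high-priority task's frames. */
#include <stdio.h>

struct gpu_task {
    const char *name;
    int base_prio;    /* static priority (larger value = more important) */
    int eff_prio;     /* effective priority seen by the GPU scheduler    */
};

/* The X server inherits the priority of the client whose frames it renders. */
static void render_for(struct gpu_task *xserver, const struct gpu_task *client)
{
    xserver->eff_prio = client->base_prio > xserver->base_prio
                      ? client->base_prio : xserver->base_prio;
    printf("%s renders for %s at effective priority %d\n",
           xserver->name, client->name, xserver->eff_prio);
}

static void render_done(struct gpu_task *xserver)
{
    xserver->eff_prio = xserver->base_prio;   /* revert to base priority */
}

int main(void)
{
    struct gpu_task xserver = { "X server", 1, 1 };
    struct gpu_task medium  = { "task A",   5, 5 };
    struct gpu_task high    = { "task B",   9, 9 };

    /* Without inheritance, task A (priority 5) could preempt the X server
     * (priority 1) on the GPU even while it handles task B's (priority 9)
     * frames, inverting the intended priorities. */
    render_for(&xserver, &high);
    printf("%s (priority %d) can no longer preempt the server\n",
           medium.name, medium.base_prio);
    render_done(&xserver);
    return 0;
}
```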
Gdev introduces a bandwidth-aware non-preemptive device (BAND) scheduling algorithm. The authors found that the Credit scheduler BIB001 fails to achieve good fairness in GPU scheduling because it assumes that it will run preemptive CPU workloads, whereas GPUs do not support hardware-based preemption. To address this issue, Gdev applies two heuristic modifications to the Credit scheduler. First, the BAND scheduler does not degrade the priority of a GPU task after the credit value of the task becomes zero; it lowers the priority only when the task's actual utilization exceeds the assigned one. This modification compensates for credit errors caused by non-preemptive executions. Second, the BAND scheduler waits for the completion of a task's GPU kernels and assigns a credit value to the task based on its measured GPU usage. This modification contributes to fairer resource allocation.

Disengaged scheduling BIB009 provides a framework for scheduling GPUs and introduces three algorithms to achieve both high fairness and high utilization. The framework endeavors to employ the original NVIDIA driver and libraries; it uses neither the API remoting approach nor a custom GPU driver to mediate GPU calls. Instead, it makes the GPU MMIO region of each task read-only so that every GPU access generates a page fault. The OS then intercepts and buffers GPU calls in kernel space. Disengaged scheduling offers three scheduling policies. First, the Timeslice with Overuse Control scheduling algorithm implements a standard token-based time slice policy. A token is passed to a certain task, and that task can use the GPU during its time slice. The scheduler accounts for overuse by waiting for all submitted requests of the token holder to complete at the end of each time slice. Since the GPU requests of both the token holder and other tasks generate page faults, this policy causes significant overhead due to frequent trapping into the OS. In addition, it is not work-conserving, because the GPU can be underutilized if applications are not GPU intensive. Second, Disengaged Timeslice reduces this overhead by allowing the token holder to issue GPU commands without buffering in kernel space. However, this scheduling is still not work-conserving. Finally, Disengaged Fair Queueing executes several tasks concurrently without trapping in the common case. Only during accounting periods does the scheduler enable the trapping mechanism and run each task sequentially. In these periods, the scheduler samples the request size of each task and feeds this information to fair queuing to approximate each task's cumulative GPU usage. The scheduler then selects several tasks with low start tags to run without trapping until the next accounting period. The scheduler is work-conserving because several tasks can exploit the GPU simultaneously; starting from the Kepler microarchitecture, NVIDIA allows multiple GPU kernels from different tasks to run concurrently [NVIDIA 2012].
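A minimal sketch of the start-tag bookkeeping behind Disengaged Fair Queueing is given below. The data structures, the sampling, and the selection width k are assumptions for illustration, not the scheduler's actual implementation.

```python
class Task:
    def __init__(self, name, share=1.0):
        self.name = name
        self.share = share      # weight in fair queuing
        self.start_tag = 0.0    # virtual time at which the task's next request "starts"

def account(task, sampled_usage_us):
    """During an accounting period, advance the task's virtual time by its
    sampled GPU usage divided by its share (weighted fair queuing)."""
    task.start_tag += sampled_usage_us / task.share

def select_for_free_run(tasks, k=2):
    """Pick the k tasks with the lowest start tags; they run concurrently and
    untrapped until the next accounting period."""
    return sorted(tasks, key=lambda t: t.start_tag)[:k]

# Usage example with three VMs and synthetic usage samples
tasks = [Task("vmA", 1.0), Task("vmB", 2.0), Task("vmC", 1.0)]
account(tasks[0], 8000); account(tasks[1], 8000); account(tasks[2], 1000)
print([t.name for t in select_for_free_run(tasks)])   # ['vmC', 'vmB']
```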
7.2.2. Virtualization Environment. GViM uses both simple Round-Robin scheduling and the Credit scheduling of the Xen hypervisor for scheduling tasks on GPUs. As GViM operates on top of the driver level, it controls the rate of GPU request submissions before the requests reach the driver. GViM implements Round Robin (RR)- and XenoCredit (XC)-based scheduling. RR selects a vGPU sequentially for every fixed time slice and monitors the vGPU's call buffer during that period. XC uses the concept of credit, which represents the allocated GPU time of each vGPU: it processes the vGPU's call buffer for a variable time in proportion to the credit amount, which enables weighted fair sharing between guest VMs.

Pegasus addresses one of the challenges in GPU scheduling, namely that a GPU virtualization framework cannot impose a scheduling policy on GPUs because the method of GPU multiplexing is hidden in the device driver. Pegasus introduces the concept of an accelerator VCPU (aVCPU) to make GPUs basic schedulable entities; the components of an aVCPU are discussed in Section 4.2. Pegasus focuses on satisfying different application requirements by providing diverse methods for scheduling GPUs. It includes first-come, first-served (FCFS), proportional fair-share (AccCredit), strict co-scheduling (CoSched), augmented credit-based (AugC), and SLA feedback-based (SLAF) schedulers. AccCredit adapts the Credit scheduling concept of Xen for GPU scheduling. CoSched applies co-scheduling for barrier-rich parallel applications in which a VCPU of a VM and its corresponding aVCPU frequently synchronize with each other; it forces both entities (i.e., the VCPU and its corresponding aVCPU) to be executed at the same time to address synchronization bottlenecks. However, this strict co-scheduling policy can hamper fairness between multiple VMs. AugC conditionally co-schedules both entities to achieve better fairness only when the target VCPU has enough credits and can lend its credits to its corresponding aVCPU. SLAF applies feedback-based proportional fair-share scheduling: the scheduler periodically monitors Service-Level Objective (SLO) violations through the feedback controller and compensates each domain with extra time when a violation is detected.

Another line of work tackled the problem that a single GPU application sometimes cannot provide enough parallelism to fully utilize a modern GPU. To increase overall GPU utilization, the authors consolidate multiple GPU kernels from different VMs in space and time. Space sharing co-schedules kernels that do not use all streaming multiprocessors (SMs) in the GPU. Time sharing allows more than one kernel to share the same SM if the cumulative resource requirements do not exceed the capability of the SM. Because the NVIDIA Fermi-based GPU used in this research only allows a set of kernels submitted from a single process to be executed concurrently, the authors let GPU kernels from different VMs be handled by a single thread. The scheduler then computes an affinity score between every two kernels to predict the performance improvement when they are space and time shared. In addition, the scheduler calculates potential affinity scores when they are space and time shared with different numbers of thread blocks and threads. The scheduler then selects n kernels to run based on the set of affinity scores.
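The space- and time-sharing conditions used during consolidation can be expressed as simple capacity checks. The sketch below is illustrative only: the per-SM limits and kernel descriptors are invented, and the real framework additionally relies on measured affinity scores rather than this purely static test.

```python
from dataclasses import dataclass

# Assumed per-SM limits, roughly Fermi-like; not taken from the paper.
SM_COUNT = 14
MAX_THREADS_PER_SM = 1536
MAX_SHMEM_PER_SM = 48 * 1024

@dataclass
class Kernel:
    name: str
    blocks: int              # thread blocks requested
    threads_per_block: int
    shmem_per_block: int

def sms_needed(k: Kernel) -> int:
    """Rough estimate of the SMs a kernel can occupy (one resident block per SM here)."""
    return min(k.blocks, SM_COUNT)

def can_space_share(k1: Kernel, k2: Kernel) -> bool:
    """Space sharing: both kernels fit on disjoint SMs."""
    return sms_needed(k1) + sms_needed(k2) <= SM_COUNT

def can_time_share_same_sm(k1: Kernel, k2: Kernel) -> bool:
    """Time sharing on one SM: cumulative per-SM resources stay within the limits."""
    threads = k1.threads_per_block + k2.threads_per_block
    shmem = k1.shmem_per_block + k2.shmem_per_block
    return threads <= MAX_THREADS_PER_SM and shmem <= MAX_SHMEM_PER_SM

# Usage example with two synthetic kernels from different VMs
small = Kernel("vm1_kernel", blocks=4, threads_per_block=256, shmem_per_block=8192)
wide = Kernel("vm2_kernel", blocks=32, threads_per_block=512, shmem_per_block=16384)
print(can_space_share(small, wide), can_time_share_same_sm(small, wide))
```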
GPUvm employs the BAND scheduler of Gdev and addresses a flaw of Credit scheduling. The original BAND scheduler distributes credits to each VM based on the assumption that the total utilization of all vGPUs can reach 100%. However, when the GPU scheduler is active, the GPU can temporarily become idle. This situation causes each vGPU to accumulate unused credit, which may lead to inopportune scheduling decisions. To address this issue, GPUvm first transforms the CPU time that the GPU scheduler occupies into a credit value and then subtracts this value from the total credit value of the current vGPU.

gVirt implements a coarse-grained QoS policy. gVirt allows GPU commands from a VM to be submitted into the guest ring buffer during the VM's time slice. After the time slice, gVirt waits for the ring buffer to be emptied by the GPU, because the GPU is non-preemptive. To minimize this wait period, gVirt develops a coarse-grained flow control method, which ensures that the total length of submitted commands fits within a time slice. gVirt also implements a gang scheduling policy in which dependent graphics engines are scheduled together. The graphics engines in gVirt use semaphores to synchronize accesses to shared data; to eliminate synchronization bottlenecks, gVirt schedules the related engines at the same time.

VGRIS addresses GPU scheduling issues for gaming applications deployed in cloud computing. VGRIS introduces three scheduling policies to meet different performance requirements. The SLA-aware scheduling policy provides just enough GPU resources to each VM to satisfy its SLA requirement. The authors observe that a fair scheduling policy provides resources evenly under contention, but non-GPU-intensive applications may obtain more resources than necessary while GPU-intensive ones may not satisfy their requirements. SLA-aware scheduling slows the execution of fast-running (i.e., non-GPU-intensive) applications so that other, slower applications get more chances to occupy the GPU. For this purpose, it inserts a sleep call at the end of the frame computation code of fast-running applications before the frame is displayed. However, SLA-aware scheduling may lead to low GPU utilization when only a small number of VMs is available. The Proportional-share scheduling policy addresses this issue by distributing GPU resources fairly using the priority-based scheduling policy of TimeGraph BIB004. Finally, the Hybrid scheduling policy combines SLA-aware and Proportional-share scheduling: it first applies SLA-aware scheduling and switches to Proportional-share scheduling if a resource surplus is available.

VGASA advances VGRIS by providing adaptive scheduling algorithms that employ a dynamic feedback control loop based on a proportional-integral (PI) controller BIB007. Similarly to VGRIS, VGASA provides three scheduling policies. SLA-Aware (SA) receives the frames-per-second (FPS) information from the feedback controller and adjusts the length of the sleep time in the frame computation code to meet the predefined SLA requirement (i.e., a rate of 30 FPS). Fair SLA-Aware (FSA) dispossesses fast-running applications of their GPU resources and redistributes them to slow-running ones. Enhanced SLA-Aware (ESA) allows all VMs to have the same FPS rate under maximum GPU utilization; it improves SA by dynamically calculating the SLA requirement at runtime and can thereby address the tradeoff between deploying more applications and providing smoother experiences.
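The feedback loop used by VGASA's SLA-Aware policy can be illustrated with a small PI controller that pads each frame with sleep time until the frame rate settles near the 30 FPS target. The gains, the simulated frame cost, and the loop structure below are assumptions for illustration, not VGASA's controller.

```python
import time

TARGET_FRAME_S = 1.0 / 30.0   # SLA: 30 FPS
KP, KI = 0.5, 0.2             # assumed PI gains, not VGASA's
integral = 0.0
sleep_s = 0.0                 # padding inserted at the end of the frame computation

def render_frame():
    time.sleep(0.010)          # simulated GPU work (~10 ms per frame)

for _ in range(90):
    start = time.time()
    render_frame()
    if sleep_s > 0:
        time.sleep(sleep_s)    # slow down the fast-running application
    frame_time = time.time() - start
    error = TARGET_FRAME_S - frame_time   # > 0 while the application runs too fast
    integral += error
    sleep_s = max(0.0, KP * error + KI * integral)

print(f"frame time ≈ {frame_time*1000:.1f} ms ({1.0/frame_time:.1f} FPS)")
```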
gScale optimizes the GPU scheduler of gVirt. gScale develops private shadow GTTs to address the scalability issue explained in Section 5.2. However, applying private GTTs requires page table copying upon every context switch. To mitigate this overhead, gScale does not perform context switching for idle vGPUs. Furthermore, it implements slot sharing, which divides the high graphics memory into several slots and dedicates a single slot to each vGPU. gScale's scheduler distributes busy vGPUs across the slots so that each busy vGPU can monopolize a slot. This arrangement decreases the amount of page table entry copying.

The recent NVIDIA Pascal architecture [NVIDIA 2016a] implements hardware-based preemption to address the problem of long-running GPU kernels monopolizing the GPU. This situation can cause unfairness between multiple kernels and significantly deteriorate system responsiveness. Existing GPU scheduling methods address this issue by either killing a long-running kernel BIB009 or providing a kernel split tool BIB008 BIB010 BIB011. The Pascal architecture instead allows GPU kernels to be interrupted at instruction-level granularity by saving and restoring each GPU context to and from the GPU's DRAM.
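As an illustration of the kernel-split alternative mentioned above, a long-running kernel can be issued from the host as a series of short launches, creating software preemption points between chunks. The sketch below uses a hypothetical launch_chunk stub rather than any real GPU API.

```python
def launch_chunk(kernel, start, size):
    # Stand-in for an asynchronous GPU kernel launch over [start, start + size);
    # no real CUDA or driver API is assumed here.
    print(f"{kernel} processes items {start}..{start + size - 1}")

def run_split_kernel(kernel, total_items, chunk=4096, should_yield=lambda: False):
    """Host-side kernel splitting: a long-running kernel is issued as many
    short launches, creating preemption points between chunks."""
    done = 0
    while done < total_items:
        size = min(chunk, total_items - done)
        launch_chunk(kernel, done, size)
        done += size
        if should_yield():          # a scheduler may interleave other tasks here
            print("yield point: higher-priority work can run now")

# Usage: 10,000 items processed in 4,096-item slices
run_split_kernel("vector_add", 10_000)
```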
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> CHALLENGES AND FUTURE DIRECTIONS <s> Today's operating systems treat GPUs and other computational accelerators as if they were simple devices, with bounded and predictable response times. With accelerators assuming an increasing share of the workload on modern machines, this strategy is already problematic, and likely to become untenable soon. If the operating system is to enforce fair sharing of the machine, it must assume responsibility for accelerator scheduling and resource management. Fair, safe scheduling is a particular challenge on fast accelerators, which allow applications to avoid kernel-crossing overhead by interacting directly with the device. We propose a disengaged scheduling strategy in which the kernel intercedes between applications and the accelerator on an infrequent basis, to monitor their use of accelerator cycles and to determine which applications should be granted access over the next time interval. Our strategy assumes a well defined, narrow interface exported by the accelerator. We build upon such an interface, systematically inferred for the latest Nvidia GPUs. We construct several example schedulers, including Disengaged Timeslice with overuse control that guarantees fairness and Disengaged Fair Queueing that is effective in limiting resource idleness, but probabilistic. Both schedulers ensure fair sharing of the GPU, even among uncooperative or adversarial applications; Disengaged Fair Queueing incurs a 4% overhead on average (max 18%) compared to direct device access across our evaluation scenarios. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> CHALLENGES AND FUTURE DIRECTIONS <s> Accelerated architectures such as GPUs (Graphics Processing Units) and MICs (Many Integrated Cores) have been proven to increase the performance of many algorithms compared to their CPU counterparts and are widely available in local, campus-wide and national infrastructures, however, their utilization is not following the same pace as their deployment. Reasons for the underutilization lay partly on the software side with proprietary and complex interfaces for development and usage. A common API providing an extra layer to abstract the differences and specific characteristics of those architectures would deliver a far more portable interface for application developers. This cloud challenge proposal presents such an API that addresses these issues using a container-based approach. The resulting environment provides Docker-based containers for deploying accelerator libraries, such as CUDA Toolkit, OpenCL and OpenACC, onto a wide variety of different platforms and operating systems. By leveraging the container approach, we can overlay accelerator libraries onto the host without needing to be concerned about the intricacies of underlying operating system of the host. Docker therefore provides the advantage of being easily applicable on diverse architectures, virtualizing the necessary environment and including libraries as well as applications in a standardized way. The novelty of our approach is the extra layer for utilization and device discovery in this layer improving the usability and uniform development of accelerated methods with direct access to resources. <s> BIB002
Through the analysis of existing GPU virtualization techniques, we conclude that the technical challenges of how to virtualize GPUs have been addressed to a significant extent. However, a number of challenges remain open in terms of the performance and capabilities of GPU virtualization environments. We discuss them in this section, along with some future research directions to address these challenges.

Lightweight virtualization: Linux-based containers are an emerging cloud technology that offers process-level lightweight virtualization. Containers do not require additional wrapper libraries or front/backend driver models to virtualize GPUs, because multiple containers are multiplexed by a single Linux kernel BIB002. This feature allows containers to achieve performance that is close to that of native environments. Unfortunately, current research on GPU virtualization using containers is at an initial stage; published work so far mainly includes performance comparisons between containers and other virtualization solutions. To utilize GPU-equipped containers in cloud computing, fair and effective GPU scheduling is required. Most GPU schedulers require API extensions or driver changes in containers to mediate GPU calls, which would impose non-negligible overhead on containers. One promising option is to adapt Disengaged scheduling BIB001 in the host OS, which needs neither additional wrapper libraries nor custom drivers for GPU scheduling, as explained in Section 7.2.1.
GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Security: <s> As virtual machines become pervasive users will be able to create, modify and distribute new "machines" with unprecedented ease. This flexibility provides tremendous benefits for users. Unfortunately, it can also undermine many assumptions that today's relatively static security architectures rely on about the number of hosts in a system, their mobility, connectivity, patch cycle, etc. ::: ::: We examine a variety of security problems virtual computing environments give rise to. We then discuss potential directions for changing security architectures to adapt to these demands. <s> BIB001 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Security: <s> Haswell, Intel's fourth-generation core processor architecture, delivers a range of client parts, a converged core for the client and server, and technologies used across many products. It uses an optimized version of Intel 22-nm process technology. Haswell provides enhancements in power-performance efficiency, power management, form factor and cost, core and uncore microarchitecture, and the core's instruction set. <s> BIB002 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Security: <s> Recent years have witnessed phenomenal growth in the computational capabilities and applications of GPUs. However, this trend has also led to a dramatic increase in their power consumption. This article surveys research works on analyzing and improving energy efficiency of GPUs. It also provides a classification of these techniques on the basis of their main research idea. Further, it attempts to synthesize research works that compare the energy efficiency of GPUs with other computing systems (e.g., FPGAs and CPUs). The aim of this survey is to provide researchers with knowledge of the state of the art in GPU power management and motivate them to architect highly energy-efficient GPUs of tomorrow. <s> BIB003 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Security: <s> Unified Memory is an emerging technology which is supported by CUDA 6.X. Before CUDA 6.X, the existing CUDA programming model relies on programmers to explicitly manage data between CPU and GPU and hence increases programming complexity. CUDA 6.X provides a new technology which is called as Unified Memory to provide a new programming model that defines CPU and GPU memory space as a single coherent memory (imaging as a same common address space). The system manages data access between CPU and GPU without explicit memory copy functions. This paper is to evaluate the Unified Memory technology through different applications on different GPUs to show the users how to use the Unified Memory technology of CUDA 6.X efficiently. The applications include Diffusion3D Benchmark, Parboil Benchmark Suite, and Matrix Multiplication from the CUDA SDK Samples. We changed those applications to corresponding Unified Memory versions and compare those with the original ones. We selected the NVIDIA Keller K40 and the Jetson TK1, which can represent the latest GPUs with Keller architecture and the first mobile platform of NVIDIA series with Keller GPU. This paper shows that Unified Memory versions cause 10% performance loss on average. Furthermore, we used the NVIDIA Visual Profiler to dig the reason of the performance loss by the Unified Memory technology. 
<s> BIB004 </s> GPU Virtualization and Scheduling Methods: A Comprehensive Survey <s> Security: <s> Modern graphics processing units (GPUs) have complex architectures that admit exceptional performance and energy efficiency for high-throughput applications. Although GPUs consume large amounts of power, their use for high-throughput applications facilitate state-of-the-art energy efficiency and performance. Consequently, continued development relies on understanding their power consumption. This work is a survey of GPU power modeling and profiling methods with increased detail on noteworthy efforts. As direct measurement of GPU power is necessary for model evaluation and parameter initiation, internal and external power sensors are discussed. Hardware counters, which are low-level tallies of hardware events, share strong correlation to power use and performance. Statistical correlation between power and performance counters has yielded worthwhile GPU power models, yet the complexity inherent to GPU architectures presents new hurdles for power modeling. Developments and challenges of counter-based GPU power modeling are discussed. Often building on the counter-based models, research efforts for GPU power simulation, which make power predictions from input code and hardware knowledge, provide opportunities for optimization in programming or architectural design. Noteworthy strides in power simulations for GPUs are included along with their performance or functional simulator counterparts when appropriate. Last, possible directions for future research are discussed. <s> BIB005
A critical function of the hypervisor is to provide secure isolation between VMs BIB001. To fulfill this task, para- and full virtualization frameworks, including LoGV, GPUvm, gVirt, and gScale, prevent a VM from mapping the GPU address spaces of other VMs. Despite this protection mechanism, GPU virtualization frameworks remain vulnerable to denial-of-service (DoS) attacks in which a malicious VM uninterruptedly submits a massive number of GPU commands to the backend and thus jeopardizes the whole system. To address this issue, gVirt resets hung GPUs and kills suspicious VMs after examining each VM's execution state. Unfortunately, this can cause a service suspension for normal VMs. To avoid a GPU reset, a fine-grained access control mechanism is required that can throttle the execution of a malicious VM before it threatens the system. Methods that adopt API remoting, including vCUDA and VOCL, do not implement isolation mechanisms, and their security features need to be reinforced.

Fused CPU-GPU chips: Conventional systems with discrete GPUs have two major disadvantages: (1) data transfer overhead over the PCIe interface, which offers a low maximum bandwidth capacity (i.e., 16GB/s), and (2) the programming effort required to manage the separate data address spaces of the CPU and the GPU. To address these issues, fused CPU-GPU chips furnish a shared memory space between the two processors. Examples include Intel's integrated CPU-GPU BIB002, AMD's HSA architecture, and NVIDIA's unified memory coupled with NVLink. These new architectures can boost the performance of big data applications that require a significant communication volume between the two processors. gVirt (Section 5.2) implemented full virtualization for Intel's GPUs, while the work described in Section 5.1 developed a para-virtualization solution for AMD's fused chips. However, these frameworks only focus on utilizing GPUs and need to adopt sophisticated scheduling algorithms that can utilize both processors by partitioning and load-balancing workloads for fused CPU-GPU architectures. Other work has explored NVIDIA's unified memory to simplify memory management in GPU virtualization. However, the sole use of unified memory incurs non-negligible performance degradation in data-intensive applications BIB004 because NVIDIA maintains its discrete GPU design and automatically migrates data between the host and the GPU. NVLink enables a high-bandwidth path between the GPU and the CPU (achieving between 80 and 200GB/s of bandwidth). A combination of NVIDIA's unified memory and NVLink is required to achieve high performance for data-intensive applications in GPU virtualization.

Power efficiency: Energy efficiency is currently a high research priority for GPU platforms BIB003 BIB005. Compared to the significant volume of research studying GPU power in non-virtualized environments, there is little work on power and energy consumption in virtualized environments with GPUs. One example is pVOCL, which improves the energy efficiency of a remote GPU cluster by controlling peak power consumption between GPUs. Besides controlling the power consumption of GPUs remotely, power efficiency is also required at the host side. Runtime systems that monitor different GPU usage patterns among VMs and dynamically adjust the GPU's power state according to the workload are an open area for further research.
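As a sketch of the kind of runtime system called for above, the fragment below monitors per-VM GPU utilization and selects a coarse power state from assumed thresholds. The states, thresholds, and the apply_power_state hook are all hypothetical; no real driver API is implied.

```python
import random

def apply_power_state(state):
    # Stand-in for a driver-specific hook (e.g., clock/voltage scaling);
    # no real driver or NVML API is assumed here.
    print(f"GPU power state -> {state}")

class PowerGovernor:
    """Threshold-based governor: picks a GPU power state from the aggregate
    per-VM utilization.  The states and thresholds are assumptions."""
    def __init__(self, low=0.2, high=0.7):
        self.low, self.high = low, high
        self.state = "high"

    def update(self, per_vm_util):
        total = sum(per_vm_util.values())
        if total < self.low:
            self.state = "low"
        elif total > self.high:
            self.state = "high"
        else:
            self.state = "medium"
        apply_power_state(self.state)
        return self.state

# Usage with synthetic per-VM utilization samples
gov = PowerGovernor()
for _ in range(3):
    sample = {"vm1": random.uniform(0, 0.5), "vm2": random.uniform(0, 0.5)}
    gov.update(sample)
```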
Space sharing: Recent GPUs allow multiple processes or VMs to launch GPU kernels on a single GPU simultaneously [NVIDIA 2012]. This space-multiplexing approach can improve GPU utilization by fully exploiting SMs with multiple kernels. However, most GPU scheduling methods are based on time-multiplexing, where GPU kernels from different VMs run in sequence on a GPU, which can lead to underutilization. A combination of the two approaches is required to achieve both high GPU utilization and fairness in GPU scheduling.
Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Introduction <s> Pervasive systems must offer an open, extensible, and evolving portfolio of services which integrate sensor data from a diverse range of sources. The core challenge is to provide appropriate and consistent adaptive behaviours for these services in the face of huge volumes of sensor data exhibiting varying degrees of precision, accuracy and dynamism. Situation identification is an enabling technology that resolves noisy sensor data and abstracts it into higher-level concepts that are interesting to applications. We provide a comprehensive analysis of the nature and characteristics of situations, discuss the complexities of situation identification, and review the techniques that are most popularly used in modelling and inferring situations from sensor data. We compare and contrast these techniques, and conclude by identifying some of the open research opportunities in the area. <s> BIB001 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Introduction <s> It is essential for environments that aim at helping people in their daily life that they have some sort of Ambient Intelligence. Learning the preferences and habits of users then becomes an important step in allowing a system to provide such personalized services. Thus far, the exploration of these issues by the scientific community has not been extensive, but interest in the area is growing. Ambient Intelligence environments have special characteristics that have to be taken into account during the learning process. We identify these characteristics and use them to highlight the strengths and weaknesses of developments so far, providing direction to encourage further development in this specific area of Ambient Intelligence. <s> BIB002 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Introduction <s> Commercial home automation systems are becoming increasingly common, affording the opportunity to study technology-augmented homes in real world contexts. In order to understand how these technologies are being integrated into homes and their effects on inhabitants, we conducted a qualitative study involving smart home professionals who provide such technology, people currently in the process of planning or building smart homes, and people currently living in smart homes. We identified motivations for bringing smart technology into homes, and the phases involved in making a home smart. We also explored the varied roles of the smart home inhabitants that emerged during these phases, and several of the challenges and benefits that arise while living in a smart home. Based on these findings we propose open areas and new directions for smart home research. <s> BIB003
The progress of information and communication technologies has many faces; while the computing speed, reliability, and level of miniaturization of electronic devices increase year after year, their costs decrease. This allows a widespread adoption of embedded systems (e.g., appliances, sensors, actuators) and of powerful computing devices (e.g., laptops, smartphones), thus turning pervasive (or ubiquitous) computing into reality. Pervasive computing embodies a vision of computers seamlessly integrating into everyday life, responding to information provided by sensors in the environment, with little or no direct instruction from users BIB001. At the same time, connecting all these computing devices together, as networked artefacts, using local and global network infrastructures has become easy. The rise of applications that exploit these technologies represents a major characteristic of the Internet-of-Things (IoT). Smart spaces represent an emerging class of IoT-based applications. Smart homes and offices are representative examples where pervasive computing can take advantage of ambient intelligence (AmI) more easily than in other scenarios, where artificial intelligence (AI) problems soon become intractable. A study of the current level of adoption of commercial smart home systems is provided in BIB003. This study reveals that people's understanding of the term "smart" has a more general meaning than what we presented here as AmI; in particular, it also includes non-technological aspects such as the spatial layout of the house. Additionally, an automated behavior is considered smart, especially by people without a technical background, only if it performs a task faster than the user could do it themselves. The research also reveals that interest in smart home systems is subject to a virtuous circle, such that people experiencing benefits from their services feel the need to upgrade them. Figure 1 depicts the closed loops that characterize a running smart space BIB002. The main closed loop, depicted using solid arrows and shapes, shows how the knowledge of environment dynamics and of users' behaviors and preferences is employed to interpret sensor output in order to perform appropriate actions on the environment. Sensor data is first analyzed to extract the current context, which is an internal abstraction of the state of the environment from the point of view of the AmI system. The extracted context is then employed to make decisions on the actions to perform on the controlled space. Actions related to these decisions modify the environment (both physical and digital) by means of actuators of different forms.
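The closed loop of Figure 1 can be summarized in a few lines: sensor readings are abstracted into a context, the context is matched against learned habits and preferences, and the resulting decisions drive actuators. The sketch below is purely illustrative; the function names and the decision rule are assumptions.

```python
def extract_context(sensor_readings):
    """Abstract raw sensor values into a symbolic context (assumed rule)."""
    return {
        "occupied": sensor_readings.get("motion", 0) > 0,
        "dark": sensor_readings.get("lux", 1000) < 50,
    }

def decide(context, habits):
    """Match the context against learned habits/preferences (assumed format)."""
    actions = []
    if context["occupied"] and context["dark"] and habits.get("auto_light", True):
        actions.append(("light", "on"))
    return actions

def actuate(actions):
    for device, command in actions:
        print(f"actuator: {device} -> {command}")

# One iteration of the sense-infer-act loop
readings = {"motion": 3, "lux": 12}
actuate(decide(extract_context(readings), {"auto_light": True}))
```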
Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> We characterize situations as constraints on sensor readings expressed in rules. We also introduce an extension of Prolog which we call LogicCAP for programming context-aware applications, where situations are first-class entities. The operator "in-situation" in the language captures a common form of reasoning in context-aware applications, which is to ask if an entity is in a given situation. We show the usefulness of our approach via programming idioms, including defining relations among situations and integration with the Web. <s> BIB001 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Technological advancements have and will revolutionise the support offered to persons in their home environment. As the population continues to grow and in addition the percentage of elderly within the population increases we now face the challenge of improving individual autonomy and quality of life. Smart home technology offering intelligent appliances and remote alarm-based monitoring are moving close towards addressing these issues. ::: ::: To date the research efforts on smart home technology have focused on communications and intelligent user interfaces. The trends in these areas must now, however, focus on the analysis on the data which is generated from the devices within the house as a means of producing 'profiles' of the users and providing intelligent interaction to support their daily activities. A key element in the implementation of these systems is the capability to handle time-related concepts. Here we report about one experience using Active Databases in connection with temporal reasoning in the form of complex event detection to accommodate prevention of hazardous situations. <s> BIB002 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> We are developing a personal activity recognition system that is practical, reliable, and can be incorporated into a variety of health-care related applications ranging from personal fitness to elder care. To make our system appealing and useful, we require it to have the following properties: (i) data only from a single body location needed, and it is not required to be from the same point for every user; (ii) should work out of the box across individuals, with personalization only enhancing its recognition abilities; and (iii) should be effective even with a cost-sensitive subset of the sensors and data features. In this paper, we present an approach to building a system that exhibits these properties and provide evidence based on data for 8 different activities collected from 12 different subjects. Our results indicate that the system has an accuracy rate of approximately 90% while meeting our requirements. We are now developing a fully embedded version of our system based on a cell-phone platform augmented with a Bluetooth-connected sensor board. <s> BIB003 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Pervasive computing is by its nature open and extensible, and must integrate the information from a diverse range of sources. This leads to a problem of information exchange, so sub-systems must agree on shared representations. Ontologies potentially provide a well-founded mechanism for the representation and exchange of such structured information. 
A number of ontologies have been developed specifically for use in pervasive computing, none of which appears to cover adequately the space of concerns applicable to application designers. We compare and contrast the most popular ontologies, evaluating them against the system challenges generally recognized within the pervasive computing community. We identify a number of deficiencies that must be addressed in order to apply the ontological techniques successfully to next-generation pervasive systems. <s> BIB004 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy to install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its annotation is described and made available to the community. Through a number of experiments we show how the hidden Markov model and conditional random fields perform in recognizing activities. We achieve a timeslice accuracy of 95.6% and a class accuracy of 79.4%. <s> BIB005 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Advancements in supporting fields have increased the likelihood that smart-home technologies will become part of our everyday environments. However, many of these technologies are brittle and do not adapt to the user's explicit or implicit wishes. Here, we introduce CASAS, an adaptive smart-home system that utilizes machine learning techniques to discover patterns in resident's daily activities and to generate automation polices that mimic these patterns. Our approach does not make any assumptions about the activity structure or other underlying model parameters but leaves it completely to our algorithms to discover the smart-home resident's patterns. Another important aspect of CASAS is that it can adapt to changes in the discovered patterns based on the resident implicit and explicit feedback and can automatically update its model to reflect the changes. In this paper, we provide a description of the CASAS technologies and the results of experiments performed on both synthetic and real-world data. <s> BIB006 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> In the last years, techniques for activity recognition have attracted increasing attention. Among many applications, a special interest is in the pervasive e-Health domain where automatic activity recognition is used in rehabilitation systems, chronic disease management, monitoring of the elderly, as well as in personal well being applications. Research in this field has mainly adopted techniques based on supervised learning algorithms to recognize activities based on contextual conditions (e.g., location, surrounding environment, used objects) and data retrieved from body-worn sensors. Since these systems rely on a sufficiently large amount of training data which is hard to collect, scalability with respect to the number of considered activities and contextual data is a major issue. In this paper, we propose the use of ontologies and ontological reasoning combined with statistical inferencing to address this problem. Our technique relies on the use of semantic relationships that express the feasibility of performing a given activity in a given context. 
The proposed technique neither increases the obtrusiveness of the statistical activity recognition system, nor introduces significant computational overhead to real-time activity recognition. The results of extensive experiments with data collected from sensors worn by a group of volunteers performing activities both indoor and outdoor show the superiority of the combined technique with respect to a solely statistical approach. To the best of our knowledge, this is the first work that systematically investigates the integration of statistical and ontological reasoning for activity recognition. <s> BIB007 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> This paper addresses the problem of learning situation models for providing context-aware services. Context for modeling human behavior in a smart environment is represented by a situation model describing environment, users, and their activities. A framework for acquiring and evolving different layers of a situation model in a smart environment is proposed. Different learning methods are presented as part of this framework: role detection per entity, unsupervised extraction of situations from multimodal data, supervised learning of situation representations, and evolution of a predefined situation model with feedback. The situation model serves as frame and support for the different methods, permitting to stay in an intuitive declarative framework. The proposed methods have been integrated into a whole system for smart home environment. The implementation is detailed, and two evaluations are conducted in the smart home environment. The obtained results validate the proposed approach. <s> BIB008 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> We explore a dense sensing approach that uses RFID sensor network technology to recognize human activities. In our setting, everyday objects are instrumented with UHF RFID tags called WISPs that are equipped with accelerometers. RFID readers detect when the objects are used by examining this sensor data, and daily activities are then inferred from the traces of object use via a Hidden Markov Model. In a study of 10 participants performing 14 activities in a model apartment, our approach yielded recognition rates with precision and recall both in the 90% range. This compares well to recognition with a more intrusive short-range RFID bracelet that detects objects in the proximity of the user; this approach saw roughly 95% precision and 60% recall in the same study. We conclude that RFID sensor networks are a promising approach for indoor activity monitoring. <s> BIB009 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Advances in technology have provided the ability to equip the home environment with a layer of technology to provide a truly 'Smart Home'. These homes offer improved living conditions and levels of independence for the population who require support with both physical and cognitive functions. At the core of the Smart Home is a collection of sensing technology which is used to monitor the behaviour of the inhabitant and their interactions with the environment. A variety of different sensors measuring light, sound, contact and motion provide sufficient multi-dimensional information about the inhabitant to support the inference of activity determination. 
A problem which impinges upon the success of any information analysis is the fact that sensors may not always provide reliable information due to either faults, operational tolerance levels or corrupted data. In this paper we address the fusion process of contextual information derived from uncertain sensor data. Based on a series of information handling techniques, most notably the Dempster-Shafer theory of evidence and the Equally Weighted Sum operator, evidential contextual information is represented, analysed and merged to achieve a consensus in automatically inferring activities of daily living for inhabitants in Smart Homes. Within the paper we introduce the framework within which uncertainty can be managed and demonstrate the effects that the number of sensors in conjunction with the reliability level of each sensor can have on the overall decision making process. <s> BIB010 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> The pervasive sensing technologies found in smart homes offer unprecedented opportunities for providing health monitoring and assistance to individuals experiencing difficulties living independently at home. A primary challenge that needs to be tackled to meet this need is the ability to recognize and track functional activities that people perform in their own homes and everyday settings. In this paper, we look at approaches to perform real-time recognition of Activities of Daily Living. We enhance other related research efforts to develop approaches that are effective when activities are interrupted and interleaved. To evaluate the accuracy of our recognition algorithms we assess them using real data collected from participants performing activities in our on-campus smart apartment testbed. <s> BIB011 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Intelligent Environments depend on their capability to understand and anticipate user’s habits and needs. Therefore, learning user’s common behaviours becomes an important step towards allowing an environment to provide such personalized services. Due to the complexity of the entire learning system, this paper will focus on the automatic discovering of models of user’s behaviours. Discovering the models means to discover the order of such actions, representing user’s behaviours as sequences of actions. <s> BIB012 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> By 2050, about one third of the French population will be over 65. Our laboratory's current research focuses on the monitoring of elderly people at home, to detect a loss of autonomy as early as possible. Our aim is to quantify criteria such as the international activities of daily living (ADL) or the French Autonomie Gerontologie Groupes Iso-Ressources (AGGIR) scales, by automatically classifying the different ADL performed by the subject during the day. A Health Smart Home is used for this. Our Health Smart Home includes, in a real flat, infrared presence sensors (location), door contacts (to control the use of some facilities), temperature and hygrometry sensor in the bathroom, and microphones (sound classification and speech recognition). A wearable kinematic sensor also informs postural transitions (using pattern recognition) and walk periods (frequency analysis). 
This data collected from the various sensors are then used to classify each temporal frame into one of the ADL that was previously acquired (seven activities: hygiene, toilet use, eating, resting, sleeping, communication, and dressing/undressing). This is done using support vector machines. We performed a 1-h experimentation with 13 young and healthy subjects to determine the models of the different activities, and then we tested the classification algorithm (cross validation) with real data. <s> BIB013 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> The machine learning and pervasive sensing technologies found in smart homes offer unprecedented opportunities for providing health monitoring and assistance to individuals experiencing difficulties living independently at home. In order to monitor the functional health of smart home residents, we need to design technologies that recognize and track activities that people normally perform as part of their daily routines. Although approaches do exist for recognizing activities, the approaches are applied to activities that have been preselected and for which labeled training data are available. In contrast, we introduce an automated approach to activity tracking that identifies frequent activities that naturally occur in an individual's routine. With this capability, we can then track the occurrence of regular activities to monitor functional health and to detect changes in an individual's patterns and lifestyle. In this paper, we describe our activity mining and tracking approach, and validate our algorithms on data collected in physical smart environments. <s> BIB014 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> This paper considers scalable and unobtrusive activity recognition using on-body sensing for context awareness in wearable computing. Common methods for activity recognition rely on supervised learning requiring substantial amounts of labeled training data. Obtaining accurate and detailed annotations of activities is challenging, preventing the applicability of these approaches in real-world settings. This paper proposes new annotation strategies that substantially reduce the required amount of annotation. We explore two learning schemes for activity recognition that effectively leverage such sparsely labeled data together with more easily obtainable unlabeled data. Experimental results on two public data sets indicate that both approaches obtain results close to fully supervised techniques. The proposed methods are robust to the presence of erroneous labels occurring in real-world annotation data. <s> BIB015 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Smart home activity recognition systems can learn generalized models for common activities that span multiple environment settings and resident types. <s> BIB016 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> This paper introduces a knowledge-driven approach to real-time, continuous activity recognition based on multisensor data streams in smart homes. The approach goes beyond the traditional data-centric methods for activity recognition in three ways. First, it makes extensive use of domain knowledge in the life cycle of activity recognition. Second, it uses ontologies for explicit context and activity modeling and representation. 
Third and finally, it exploits semantic reasoning and classification for activity inferencing, thus enabling both coarse-grained and fine-grained activity recognition. In this paper, we analyze the characteristics of smart homes and Activities of Daily Living (ADL) upon which we built both context and ADL ontologies. We present a generic system architecture for the proposed knowledge-driven approach and describe the underlying ontology-based recognition process. Special emphasis is placed on semantic subsumption reasoning algorithms for activity recognition. The proposed approach has been implemented in a function-rich software system, which was deployed in a smart home research laboratory. We evaluated the proposed approach and the developed system through extensive experiments involving a number of various ADL use scenarios. An average activity recognition rate of 94.44 percent was achieved and the average recognition runtime per recognition operation was measured as 2.5 seconds. <s> BIB017 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Many intelligent systems that focus on the needs of a human require information about the activities that are being performed by the human. At the core of this capability is activity recognition. Activity recognition techniques have become robust but rarely scale to handle more than a few activities. They also rarely learn from more than one smart home data set because of inherent differences between labeling techniques. In this paper we investigate a data-driven approach to creating an activity taxonomy from sensor data found in disparate smart home datasets. We investigate how the resulting taxonomy can help analyze the relationship between classes of activities. We also analyze how the taxonomy can be used to scale activity recognition to a large number of activity classes and training datasets. We describe our approach and evaluate it on 34 smart home datasets. The results of the evaluation indicate that the hierarchical modeling can reduce training time while maintaining accuracy of the learned model. <s> BIB018 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Automated monitoring and the recognition of activities of daily living (ADLs) is a key challenge in ambient-assisted living (AAL) for the assistance of the elderly. Within this context, a formal approach may provide a means to fill the gap between the low-level observations acquired by sensing devices and the high-level concepts that are required for the recognition of human activities. We describe a system named ARA (Automated Recognizer of ADLs) that exploits propositional temporal logic and model checking to support automated real-time recognition of ADLs within a smart environment. The logic is shown to be expressive enough for the specification of realistic patterns of ADLs in terms of basic actions detected by a sensorized environment. The online model checking engine is shown to be capable of processing a stream of detected actions in real time. The effectiveness and viability of the approach are evaluated within the context of a smart kitchen, where different types of ADLs are repeatedly performed. 
<s> BIB019 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> A major challenge of ubiquitous computing resides in the acquisition and modelling of rich and heterogeneous context data, among which, ongoing human activities at different degrees of granularity. In a previous work, we advocated the use of probabilistic description logics (DLs) in a multilevel activity recognition framework. In this paper, we present an in-depth study of activity modeling and reasoning within that framework, as well as an experimental evaluation with a large real-world dataset. Our solution allows us to cope with the uncertain nature of ontological descriptions of activities, while exploiting the expressive power and inference tools of the OWL 2 language. Targeting a large dataset of real human activities, we developed a probabilistic ontology modeling nearly 150 activities and actions of daily living. Experiments with a prototype implementation of our framework confirm the viability of our solution. <s> BIB020 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Business Process Management (BPM) is the art and science of how work should be performed in an organization in order to ensure consistent outputs and to take advantage of improvement opportunities, e.g. reducing costs, execution times or error rates. Importantly, BPM is not about improving the way individual activities are performed, but rather about managing entire chains of events, activities and decisions that ultimately produce added value for an organization and its customers. This textbook encompasses the entire BPM lifecycle, from process identification to process monitoring, covering along the way process modelling, analysis, redesign and automation. Concepts, methods and tools from business management, computer science and industrial engineering are blended into one comprehensive and inter-disciplinary approach. The presentation is illustrated using the BPMN industry standard defined by the Object Management Group and widely endorsed by practitioners and vendors worldwide. In addition to explaining the relevant conceptual background, the book provides dozens of examples, more than 100 hands-on exercises many with solutions as well as numerous suggestions for further reading. The textbook is the result of many years of combined teaching experience of the authors, both at the undergraduate and graduate levels as well as in the context of professional training. Students and professionals from both business management and computer science will benefit from the step-by-step style of the textbook and its focus on fundamental concepts and proven methods. Lecturers will appreciate the class-tested format and the additional teaching material available on the accompanying website fundamentals-of-bpm.org. <s> BIB021 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Activity recognition has received increasing attention from the machine learning community. Of particular interest is the ability to recognize activities in real time from streaming data, but this presents a number of challenges not faced by traditional offline approaches. Among these challenges is handling the large amount of data that does not belong to a predefined class. In this paper, we describe a method by which activity discovery can be used to identify behavioral patterns in observational data. 
Discovering patterns in the data that does not belong to a predefined class aids in understanding this data and segmenting it into learnable classes. We demonstrate that activity discovery not only sheds light on behavioral patterns, but it can also boost the performance of recognition algorithms. We introduce this partnership between activity discovery and online activity recognition in the context of the CASAS smart home project and validate our approach using CASAS data sets. <s> BIB022 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> The increasing aging population in the coming decades will result in many complications for society and in particular for the healthcare system due to the shortage of healthcare professionals and healthcare facilities. To remedy this problem, researchers have pursued developing remote monitoring systems and assisted living technologies by utilizing recent advances in sensor and networking technology, as well as in the data mining and machine learning fields. In this article, we report on our fully automated approach for discovering and monitoring patterns of daily activities. Discovering and tracking patterns of daily activities can provide unprecedented opportunities for health monitoring and assisted living applications, especially for older adults and individuals with mental disabilities. Previous approaches usually rely on preselected activities or labeled data to track and monitor daily activities. In this article, we present a fully automated approach by discovering natural activity patterns and their variations in real-life data. We will show how our activity discovery component can be integrated with an activity recognition component to track and monitor various daily activity patterns. We also provide an activity visualization component to allow caregivers to visually observe and examine the activity patterns using a user-friendly interface. We validate our algorithms using real-life data obtained from two apartments during a three-month period. <s> BIB023 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> P(X|H)P(H) P(X) <s> Recognition of activities of daily living (ADLs) is an enabling technology for several ubiquitous computing applications. In this field, most activity recognition systems rely on supervised learning methods to extract activity models from labeled datasets. An inherent problem of that approach consists in the acquisition of comprehensive activity datasets, which is expensive and may violate individuals' privacy. The problem is particularly challenging when focusing on complex ADLs, which are characterized by large intra- and inter-personal variability of execution. In this paper, we propose an unsupervised method to recognize complex ADLs exploiting the semantics of activities, context data, and sensing devices. Through ontological reasoning, we derive semantic correlations among activities and sensor events. By matching observed sensor events with semantic correlations, a statistical reasoner formulates initial hypotheses about the occurred activities. Those hypotheses are refined through probabilistic reasoning, exploiting semantic constraints derived from the ontology. Extensive experiments with real-world datasets show that the accuracy of our unsupervised method is comparable to the one of state of the art supervised approaches. <s> BIB024
Bayes' theorem states that P(H|X) = P(X|H)P(H)/P(X), where H denotes the hypothesis (e.g., a certain activity is happening) and X represents the set of evidences (i.e., the current values of context objects). As calculating P(X|H) can be very expensive, different assumptions can be made to simplify the computation. For example, naïve Bayes (NB) is a simple classification model that assumes the n single evidences composing X to be conditionally independent given the situational hypothesis (i.e., the occurrence of one does not affect the probability of the others); this assumption can be formalized as P(X|H) = ∏_{k=1}^{n} P(x_k|H). The inference process under the naïve Bayes assumption chooses the situation with the maximum a posteriori (MAP) probability. Hidden Markov Models (HMMs) represent one of the most widely adopted formalisms to model the transitions between different states of the environment or of humans. Here hidden states represent situations and/or activities to be recognized, whereas observable states represent sensor measurements. An HMM is a statistical model in which the system being modeled is assumed to be a Markov process with unobserved states. An HMM is composed of a finite set of hidden states (e.g., s_{t-1}, s_t, and s_{t+1}) and of observations (e.g., o_{t-1}, o_t, and o_{t+1}) that are generated from the states. An HMM is built on three assumptions: (i) each state depends only on its immediate predecessor; (ii) each observation variable depends only on the current state; and (iii) observations are independent from each other. In an HMM, there are three types of probability distributions: (i) prior probabilities over the initial state p(s_0); (ii) state transition probabilities p(s_t|s_{t-1}); and (iii) observation emission probabilities p(o_t|s_t). A drawback of using a standard HMM is its lack of hierarchical modeling for representing human activities. To deal with this issue, several HMM alternatives have been proposed, such as hierarchical and abstract HMMs. In a hierarchical HMM, each of the hidden states can be considered as an autonomous probabilistic model on its own; that is, each hidden state is also a hierarchical HMM. HMMs generally assume that all observations are independent, which can miss long-term trends and complex relationships. Conditional Random Fields (CRFs), on the other hand, eliminate the independence assumptions by modeling the conditional probability of a particular sequence of hypotheses, Y, given a sequence of observations, X; succinctly, CRFs model P(Y|X). Modeling the conditional probability of the label sequence rather than the joint probability of both labels and observations P(X, Y), as done by HMMs, allows CRFs to incorporate complex features of the observations.
Figure 2. Examples of HMM and CRF models. Ellipses represent states (i.e., activities). Rectangles represent sensors. Arrows between states are state transition probabilities (i.e., the probability of moving from one state to another), whereas those from states to sensors are emission probabilities (i.e., the probability that in a specific state a sensor has a specific value). (a) HMM model example. Picture inspired by CASAS-HMM BIB011 and CASAS-HAM BIB006 . (b) CRF model example. Picture inspired by KROS-CRF BIB005 .
Another statistical tool often employed is represented by Markov Chains (MCs), which are based on the assumption that the probability of an event is conditional only on the previous event.
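To make the HMM formulation above concrete, the following is a minimal Python sketch of Viterbi decoding over the three distributions just listed (prior, transition, emission). The activities, sensor events, and probability values are illustrative placeholders, not taken from any of the surveyed systems.

```python
import numpy as np

# Minimal sketch of HMM decoding (Viterbi) for activity recognition.
# Activities (hidden states), sensor events (observations) and all
# probabilities are illustrative placeholders.
activities = ["sleeping", "cooking", "watching_tv"]      # hidden states s_t
sensor_events = ["bed_pressure", "stove_on", "tv_on"]    # observation symbols o_t

prior = np.array([0.6, 0.2, 0.2])          # p(s_0)
transition = np.array([                     # p(s_t | s_{t-1})
    [0.8, 0.1, 0.1],
    [0.1, 0.7, 0.2],
    [0.1, 0.2, 0.7],
])
emission = np.array([                       # p(o_t | s_t)
    [0.9, 0.05, 0.05],
    [0.05, 0.9, 0.05],
    [0.05, 0.05, 0.9],
])

def viterbi(obs_indices):
    """Return the most likely activity sequence for a list of observation indices."""
    n_states, T = len(activities), len(obs_indices)
    delta = np.zeros((T, n_states))          # best log-probability ending in each state
    psi = np.zeros((T, n_states), dtype=int) # back-pointers
    delta[0] = np.log(prior) + np.log(emission[:, obs_indices[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = delta[t - 1] + np.log(transition[:, j])
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] + np.log(emission[j, obs_indices[t]])
    # Backtrack the best path.
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return [activities[i] for i in reversed(path)]

obs = [sensor_events.index(e) for e in ["bed_pressure", "stove_on", "stove_on", "tv_on"]]
print(viterbi(obs))   # e.g., ['sleeping', 'cooking', 'cooking', 'watching_tv']
```

In an activity recognition setting, the decoded hidden-state sequence corresponds to the most likely sequence of activities explaining the observed sensor events.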
Even if they are very effective for some applications like capacity planning, in the smart space context they are quite limited because they deal with deterministic transactions, and modeling an intelligent environment with this formalism results in a very complicated model. Support Vector Machines (SVMs) allow the classification of both linear and non-linear data. An SVM uses a non-linear mapping to transform the original training data into a higher-dimensional space. Within this new space, it searches for the optimal linear separating hyperplane that separates the training data of one class from another. With an appropriate non-linear mapping to a sufficiently high dimension, data from two classes can always be separated. SVMs are good at handling large feature spaces since they employ overfitting protection, which does not necessarily depend on the number of features. Binary classifiers are built to distinguish activities. Due to their characteristics, SVMs are better suited to generating other kinds of models through machine learning than to directly modeling the smart environment. For instance, in BIB018 the authors use them, combined with naïve Bayes classifiers, to learn the activity model built on the hierarchical taxonomy formalism shown in Figure 3. Artificial Neural Networks (ANNs) are a sub-symbolic technique, originally inspired by biological neuron networks. They can automatically learn complex mappings and extract a non-linear combination of features. A neural network is composed of many artificial neurons that are linked together according to a specific network architecture. A neural classifier consists of an input layer, a hidden layer, and an output layer. Mappings between input and output features are represented by the composition of activation functions f at the hidden layer, which can be learned through a training process performed using gradient descent optimization methods or resilient backpropagation algorithms. Some techniques stem from data mining methods for market basket analysis (e.g., the Apriori algorithm ), which apply a windowing mechanism in order to transform the event/sensor log into what is called a database of transactions. Let I = {i_1, . . . , i_{n_E}} be a set of binary variables corresponding to sensor event types. A transaction is an assignment that binds a value to each of the variables in I, where the values 1 and 0 respectively denote whether a certain event did or did not happen during the considered window. A database of transactions T is a (usually ordered) sequence of transactions, each having a possibly empty set of properties (e.g., a timestamp). An item is an assignment of the form i_k = v with v ∈ {0, 1}. An itemset is an assignment covering a proper subset of the variables in I. An itemset C has support Supp_T(C) in the database of transactions T if a fraction Supp_T(C) of the transactions in the database contain C. The techniques following this strategy turn the input log into a database of transactions, each of them corresponding to a window. Given two different databases of transactions T_1 and T_2, the growth rate of an itemset C from T_1 to T_2 is defined as GR_{T_1→T_2}(C) = Supp_{T_2}(C)/Supp_{T_1}(C). Emerging patterns (EPs) are those itemsets showing a growth rate greater than a certain threshold ρ. The rationale behind this definition is that an itemset that has high support in its target class (database) and low support in the contrasting class can be seen as a strong signal for discovering the class of a test instance containing it.
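As an illustration of the transaction-based notions just introduced, the sketch below computes support and growth rate over two hypothetical databases of windows and flags emerging patterns; the sensor event names, the example windows, and the threshold ρ are invented for the example.

```python
from itertools import combinations

# Minimal sketch of support and growth-rate computation for emerging patterns.
# Each transaction marks which (hypothetical) sensor event types fired inside a window.
def support(itemset, transactions):
    """Fraction of transactions that contain every item of the itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def growth_rate(itemset, db1, db2):
    """Growth rate of an itemset from db1 to db2 (infinite if absent from db1)."""
    s1, s2 = support(itemset, db1), support(itemset, db2)
    return float("inf") if s1 == 0 else s2 / s1

# Illustrative windows for two activity classes ('cooking' vs 'cleaning').
cooking  = [{"stove_on", "fridge_open"}, {"stove_on", "cupboard_open"}, {"stove_on"}]
cleaning = [{"tap_on"}, {"cupboard_open", "tap_on"}, {"tap_on", "fridge_open"}]

rho = 2.0  # growth-rate threshold (an assumed value)
events = sorted(set().union(*cooking, *cleaning))
for size in (1, 2):
    for items in combinations(events, size):
        c = set(items)
        if support(c, cooking) > 0 and growth_rate(c, cleaning, cooking) > rho:
            print(c, "is an emerging pattern for 'cooking'")
```

Itemsets such as {"stove_on"} have high support in the 'cooking' windows and none in the 'cleaning' ones, so they emerge as strong discriminative signals for that class.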
Market basket analysis is a special case of affinity analysis that discovers co-occurrence relationships among purchased items within one or more transactions. Initial approaches to the development of context-aware systems able to recognize situations were based on predicate logic. Loke BIB001 introduced a PROLOG extension called LogicCAP; here the "in-situation" operator captures a common form of reasoning in context-aware applications, which is to ask if an entity E is in a given situation S (denoted as S* > E). In particular, a situation is defined as a set of constraints imposed on the outputs or readings that can be returned by sensors, i.e., if S is the current situation, we expect the sensors to return values satisfying some constraints associated with S. LogicCAP rules use backward chaining like PROLOG, but also utilize forward chaining in determining situations, i.e., a mix of backward and forward chaining is used in evaluating LogicCAP programs. The work introduces different reasoning techniques with situations, including selecting the best action to perform in a certain situation, understanding what situation a certain entity is in (or the most likely one), and defining relationships between situations. There are many approaches borrowed from information technology areas and adapted to smart environments. For instance, in BIB019 the authors use temporal logic and model checking to perform activity modeling and recognition. The proposed system is called ARA. A graphical representation of a model example adopted by this approach is shown in Figure 4; it shows how activities are composed of time-correlated states between consecutive actions. Ontologies (denoted as ONTO) represent the latest evolution of logic-based approaches and have increasingly gained attention as a generic, formal and explicit way to "capture and specify the domain knowledge with its intrinsic semantics through consensual terminology and formal axioms and constraints" BIB004 . They provide a formal way to represent sensor data, context, and situations as well-structured terminologies, which makes them understandable, shareable, and reusable by both humans and machines. A considerable amount of knowledge engineering effort is expected in constructing the knowledge base, while the inference is well supported by mature algorithms and rule engines. Some examples of using ontologies in identifying situations are given by BIB007 (later evolved in BIB020 BIB024 ). Instead of using ontologies to infer activities, they use ontologies to validate the results inferred by statistical techniques. The way an AmI system makes decisions on the actions can be compared to decision-making in AI agents. As an example, reflex agents with state, as introduced in , take as input the current state of the world and a set of Condition-Action rules to choose the action to be performed. Similarly, Augusto BIB002 introduces the concept of an Active DataBase (ADB) composed of Event-Condition-Action (ECA) rules. An ECA rule basically has the form "ON event IF condition THEN action", where conditions can take time into account; a minimal sketch of such a rule engine is given below. The first attempts to apply techniques taken from the business process management (BPM) BIB021 area were the employment of workflow specifications to anticipate user actions. A workflow is composed of a set of tasks related by qualitative and/or quantitative time relationships.
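The ECA scheme lends itself to a very small rule-engine sketch. The one below is a hypothetical illustration of the "ON event IF condition THEN action" form; the events, conditions, and actions are invented and do not correspond to rules from any surveyed system.

```python
from datetime import datetime

# Minimal sketch of an Event-Condition-Action (ECA) rule engine:
# "ON event IF condition THEN action". Rule contents are illustrative.
rules = [
    {
        "on": "bedroom_motion",
        "if": lambda ctx: 6 <= ctx["time"].hour <= 9,        # condition may involve time
        "then": lambda: print("raise the blinds"),
    },
    {
        "on": "front_door_open",
        "if": lambda ctx: ctx["nobody_home"],
        "then": lambda: print("send intrusion alert"),
    },
]

def handle(event, context):
    """Fire the action of every rule whose event matches and whose condition holds."""
    for rule in rules:
        if rule["on"] == event and rule["if"](context):
            rule["then"]()

handle("bedroom_motion", {"time": datetime(2024, 1, 1, 7, 30), "nobody_home": False})
```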
Authors in present a survey of techniques for temporal calculus (i.e., Allen's Temporal Logic and Point Algebra) and spatial calculus aiming at decision-making. The SPUBS system BIB012 automatically retrieves these workflows from sensor data. Table 3 shows, for each surveyed paper, information about RQ-B1.1 (Formalism), RQ-B1.2 (Readability Level) and RQ-B1.3 (Granularity); its recoverable entries include: BIB022 BIB014 BIB023 (HMM, M, Activity); CASAS-HMM BIB011 (Activity); CASAS-HMMNBCRF BIB016 (Activity); KROS-CRF BIB005 (Activity); REIG-SITUATION BIB008 (Situation); LES-PHI BIB003 (Activity); BUE-WISPS BIB009 (Activity); BIB015 (Activity); FLEURY-MCSVM BIB013 (Activity); CHEN-ONT BIB017 (ONTO, H, Activity); RIB-PROB BIB020 BIB024 (Action/Activity); NUG-EVFUS BIB010 (Action).
Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Model Construction <s> Pervasive systems must offer an open, extensible, and evolving portfolio of services which integrate sensor data from a diverse range of sources. The core challenge is to provide appropriate and consistent adaptive behaviours for these services in the face of huge volumes of sensor data exhibiting varying degrees of precision, accuracy and dynamism. Situation identification is an enabling technology that resolves noisy sensor data and abstracts it into higher-level concepts that are interesting to applications. We provide a comprehensive analysis of the nature and characteristics of situations, discuss the complexities of situation identification, and review the techniques that are most popularly used in modelling and inferring situations from sensor data. We compare and contrast these techniques, and conclude by identifying some of the open research opportunities in the area. <s> BIB001 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Model Construction <s> Advancements in supporting fields have increased the likelihood that smart-home technologies will become part of our everyday environments. However, many of these technologies are brittle and do not adapt to the user's explicit or implicit wishes. Here, we introduce CASAS, an adaptive smart-home system that utilizes machine learning techniques to discover patterns in resident's daily activities and to generate automation polices that mimic these patterns. Our approach does not make any assumptions about the activity structure or other underlying model parameters but leaves it completely to our algorithms to discover the smart-home resident's patterns. Another important aspect of CASAS is that it can adapt to changes in the discovered patterns based on the resident implicit and explicit feedback and can automatically update its model to reflect the changes. In this paper, we provide a description of the CASAS technologies and the results of experiments performed on both synthetic and real-world data. <s> BIB002 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Model Construction <s> Many real-world applications that focus on addressing needs of a human, require information about the activities being performed by the human in real-time. While advances in pervasive computing have led to the development of wireless and non-intrusive sensors that can capture the necessary activity information, current activity recognition approaches have so far experimented on either a scripted or pre-segmented sequence of sensor events related to activities. In this paper we propose and evaluate a sliding window based approach to perform activity recognition in an on line or streaming fashion; recognizing activities as and when new sensor events are recorded. To account for the fact that different activities can be best characterized by different window lengths of sensor events, we incorporate the time decay and mutual information based weighting of sensor events within a window. Additional contextual information in the form of the previous activity and the activity of the previous window is also appended to the feature describing a sensor window. 
The experiments conducted to evaluate these techniques on real-world smart home datasets suggests that combining mutual information based weighting of sensor events and adding past contextual information to the feature leads to best performance for streaming activity recognition. <s> BIB003
Modeling formalisms in the literature can be roughly divided into specification-based and learning-based BIB001 . Research in the field of AmI started when few kinds of sensors were available and the relationships between sensor data and the underlying phenomena were easy to establish. Specification-based approaches represent hand-made expert knowledge as logic rules and apply reasoning engines to infer conclusions and make decisions from sensor data. These techniques have evolved in recent years to take uncertainty into account. The growing availability of different kinds of sensors has made hand-made models impractical to produce. To solve this problem, learning-based methods employ techniques from machine learning and data mining. Specification-based models are usually more human-readable (even though some basic experience with formal logic languages is required), but creating them is very expensive in terms of human resources. Most learning-based models are instead represented using mathematical and statistical formalisms (e.g., HMMs), which makes them difficult for experts to revise and for final users to understand. These motivations are at the basis of the research on human-readable, automatically inferred formalisms. Learning-based techniques can be divided into supervised and unsupervised techniques. The former expect the input to be previously labeled according to the required output function; hence, they require a considerable effort for organizing input data in terms of training examples, even though active learning can be employed to ease this task. Unsupervised techniques (or weakly supervised ones, i.e., those where only a part of the dataset is labeled) can be used to face this challenge, but a limited number of works is available in the literature. Unsupervised techniques for AmI knowledge modeling can be useful for two further reasons. Firstly, as stated in the introduction, sometimes knowledge should not be considered as a static resource; instead it should be updated at runtime without a direct intervention of the users BIB002 ; hence, updating techniques should rely on labeling of sensor data as little as possible. Moreover, unsupervised techniques may also prove useful in supporting passive users, such as guests, who do not participate in the configuration of the system but should benefit from its services as well. Performing learning or mining from sequences of sensor measurements poses the issue of how to group events into aggregates of interest (i.e., actions, activities, situations). Even with supervised learning techniques, if labeling is provided at learning time, the same does not hold at runtime, where a stream of events is fed into the AmI system. Even though most proposed approaches in the AmI literature (especially supervised learning ones) ignore this aspect, windowing mechanisms are needed. As described in BIB003 , the different windowing methods can be classified into three main classes, namely, explicit, time-based, and event-based; a sketch of the latter two is given after the list.
• Explicit segmentation. In this case, the stream is divided into chunks, usually following some kind of classifier previously trained on a training data set. Unfortunately, as the training data set simply cannot cover all the possible combinations of sensor events, this kind of approach usually results in single activities being divided into multiple chunks and multiple activities being merged.
• Time-based windowing. This approach divides the entire sequence into equal-size time intervals.
This is a good approach when dealing with data obtained from sources (e.g., sensors like accelerometers and gyroscopes) that operate continuously in time. As can easily be argued, the choice of the window size is fundamental, especially in the case of sporadic sensors, as a small window size may not contain enough information to be useful, whereas a large window size could merge multiple activities when bursts of sensor events occur.
• Sensor event-based windowing. Here the stream is divided into windows containing an equal number of consecutive sensor events, so that each window carries the same amount of sensor information regardless of its duration.
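The following sketch contrasts time-based and sensor event-based windowing on a toy sensor log; the timestamps, event names, window width, and window size are all illustrative assumptions.

```python
from datetime import datetime, timedelta

# Minimal sketch of time-based and sensor event-based windowing of a sensor log.
# Events and timestamps are illustrative placeholders.
log = [
    (datetime(2024, 1, 1, 7, 0, 5),  "bed_pressure_off"),
    (datetime(2024, 1, 1, 7, 0, 40), "bedroom_motion"),
    (datetime(2024, 1, 1, 7, 2, 10), "kitchen_motion"),
    (datetime(2024, 1, 1, 7, 2, 30), "stove_on"),
    (datetime(2024, 1, 1, 7, 9, 0),  "stove_off"),
]

def time_windows(events, width=timedelta(minutes=2)):
    """Group events into fixed-width, non-overlapping time intervals."""
    windows, current, start = [], [], events[0][0]
    for ts, ev in events:
        while ts >= start + width:          # advance to the interval containing ts
            windows.append(current)
            current, start = [], start + width
        current.append(ev)
    windows.append(current)
    return windows

def event_windows(events, size=2):
    """Group events into windows containing a fixed number of sensor events."""
    return [[ev for _, ev in events[i:i + size]] for i in range(0, len(events), size)]

print(time_windows(log))    # sporadic sensors leave some intervals empty
print(event_windows(log))   # every window holds the same number of events
```

Running the sketch shows the trade-off discussed above: with sporadic sensors, fixed time intervals can be empty or can merge events from distinct activities, whereas event-based windows always contain the same number of events but may span very different durations.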
Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Abstract Many existing rule learning systems are computationally expensive on large noisy datasets. In this paper we evaluate the recently-proposed rule learning algorithm IREP on a large and diverse collection of benchmark problems. We show that while IREP is extremely efficient, it frequently gives error rates higher than those of C4.5 and C4.5rules. We then propose a number of modifications resulting in an algorithm RIPPERk that is very competitive with C4.5rules with respect to error rates, but much more efficient on large samples. RIPPERk obtains error rates lower than or equivalent to C4.5rules on 22 of 37 benchmark problems, scales nearly linearly with the number of training examples, and can efficiently process noisy datasets containing hundreds of thousands of examples. <s> BIB001 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> We are developing a personal activity recognition system that is practical, reliable, and can be incorporated into a variety of health-care related applications ranging from personal fitness to elder care. To make our system appealing and useful, we require it to have the following properties: (i) data only from a single body location needed, and it is not required to be from the same point for every user; (ii) should work out of the box across individuals, with personalization only enhancing its recognition abilities; and (iii) should be effective even with a cost-sensitive subset of the sensors and data features. In this paper, we present an approach to building a system that exhibits these properties and provide evidence based on data for 8 different activities collected from 12 different subjects. Our results indicate that the system has an accuracy rate of approximately 90% while meeting our requirements. We are now developing a fully embedded version of our system based on a cell-phone platform augmented with a Bluetooth-connected sensor board. <s> BIB002 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy to install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its annotation is described and made available to the community. Through a number of experiments we show how the hidden Markov model and conditional random fields perform in recognizing activities. We achieve a timeslice accuracy of 95.6% and a class accuracy of 79.4%. <s> BIB003 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> This paper presents a systematic design approach for constructing neural classifiers that are capable of classifying human activities using a triaxial accelerometer. The philosophy of our design approach is to apply a divide-and-conquer strategy that separates dynamic activities from static activities preliminarily and recognizes these two different types of activities separately. Since multilayer neural networks can generate complex discriminating surfaces for recognition problems, we adopt neural networks as the classifiers for activity recognition. An effective feature subset selection approach has been developed to determine significant feature subsets and compact classifier structures with satisfactory accuracy. 
Experimental results have successfully validated the effectiveness of the proposed recognition scheme. <s> BIB004 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Advances in technology have provided the ability to equip the home environment with a layer of technology to provide a truly 'Smart Home'. These homes offer improved living conditions and levels of independence for the population who require support with both physical and cognitive functions. At the core of the Smart Home is a collection of sensing technology which is used to monitor the behaviour of the inhabitant and their interactions with the environment. A variety of different sensors measuring light, sound, contact and motion provide sufficient multi-dimensional information about the inhabitant to support the inference of activity determination. A problem which impinges upon the success of any information analysis is the fact that sensors may not always provide reliable information due to either faults, operational tolerance levels or corrupted data. In this paper we address the fusion process of contextual information derived from uncertain sensor data. Based on a series of information handling techniques, most notably the Dempster-Shafer theory of evidence and the Equally Weighted Sum operator, evidential contextual information is represented, analysed and merged to achieve a consensus in automatically inferring activities of daily living for inhabitants in Smart Homes. Within the paper we introduce the framework within which uncertainty can be managed and demonstrate the effects that the number of sensors in conjunction with the reliability level of each sensor can have on the overall decision making process. <s> BIB005 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> This paper addresses the problem of learning situation models for providing context-aware services. Context for modeling human behavior in a smart environment is represented by a situation model describing environment, users, and their activities. A framework for acquiring and evolving different layers of a situation model in a smart environment is proposed. Different learning methods are presented as part of this framework: role detection per entity, unsupervised extraction of situations from multimodal data, supervised learning of situation representations, and evolution of a predefined situation model with feedback. The situation model serves as frame and support for the different methods, permitting to stay in an intuitive declarative framework. The proposed methods have been integrated into a whole system for smart home environment. The implementation is detailed, and two evaluations are conducted in the smart home environment. The obtained results validate the proposed approach. <s> BIB006 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> We explore a dense sensing approach that uses RFID sensor network technology to recognize human activities. In our setting, everyday objects are instrumented with UHF RFID tags called WISPs that are equipped with accelerometers. RFID readers detect when the objects are used by examining this sensor data, and daily activities are then inferred from the traces of object use via a Hidden Markov Model. In a study of 10 participants performing 14 activities in a model apartment, our approach yielded recognition rates with precision and recall both in the 90% range. 
This compares well to recognition with a more intrusive short-range RFID bracelet that detects objects in the proximity of the user; this approach saw roughly 95% precision and 60% recall in the same study. We conclude that RFID sensor networks are a promising approach for indoor activity monitoring. <s> BIB007 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Advancements in supporting fields have increased the likelihood that smart-home technologies will become part of our everyday environments. However, many of these technologies are brittle and do not adapt to the user's explicit or implicit wishes. Here, we introduce CASAS, an adaptive smart-home system that utilizes machine learning techniques to discover patterns in resident's daily activities and to generate automation polices that mimic these patterns. Our approach does not make any assumptions about the activity structure or other underlying model parameters but leaves it completely to our algorithms to discover the smart-home resident's patterns. Another important aspect of CASAS is that it can adapt to changes in the discovered patterns based on the resident implicit and explicit feedback and can automatically update its model to reflect the changes. In this paper, we provide a description of the CASAS technologies and the results of experiments performed on both synthetic and real-world data. <s> BIB008 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> The pervasive sensing technologies found in smart homes offer unprecedented opportunities for providing health monitoring and assistance to individuals experiencing difficulties living independently at home. A primary challenge that needs to be tackled to meet this need is the ability to recognize and track functional activities that people perform in their own homes and everyday settings. In this paper, we look at approaches to perform real-time recognition of Activities of Daily Living. We enhance other related research efforts to develop approaches that are effective when activities are interrupted and interleaved. To evaluate the accuracy of our recognition algorithms we assess them using real data collected from participants performing activities in our on-campus smart apartment testbed. <s> BIB009 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> By 2050, about one third of the French population will be over 65. Our laboratory's current research focuses on the monitoring of elderly people at home, to detect a loss of autonomy as early as possible. Our aim is to quantify criteria such as the international activities of daily living (ADL) or the French Autonomie Gerontologie Groupes Iso-Ressources (AGGIR) scales, by automatically classifying the different ADL performed by the subject during the day. A Health Smart Home is used for this. Our Health Smart Home includes, in a real flat, infrared presence sensors (location), door contacts (to control the use of some facilities), temperature and hygrometry sensor in the bathroom, and microphones (sound classification and speech recognition). A wearable kinematic sensor also informs postural transitions (using pattern recognition) and walk periods (frequency analysis). 
This data collected from the various sensors are then used to classify each temporal frame into one of the ADL that was previously acquired (seven activities: hygiene, toilet use, eating, resting, sleeping, communication, and dressing/undressing). This is done using support vector machines. We performed a 1-h experimentation with 13 young and healthy subjects to determine the models of the different activities, and then we tested the classification algorithm (cross validation) with real data. <s> BIB010 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Monitoring daily activities of a person has many potential benefits in pervasive computing. These include providing proactive support for the elderly and monitoring anomalous behaviors. A typical approach in existing research on activity detection is to construct sequence-based models of low-level activity features based on the order of object usage. However, these models have poor accuracy, require many parameters to estimate, and demand excessive computational effort. Many other supervised learning approaches have been proposed but they all suffer from poor scalability due to the manual labeling involved in the training process. In this paper, we simplify the activity modeling process by relying on the relevance weights of objects as the basis of activity discrimination rather than on sequence information. For each activity, we mine the web to extract the most relevant objects according to their normalized usage frequency. We develop a KeyExtract algorithm for activity recognition and two algorithms, MaxGap and MaxGain, for activity segmentation with linear time complexities. Simulation results indicate that our proposed algorithms achieve high accuracy in the presence of different noise levels indicating their good potential in real-world deployment. <s> BIB011 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Recognizing human activities from sensor readings has recently attracted much research interest in pervasive computing due to its potential in many applications, such as assistive living and healthcare. This task is particularly challenging because human activities are often performed in not only a simple (i.e., sequential), but also a complex (i.e., interleaved or concurrent) manner in real life. Little work has been done in addressing complex issues in such a situation. The existing models of interleaved and concurrent activities are typically learning-based. Such models lack of flexibility in real life because activities can be interleaved and performed concurrently in many different ways. In this paper, we propose a novel pattern mining approach to recognize sequential, interleaved, and concurrent activities in a unified framework. We exploit Emerging Pattern-a discriminative pattern that describes significant changes between classes of data-to identify sensor features for classifying activities. Different from existing learning-based approaches which require different training data sets for building activity models, our activity models are built upon the sequential activity trace only and can be applied to recognize both simple and complex activities. We conduct our empirical studies by collecting real-world traces, evaluating the performance of our algorithm, and comparing our algorithm with static and temporal models. 
Our results demonstrate that, with a time slice of 15 seconds, we achieve an accuracy of 90.96 percent for sequential activity, 88.1 percent for interleaved activity, and 82.53 percent for concurrent activity. <s> BIB012 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> The machine learning and pervasive sensing technologies found in smart homes offer unprecedented opportunities for providing health monitoring and assistance to individuals experiencing difficulties living independently at home. In order to monitor the functional health of smart home residents, we need to design technologies that recognize and track activities that people normally perform as part of their daily routines. Although approaches do exist for recognizing activities, the approaches are applied to activities that have been preselected and for which labeled training data are available. In contrast, we introduce an automated approach to activity tracking that identifies frequent activities that naturally occur in an individual's routine. With this capability, we can then track the occurrence of regular activities to monitor functional health and to detect changes in an individual's patterns and lifestyle. In this paper, we describe our activity mining and tracking approach, and validate our algorithms on data collected in physical smart environments. <s> BIB013 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> This paper considers scalable and unobtrusive activity recognition using on-body sensing for context awareness in wearable computing. Common methods for activity recognition rely on supervised learning requiring substantial amounts of labeled training data. Obtaining accurate and detailed annotations of activities is challenging, preventing the applicability of these approaches in real-world settings. This paper proposes new annotation strategies that substantially reduce the required amount of annotation. We explore two learning schemes for activity recognition that effectively leverage such sparsely labeled data together with more easily obtainable unlabeled data. Experimental results on two public data sets indicate that both approaches obtain results close to fully supervised techniques. The proposed methods are robust to the presence of erroneous labels occurring in real-world annotation data. <s> BIB014 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Intelligent Environments are expected to act proactively, anticipating the user's needs and preferences. To do that, the environment must somehow obtain knowledge of those need and preferences, but unlike current computing systems, in Intelligent Environments, the user ideally should be released from the burden of providing information or programming any device as much as possible. Therefore, automated learning of a user's most common behaviors becomes an important step towards allowing an environment to provide highly personalized services. In this article, we present a system that takes information collected by sensors as a starting point and then discovers frequent relationships between actions carried out by the user. The algorithm developed to discover such patterns is supported by a language to represent those patterns and a system of interaction that provides the user the option to fine tune their preferences in a natural way, just by speaking to the system. 
<s> BIB015 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> This paper introduces a knowledge-driven approach to real-time, continuous activity recognition based on multisensor data streams in smart homes. The approach goes beyond the traditional data-centric methods for activity recognition in three ways. First, it makes extensive use of domain knowledge in the life cycle of activity recognition. Second, it uses ontologies for explicit context and activity modeling and representation. Third and finally, it exploits semantic reasoning and classification for activity inferencing, thus enabling both coarse-grained and fine-grained activity recognition. In this paper, we analyze the characteristics of smart homes and Activities of Daily Living (ADL) upon which we built both context and ADL ontologies. We present a generic system architecture for the proposed knowledge-driven approach and describe the underlying ontology-based recognition process. Special emphasis is placed on semantic subsumption reasoning algorithms for activity recognition. The proposed approach has been implemented in a function-rich software system, which was deployed in a smart home research laboratory. We evaluated the proposed approach and the developed system through extensive experiments involving a number of various ADL use scenarios. An average activity recognition rate of 94.44 percent was achieved and the average recognition runtime per recognition operation was measured as 2.5 seconds. <s> BIB016 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Smart home activity recognition systems can learn generalized models for common activities that span multiple environment settings and resident types. <s> BIB017 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Real-time activity recognition in body sensor networks is an important and challenging task. In this paper, we propose a real-time, hierarchical model to recognize both simple gestures and complex activities using a wireless body sensor network. In this model, we first use a fast and lightweight algorithm to detect gestures at the sensor node level, and then propose a pattern based real-time algorithm to recognize complex, high-level activities at the portable device level. We evaluate our algorithms over a real-world dataset. The results show that the proposed system not only achieves good performance (an average utility of 0.81, an average accuracy of 82.87%, and an average real-time delay of 5.7 seconds), but also significantly reduces the network's communication cost by 60.2%. <s> BIB018 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> A major challenge of ubiquitous computing resides in the acquisition and modelling of rich and heterogeneous context data, among which, ongoing human activities at different degrees of granularity. In a previous work, we advocated the use of probabilistic description logics (DLs) in a multilevel activity recognition framework. In this paper, we present an in-depth study of activity modeling and reasoning within that framework, as well as an experimental evaluation with a large real-world dataset. Our solution allows us to cope with the uncertain nature of ontological descriptions of activities, while exploiting the expressive power and inference tools of the OWL 2 language. 
Targeting a large dataset of real human activities, we developed a probabilistic ontology modeling nearly 150 activities and actions of daily living. Experiments with a prototype implementation of our framework confirm the viability of our solution. <s> BIB019 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Activity recognition has received increasing attention from the machine learning community. Of particular interest is the ability to recognize activities in real time from streaming data, but this presents a number of challenges not faced by traditional offline approaches. Among these challenges is handling the large amount of data that does not belong to a predefined class. In this paper, we describe a method by which activity discovery can be used to identify behavioral patterns in observational data. Discovering patterns in the data that does not belong to a predefined class aids in understanding this data and segmenting it into learnable classes. We demonstrate that activity discovery not only sheds light on behavioral patterns, but it can also boost the performance of recognition algorithms. We introduce this partnership between activity discovery and online activity recognition in the context of the CASAS smart home project and validate our approach using CASAS data sets. <s> BIB020 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> The increasing aging population in the coming decades will result in many complications for society and in particular for the healthcare system due to the shortage of healthcare professionals and healthcare facilities. To remedy this problem, researchers have pursued developing remote monitoring systems and assisted living technologies by utilizing recent advances in sensor and networking technology, as well as in the data mining and machine learning fields. In this article, we report on our fully automated approach for discovering and monitoring patterns of daily activities. Discovering and tracking patterns of daily activities can provide unprecedented opportunities for health monitoring and assisted living applications, especially for older adults and individuals with mental disabilities. Previous approaches usually rely on preselected activities or labeled data to track and monitor daily activities. In this article, we present a fully automated approach by discovering natural activity patterns and their variations in real-life data. We will show how our activity discovery component can be integrated with an activity recognition component to track and monitor various daily activity patterns. We also provide an activity visualization component to allow caregivers to visually observe and examine the activity patterns using a user-friendly interface. We validate our algorithms using real-life data obtained from two apartments during a three-month period. <s> BIB021 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Many real-world applications that focus on addressing needs of a human, require information about the activities being performed by the human in real-time. While advances in pervasive computing have led to the development of wireless and non-intrusive sensors that can capture the necessary activity information, current activity recognition approaches have so far experimented on either a scripted or pre-segmented sequence of sensor events related to activities. 
In this paper we propose and evaluate a sliding window based approach to perform activity recognition in an on line or streaming fashion; recognizing activities as and when new sensor events are recorded. To account for the fact that different activities can be best characterized by different window lengths of sensor events, we incorporate the time decay and mutual information based weighting of sensor events within a window. Additional contextual information in the form of the previous activity and the activity of the previous window is also appended to the feature describing a sensor window. The experiments conducted to evaluate these techniques on real-world smart home datasets suggests that combining mutual information based weighting of sensor events and adding past contextual information to the feature leads to best performance for streaming activity recognition. <s> BIB022 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Container <s> Recognition of activities of daily living (ADLs) is an enabling technology for several ubiquitous computing applications. In this field, most activity recognition systems rely on supervised learning methods to extract activity models from labeled datasets. An inherent problem of that approach consists in the acquisition of comprehensive activity datasets, which is expensive and may violate individuals' privacy. The problem is particularly challenging when focusing on complex ADLs, which are characterized by large intra- and inter-personal variability of execution. In this paper, we propose an unsupervised method to recognize complex ADLs exploiting the semantics of activities, context data, and sensing devices. Through ontological reasoning, we derive semantic correlations among activities and sensor events. By matching observed sensor events with semantic correlations, a statistical reasoner formulates initial hypotheses about the occurred activities. Those hypotheses are refined through probabilistic reasoning, exploiting semantic constraints derived from the ontology. Extensive experiments with real-world datasets show that the accuracy of our unsupervised method is comparable to the one of state of the art supervised approaches. <s> BIB023
Figure 7. Ontologies used in CHEN-ONT BIB016 to model the aspects of smart spaces. (a) Ontology example used to model the Smart Environment domain. Picture inspired by CHEN-ONT BIB016 . (b) Ontology example used to model the correlations between activities. Picture taken from CHEN-ONT BIB016 . (c) Ontology example used to model the sensor properties. Picture taken from CHEN-ONT BIB016 .
In RIB-PROB BIB019 BIB023 , the multilevel model is obtained by combining ontologies and/or grouping elements of the previous levels. The Atomic Gestures model is obtained by just considering log elements. The Manipulative Gestures are computed considering the ontology and its axioms. The Simple Activities are obtained by grouping Manipulative Gestures. Finally, for Complex Activities, ontologies are involved. Figure 5c represents a portion of the resulting ontology model. The dashed lines represent the super/sub relations between classes. The individual classes have relations that describe dependencies. Moreover, Description Logic is employed to support ontological reasoning, which also allows checking the consistency of the knowledge base and inferring additional information from registered facts. In NUG-EVFUS BIB005 , the interrelationships between sensors, context and activities are represented as a hierarchical network of ontologies (see Figure 8). A particular activity can be performed in or associated with a certain room of the house; this information is modeled with an ontology of the network.
Figure 8. Hierarchical ontology structure adopted in NUG-EVFUS BIB005 to model activities in a smart space.
In CASAS-HMM BIB009 , each activity is performed in a protected environment, and the resulting log is recorded and labeled. Then, an HMM model is built upon this dataset in a supervised way. The resulting model is shown in Figure 2a: observations (squares) model the sensor triggering, while the states (circles) model the activities that can generate the observations according to certain probabilities. The goal is to infer the activities by processing the observations. This recognition technique supports single-user data, but the problem of modeling multiple users is introduced. The same team, in CASAS-SVM BIB022 , employs SVMs. In this second work, the authors propose an interesting analysis of the different windowing strategies to be employed to gather measurements into observation vectors. Finally, in CASAS-HMMNBCRF BIB017 , experiments are performed with the same methodology, adding CRF and NB modeling techniques to the analysis. In WANG-EP BIB012 , Emerging Patterns are mined from a log of sequential activities and the resulting set composes the model. In KROS-CRF BIB003 , the model is trained from a labeled dataset. The log is divided into 60-s-long segments and each segment is labeled. The dataset is composed of multi-day logs: one day is used for testing the approach and the remaining ones for training the models. The resulting model is an undirected graph as in Figure 2b. In REIG-SITUATION BIB006 , an SVM model, built on statistical values extracted from the measurements of a given user, is used for classifying the roles. Then, this information, combined with other statistically extracted features, is used in the training of the HMM that models the situations. In YANG-NN BIB004 , the input vector contains the features to consider and the output vector the classes (activities). The back-propagation learning algorithm is used for training the ANNs.
Three neural networks are built on labeled logs: a pre-classifier and two classifiers; static activities and dynamic activities are modeled with separate ANNs. The structure of the neural classifier consists of an input layer, a hidden layer and an output layer. In LES-PHI BIB002 , given the maximum number of features the activity recognition system can use, the system automatically chooses the most discriminative subset of features and uses them to learn an ensemble of discriminative static classifiers for the activities that need to be recognized. Then, the class probabilities estimated by the static classifiers are used as inputs to HMMs. In BUE-WISPS BIB007 , the users are asked to perform activities, and the resulting log is used for training an HMM. In FLEURY-MCSVM BIB010 , the classes of the classifier model the activities. Binary classifiers are built to distinguish activities through pairwise combination selection; the number of SVMs for n activities is n − 1. The features used are statistics computed from the measurements. The algorithm proposed in CASAS-DISCOREC BIB020 BIB013 BIB021 aims to improve the performance of activity recognition algorithms by trying to reduce the part of the dataset that has not been labeled during data acquisition. In particular, for the unlabeled section of the log, the authors employ a pattern mining technique in order to discover, in an unsupervised manner, human activity patterns. A pattern is defined here as a set of events where the order is not specified and events can be repeated multiple times. Patterns are mined by iteratively compressing the sensor log. The data mining method used for activity discovery is completely unsupervised, without the need to manually segment the dataset or choose windows, and it also allows the discovery of interwoven activities. Starting from singleton patterns, at each step the proposed technique compresses the log by exploiting them and iteratively reprocesses the compressed log to recognize new patterns and further compress the log. When it is difficult to further compress the log, each remaining pattern represents an activity class. The discovered labels are employed to train HMM, BN and SVM models following the same approach as in the supervised works of the same group. In CASAS-HAM BIB008 , the sensor log is considered completely unlabeled. Here temporal patterns (patterns with the addition of temporal information) are discovered similarly to CASAS-DISCOREC BIB020 BIB013 BIB021 and are used for structuring a tree of Markov Chains. Different activations at different timestamps generate new paths in the tree. Depending on temporal constraints, a sub-tree containing Markov Chains at the leaves that model activities is generated. A technique to update the model is also proposed. Here the goal of the model is the actuation of target devices more than recognition. The authors of STIK-MISVM BIB014 introduce a weakly supervised approach where two strategies are proposed for assigning labels to unlabeled data. The first strategy is based on the miSVM algorithm: miSVM is an SVM with two levels, the first for assigning labels to unlabeled data, the second for applying recognition to the activity logs. The second strategy is called graph-based label propagation, where the nodes of the graph are vectors of features. The nodes are connected by weighted edges, whose weights represent the similarity between nodes. Once the entire training set is labeled, an SVM is trained for activity recognition.
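A minimal sketch of the graph-based label propagation idea is given below: feature vectors are nodes, RBF similarities are edge weights, and the known labels are clamped while the others are propagated. The data, the kernel width, and the two-class setting are assumptions made for illustration; this is not the exact formulation used in STIK-MISVM.

```python
import numpy as np

# Minimal sketch of graph-based label propagation over feature vectors.
# Feature vectors, labels, and the RBF width are illustrative placeholders.
X = np.array([[0.1, 0.2], [0.15, 0.22], [0.9, 0.8], [0.88, 0.82], [0.5, 0.5]])
labels = np.array([0, -1, 1, -1, -1])    # -1 marks unlabeled windows

# Edge weights: similarity between feature vectors (RBF kernel).
sigma = 0.3
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * sigma ** 2))
np.fill_diagonal(W, 0.0)
P = W / W.sum(axis=1, keepdims=True)     # row-normalised propagation matrix

n_classes = 2
labeled = labels >= 0
Y = np.zeros((len(X), n_classes))
Y[np.where(labeled)[0], labels[labeled]] = 1.0
Y0 = Y.copy()

for _ in range(100):                     # propagate until (approximately) stable
    Y = P @ Y
    Y[labeled] = Y0[labeled]             # clamp the known labels

print(Y.argmax(axis=1))                  # inferred activity class per window
```

Once every window has received a label in this way, a standard supervised classifier (e.g., an SVM) can be trained on the fully labeled set, as described above.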
In AUG-APUBS BIB015 , the system generates ECA rules by considering the typology of the sensors involved in the measurements and the time relations between their activations. APUBS makes clear the difference between different categories of sensors:
• Type O sensors, installed in objects, thus providing direct information about the actions of the users.
• Type C sensors, providing information about the environment (e.g., temperature, day of the week).
• Type M sensors, providing information about the position of the user inside the house (e.g., in the bedroom).
Events in the event part of an ECA rule always come from the sets O and M. Conditions are usually expressed in terms of the values provided by Type C sensors. Finally, the action part contains only Type O sensors. The set of Type O sensors is called mainSeT. The first step of the APUBS method consists of discovering, for each sensor in the mainSeT, the set associatedSeT of O and M sensors that are potentially related to it as triggering events. The method employed is Apriori for association rules ; the only difference is that the possible association rules X ⇒ Y are limited to those where the cardinality of both X and Y is unitary and Y only contains events contained in mainSeT. Obviously, this step requires a window size value to be specified in order to create transactions. As a second step, the technique discovers the temporal relationships between the events in associatedSeT and those in mainSeT. During this step, non-significant relations are pruned. As a third step, the conditions for the ECA rules are mined with a JRip classifier BIB001 . In WANG-HIER BIB018 , starting from the raw log, the authors use a K-Medoids clustering method to discover template gestures. This method finds the k representative instances which best represent the clusters. Based on these templates, gestures are identified by applying a template-matching algorithm: Dynamic Time Warping (DTW), a classic dynamic-programming-based algorithm to match two time series with temporal dynamics (a minimal sketch is given after Table 4 below). In PALMES-OBJREL BIB011 , the KeyExtract algorithm mines keys from the web that best identify activities. For each activity, the set of most important keys is mined. In the recognition phase, an unsupervised segmentation based on heuristics is performed. Table 4 shows a quick recap of this section and answers questions RQ-B2.1 (Technique Class), RQ-B2.2 (Multi-user Support) and RQ-B2.3 (Additional Labeling Requirement); its recoverable entries include: BIB015 (Unsupervised, S, N); CASAS-HAM BIB008 (Unsupervised, S, N); WANG-HIER BIB018 (Unsupervised, S, N); PALMES-OBJREL BIB011 (Unsupervised, S, N).
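As referenced above, the following is a minimal sketch of Dynamic Time Warping used as a template-matching distance; the template and the observed trace are invented one-dimensional examples, not data from WANG-HIER.

```python
import numpy as np

# Minimal sketch of Dynamic Time Warping (DTW), the template-matching step
# used after K-Medoids clustering in the gesture-recognition pipeline above.
def dtw_distance(a, b):
    """DTW distance between two 1-D sequences using absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.array([0.0, 0.5, 1.0, 0.5, 0.0])        # gesture template (cluster medoid)
observed = np.array([0.0, 0.4, 0.9, 1.0, 0.6, 0.1])   # new, slightly stretched gesture
print(dtw_distance(template, observed))               # small value -> likely the same gesture
```

A new gesture would be assigned to the template (medoid) with the smallest DTW distance, which tolerates the stretching and compression in time that raw Euclidean comparison cannot handle.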
Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Related Work <s> Pervasive systems must offer an open, extensible, and evolving portfolio of services which integrate sensor data from a diverse range of sources. The core challenge is to provide appropriate and consistent adaptive behaviours for these services in the face of huge volumes of sensor data exhibiting varying degrees of precision, accuracy and dynamism. Situation identification is an enabling technology that resolves noisy sensor data and abstracts it into higher-level concepts that are interesting to applications. We provide a comprehensive analysis of the nature and characteristics of situations, discuss the complexities of situation identification, and review the techniques that are most popularly used in modelling and inferring situations from sensor data. We compare and contrast these techniques, and conclude by identifying some of the open research opportunities in the area. <s> BIB001 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Related Work <s> Development of context-aware applications is inherently complex. These applications adapt to changing context information: physical context, computational context, and user context/tasks. Context information is gathered from a variety of sources that differ in the quality of information they produce and that are often failure prone. The pervasive computing community increasingly understands that developing context-aware applications should be supported by adequate context information modelling and reasoning techniques. These techniques reduce the complexity of context-aware applications and improve their maintainability and evolvability. In this paper we discuss the requirements that context modelling and reasoning techniques should meet, including the modelling of a variety of context information types and their relationships, of situations as abstractions of context information facts, of histories of context information, and of uncertainty of context information. This discussion is followed by a description and comparison of current context modelling and reasoning techniques. <s> BIB002 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Related Work <s> It is essential for environments that aim at helping people in their daily life that they have some sort of Ambient Intelligence. Learning the preferences and habits of users then becomes an important step in allowing a system to provide such personalized services. Thus far, the exploration of these issues by the scientific community has not been extensive, but interest in the area is growing. Ambient Intelligence environments have special characteristics that have to be taken into account during the learning process. We identify these characteristics and use them to highlight the strengths and weaknesses of developments so far, providing direction to encourage further development in this specific area of Ambient Intelligence. <s> BIB003 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Related Work <s> Research on sensor-based activity recognition has, recently, made significant progress and is attracting growing attention in a number of disciplines and application domains. However, there is a lack of high-level overview on this topic that can inform related communities of the research state of the art. 
In this paper, we present a comprehensive survey to examine the development and current status of various aspects of sensor-based activity recognition. We first discuss the general rationale and distinctions of vision-based and sensor-based activity recognition. Then, we review the major approaches and methods associated with sensor-based activity monitoring, modeling, and recognition from which strengths and weaknesses of those approaches are highlighted. We make a primary distinction in this paper between data-driven and knowledge-driven approaches, and use this distinction to structure our survey. We also discuss some promising directions for future research. <s> BIB004 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Related Work <s> Ambient Intelligence (AmI) is a new paradigm in information technology aimed at empowering people's capabilities by the means of digital environments that are sensitive, adaptive, and responsive to human needs, habits, gestures, and emotions. This futuristic vision of daily environment will enable innovative human-machine interactions characterized by pervasive, unobtrusive and anticipatory communications. Such innovative interaction paradigms make ambient intelligence technology a suitable candidate for developing various real life solutions, including in the health care domain. This survey will discuss the emergence of ambient intelligence (AmI) techniques in the health care domain, in order to provide the research community with the necessary background. We will examine the infrastructure and technology required for achieving the vision of ambient intelligence, such as smart environments and wearable medical devices. We will summarize of the state of the art artificial intelligence methodologies used for developing AmI system in the health care domain, including various learning techniques (for learning from user interaction), reasoning techniques (for reasoning about users' goals and intensions) and planning techniques (for planning activities and interactions). We will also discuss how AmI technology might support people affected by various physical or mental disabilities or chronic disease. Finally, we will point to some of the successful case studies in the area and we will look at the current and future challenges to draw upon the possible future research paths. <s> BIB005 </s> Surveying Human Habit Modeling and Mining Techniques in Smart Spaces <s> Related Work <s> The technology of Smart Homes (SH), as an instance of ambient assisted living technologies, is designed to assist the homes’ residents accomplishing their daily-living activities and thus having a better quality of life while preserving their privacy. A SH system is usually equipped with a collection of inter-related software and hardware components to monitor the living space by capturing the behaviour of the resident and understanding his activities. By doing so the system can inform about risky situations and take actions on behalf of the resident to his satisfaction. The present survey will address technologies and analysis methods and bring examples of the state of the art research studies in order to provide background for the research community. In particular, the survey will expose infrastructure technologies such as sensors and communication platforms along with artificial intelligence techniques used for modeling and recognizing activities. A brief overview of approaches used to develop Human–Computer interfaces for SH systems is given. 
The survey also highlights the challenges and research trends in this area. <s> BIB006
The literature contains several surveys attempting to classify works in the field of smart spaces and ambient intelligence. Papers are presented in chronological order. None of the reported surveys clearly states how the reviewed papers were selected. The authors in BIB003 follow an approach similar to this work, i.e., they separately analyze the different phases of the life-cycle of the models. Unlike our work, for the model construction phase they focus on classes of learning algorithms instead of analyzing the specific works, and specification-based methods are not taken into account. The survey BIB002 focuses on logical formalisms to represent ambient intelligence contexts and to reason about them. The analyzed approaches are solely specification-based, and the survey concentrates on the reasoning aspect, which is not the focus of our work. The work [50] is an extensive analysis of methods employed in ambient intelligence; it analyzes the different methods separately without clearly defining a taxonomy. The authors in BIB001 introduce a clear taxonomy of approaches in the field of context recognition (and, more generally, situation identification), and their survey embraces the vast majority of the approaches proposed in the area. Similarly, BIB004 is a complete work covering not only activity recognition but also fine-grained action recognition. Unlike our work, neither survey focuses on the life-cycle of models. The authors in BIB005 review the possible applications of ambient intelligence in the specific case of health and elderly care. That work is orthogonal to the present paper and to the other surveys reported here, as it is less focused on the pros and cons of each approach and more on applications and future perspectives. A manifesto of the applications and principles behind smart spaces and ambient intelligence is presented in . As in BIB005 , the authors in BIB006 start from the health care application scenario to describe possible applications; however, their work goes into more detail on the employed techniques, with particular focus on classical machine learning methods.
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> BACKGROUND ::: In the era of evidence based medicine, with systematic reviews as its cornerstone, adequate quality assessment tools should be available. There is currently a lack of a systematically developed and evaluated tool for the assessment of diagnostic accuracy studies. The aim of this project was to combine empirical evidence and expert opinion in a formal consensus method to develop a tool to be used in systematic reviews to assess the quality of primary studies of diagnostic accuracy. ::: ::: ::: METHODS ::: We conducted a Delphi procedure to develop the quality assessment tool by refining an initial list of items. Members of the Delphi panel were experts in the area of diagnostic research. The results of three previously conducted reviews of the diagnostic literature were used to generate a list of potential items for inclusion in the tool and to provide an evidence base upon which to develop the tool. ::: ::: ::: RESULTS ::: A total of nine experts in the field of diagnostics took part in the Delphi procedure. The Delphi procedure consisted of four rounds, after which agreement was reached on the items to be included in the tool which we have called QUADAS. The initial list of 28 items was reduced to fourteen items in the final tool. Items included covered patient spectrum, reference standard, disease progression bias, verification bias, review bias, clinical review bias, incorporation bias, test execution, study withdrawals, and indeterminate results. The QUADAS tool is presented together with guidelines for scoring each of the items included in the tool. ::: ::: ::: CONCLUSIONS ::: This project has produced an evidence based quality assessment tool to be used in systematic reviews of diagnostic accuracy studies. Further work to determine the usability and validity of the tool continues. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> An adjustable chiropractic diagnostic apparatus 910) for measuring the postural deficiencies of a patient (100) wherein, the apparatus (10) comprises a posterior (30) and a lateral (31) framework member each having a single vertical alignment cord (60)(60') and a plurality of horizontal alignment cords (61)(61')(62)(62') which are connected together by body marker members (65) that are translatable to specific locations on the patient's body once the vertical (60) and lateral (61)(62) alignment cords have been aligned with other portions of the users body to produce a recordable record of the patient's postural deficiencies. <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> Diagnostic tests are often much less rigorously evaluated than new drugs. It is time to ensure that the harms and benefits of new tests are fully understood ::: ::: No international consensus exists on the methods for assessing diagnostic tests. Previous recommendations stress that studies of diagnostic tests should match the type of diagnostic question.1 2 Once the specificity and sensitivity of a test have been established, the final question is whether tested patients fare better than similar untested patients. This usually requires a randomised trial. Few tests are currently evaluated in this way. 
In this paper, we propose an architecture for research into diagnostic tests that parallels the established phases in drug research. ::: ::: We have divided studies of diagnostic tests into four phases (box). We use research on brain natriuretic peptide for diagnosing heart failure as an illustrative example.2 However, the architecture is applicable to a wide range of tests including laboratory techniques, diagnostic imaging, pathology, evaluation of disability, electrodiagnostic tests, and endoscopy. ::: ::: In drug research, phase I studies deal with pharmacokinetics, pharmacodynamics, and safe doses.3 Phase I diagnostic studies are done to determine the range of results obtained with a newly developed test in healthy people. For example, after development of a test to measure brain natriuretic peptide in human plasma, phase I studies were done to establish the normal range of values in healthy participants.4 5 ::: ::: ![][1] ::: ::: The harms and benefits of diagnostic tests needs evaluating—just as drugs do ::: ::: Credit: GUSTO/SPL ::: ::: Diagnostic phase I studies must be large enough to examine the potential influence of characteristics such as sex, age, time of day, physical activity, and exposure to drugs. The studies are relatively quick, cheap, and easy to conduct, but they may occasionally raise ethical problems—for example, finding abnormal results in an apparently healthy person.6 … ::: ::: [1]: /embed/graphic-1.gif <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> The analysis of subgroups is often used as a way to glean additional information from data sets. The strengths and weaknesses of this approach and new Journal policies concerning the reporting of subgroup analyses are discussed in this article. <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> BackgroundOur objective was to develop an instrument to assess the methodological quality of systematic reviews, building upon previous tools, empirical evidence and expert consensus.MethodsA 37-item assessment tool was formed by combining 1) the enhanced Overview Quality Assessment Questionnaire (OQAQ), 2) a checklist created by Sacks, and 3) three additional items recently judged to be of methodological importance. This tool was applied to 99 paper-based and 52 electronic systematic reviews. Exploratory factor analysis was used to identify underlying components. The results were considered by methodological experts using a nominal group technique aimed at item reduction and design of an assessment tool with face and content validity.ResultsThe factor analysis identified 11 components. From each component, one item was selected by the nominal group. The resulting instrument was judged to have face and content validity.ConclusionA measurement tool for the 'assessment of multiple systematic reviews' (AMSTAR) was developed. The tool consists of 11 items and has good face and content validity for measuring the methodological quality of systematic reviews. Additional studies are needed with a focus on the reproducibility and construct validity of AMSTAR, before strong recommendations can be made on its use. 
<s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> Purpose: To determine the quality of reporting of diagnostic accuracy studies before and after the Standards for Reporting of Diagnostic Accuracy (STARD) statement publication and to determine whether there is a difference in the quality of reporting by comparing STARD (endorsing) and non-STARD (nonendorsing) journals. Materials and Methods: Diagnostic accuracy studies were identified by hand searching six STARD and six non-STARD journals for 2001, 2002, 2004, and 2005. Diagnostic accuracy studies (n = 240) were assessed by using a checklist of 13 of 25 STARD items. The change in the mean total score on the modified STARD checklist was evaluated with analysis of covariance. The change in proportion of times that each individual STARD item was reported before and after STARD statement publication was evaluated (χ2 tests for linear trend). Results: With mean total score as dependent factor, analysis of covariance showed that the interaction between the two independent factors (STARD or non-STARD journal and... <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> BACKGROUND ::: Missed or delayed diagnoses are a common but understudied area in patient safety research. To better understand the types, causes, and prevention of such errors, we surveyed clinicians to solicit perceived cases of missed and delayed diagnoses. ::: ::: ::: METHODS ::: A 6-item written survey was administered at 20 grand rounds presentations across the United States and by mail at 2 collaborating institutions. Respondents were asked to report 3 cases of diagnostic errors and to describe their perceived causes, seriousness, and frequency. ::: ::: ::: RESULTS ::: A total of 669 cases were reported by 310 clinicians from 22 institutions. After cases without diagnostic errors or lacking sufficient details were excluded, 583 remained. Of these, 162 errors (28%) were rated as major, 241 (41%) as moderate, and 180 (31%) as minor or insignificant. The most common missed or delayed diagnoses were pulmonary embolism (26 cases [4.5% of total]), drug reactions or overdose (26 cases [4.5%]), lung cancer (23 cases [3.9%]), colorectal cancer (19 cases [3.3%]), acute coronary syndrome (18 cases [3.1%]), breast cancer (18 cases [3.1%]), and stroke (15 cases [2.6%]). Errors occurred most frequently in the testing phase (failure to order, report, and follow-up laboratory results) (44%), followed by clinician assessment errors (failure to consider and overweighing competing diagnosis) (32%), history taking (10%), physical examination (10%), and referral or consultation errors and delays (3%). ::: ::: ::: CONCLUSIONS ::: Physicians readily recalled multiple cases of diagnostic errors and were willing to share their experiences. Using a new taxonomy tool and aggregating cases by diagnosis and error type revealed patterns of diagnostic failures that suggested areas for improvement. Systematic solicitation and analysis of such errors can identify potential preventive strategies. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> In this paper we set out what we consider to be a set of best practices for statisticians in the reporting of pharmaceutical industry-sponsored clinical trials. 
We make eight recommendations covering: author responsibilities and recognition; publication timing; conflicts of interest; freedom to act; full author access to data; trial registration and independent review. These recommendations are made in the context of the prominent role played by statisticians in the design, conduct, analysis and reporting of pharmaceutical sponsored trials and the perception of the reporting of these trials in the wider community. <s> BIB008 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> In 2003, the QUADAS tool for systematic reviews of diagnostic accuracy studies was developed. Experience, anecdotal reports, and feedback suggested areas for improvement; therefore, QUADAS-2 was developed. This tool comprises 4 domains: patient selection, index test, reference standard, and flow and timing. Each domain is assessed in terms of risk of bias, and the first 3 domains are also assessed in terms of concerns regarding applicability. Signalling questions are included to help judge risk of bias. The QUADAS-2 tool is applied in 4 phases: summarize the review question, tailor the tool and produce review-specific guidance, construct a flow diagram for the primary study, and judge bias and applicability. This tool will allow for more transparent rating of bias and applicability of primary diagnostic accuracy studies. <s> BIB009 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> The treatment policy of chronic myeloid leukemia (CML), particularly with tyrosine kinase inhibitors, has been influenced by several recent studies that were well designed and rapidly performed, but their interpretation is of some concern because different end points and methodologies were used. To understand and compare the results of the previous and future studies and to translate their conclusion into clinical practice, there is a need for common definitions and methods for analyses of CML studies. A panel of experts was appointed by the European LeukemiaNet with the aim of developing a set of definitions and recommendations to be used in design, analyses, and reporting of phase 3 clinical trials in this disease. This paper summarizes the consensus of the panel on events and major end points of interest in CML. It also focuses on specific issues concerning the intention-to-treat principle and longitudinal data analyses in the context of long-term follow-up. The panel proposes that future clinical trials follow these recommendations. <s> BIB010 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> In 2003, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was published in 13 biomedical journals.1 ,2 Diagnostic accuracy studies provide estimates of a test's ability to discriminate between patients with and without a predefined condition, by comparing the test results against a clinical reference standard. The STARD initiative was developed in response to accumulating evidence of poor methodological quality and poor reporting among test accuracy studies in the prior years.3 ,4 The STARD checklist contains 25 items which invite authors and reviewers to verify that critical information about the study is included in the study report. 
In addition, a flow chart that specifies the number of included and excluded patients and characterises the flow of participants through the study is strongly recommended. Since its launch, the STARD checklist has been adopted by over 200 biomedical journals (http://www.stard-statement.org/). ::: ::: Over the past 20 years, reporting guidelines have been developed and evaluated in many different fields of research. Although a modest increase in reporting quality is sometimes noticed in the years following the introduction of such guidelines,5 ,6 improvements in adherence tend to be slow.7 This makes it difficult to make statements about the impact of such guidelines. For STARD, there has been some controversy around its effect.8 While one study noticed a small increase in reporting quality of diagnostic accuracy studies shortly after the introduction of STARD,9 another study could not confirm this.10 ::: ::: Systematic reviews can provide more precise and more generalisable estimates of effect. A recently published systematic review evaluated adherence to several reporting guidelines in different fields of research, but STARD was not among the evaluated guidelines.11 To fill this gap, we systematically reviewed all the studies that aimed to investigate diagnostic accuracy studies’ adherence to … <s> BIB011 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> AIMS ::: Diagnostic accuracy studies determine the clinical value of non-invasive cardiac imaging tests. The 'STAndards for the Reporting of Diagnostic accuracy studies' (STARD) were published in 2003 to improve the quality of study reporting. We aimed to assess the reporting quality of cardiac computed tomography (CCT), single positron emission computed tomography (SPECT), and cardiac magnetic resonance (CMR) diagnostic accuracy studies; to evaluate the impact of STARD; and to investigate the relationships between reporting quality, journal impact factor, and study citation index. ::: ::: ::: METHODS AND RESULTS ::: We randomly generated six groups of 50 diagnostic accuracy studies: 'CMR 1995-2002', 'CMR 2004-11', 'CCT 1995-2002', 'CCT 2004-11', 'SPECT 1995-2002', and 'SPECT 2004-11'. The 300 studies were double-read by two blinded reviewers and reporting quality determined by % adherence to the 25 STARD criteria. Reporting quality increased from 65.3% before STARD to 74.1% after (P = 0.003) in CMR studies and from 61.6 to 79.0% (P < 0.001) in CCT studies. SPECT studies showed no significant change: 71.9% before and 71.5% after STARD (P = 0.92). Journals advising authors to refer to STARD had significantly higher impact factors than those that did not (P = 0.03), and journals with above-median impact factors published studies of significantly higher reporting quality (P < 0.001). Since STARD, citation index has not significantly increased (P = 0.14), but, after adjustment for impact factor, reporting quality continues to increase by ∼1.5% each year. ::: ::: ::: CONCLUSION ::: Reporting standards for diagnostic accuracy studies of non-invasive cardiac imaging are at most satisfactory and have improved since the introduction of STARD. Adherence to STARD should be mandatory for authors of diagnostic accuracy studies. <s> BIB012 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> Diagnostic errors have emerged as a serious patient safety problem but they are hard to detect and complex to define. 
At the research summit of the 2013 Diagnostic Error in Medicine 6th International Conference, we convened a multidisciplinary expert panel to discuss challenges in defining and measuring diagnostic errors in real-world settings. In this paper, we synthesize these discussions and outline key research challenges in operationalizing the definition and measurement of diagnostic error. Some of these challenges include 1) difficulties in determining error when the disease or diagnosis is evolving over time and in different care settings, 2) accounting for a balance between underdiagnosis and overaggressive diagnostic pursuits, and 3) determining disease diagnosis likelihood and severity in hindsight. We also build on these discussions to describe how some of these challenges can be addressed while conducting research on measuring diagnostic error. <s> BIB013 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> The first major study of the quality of statistical reporting in the biomedical literature was published in 1966 (Schor and Karten, 1966). Since then, dozens of similar studies have been published, every one of which found that large proportions of articles contain errors in the application, analysis, interpretation, or reporting of statistics or in the design or conduct of research (see, for example, Altman, 1991; Avram et al., 1985; Bakker and Wicherts, 2011; Gardner et al., 1983; Glantz, 1980; Godfrey, 1985; Gore et al., 1977; Kurichi and Sonnad, 2006; Lionel and Herxheimer, 1970; Murray, 1991; Nagele, 2003; Neville et al., 2006; Pocock et al., 1987; Scales et al., 2005; White, 1979; Yancy, 1990). Further, large proportions of these errors are serious enough to call the authors' conclusions into question (Glantz, 1980; Murray, 1991; Yancy, 1990). The problem is made worse by the fact that most of these studies are of the world's leading peer-reviewed general medical and specialty journals. Although errors have been found in more complex statistical procedures (Burton and Altman, 2004; Mackinnon, 2010; Schwarzer et al., 2000), paradoxically, many errors are in basic, not advanced, statistical methods (George, 1985). Perhaps advanced methods are suggested by consulting statisticians, who then competently perform the analyses, but it is also true that authors are far more likely to use only elementary statistical methods, if they use any at all (Emerson and Colditz, 1985; George, 1985; Golden et al., 1996; Lee et al., 2004). Still, articles with even major errors continue to pass editorial and peer review and to be published in leading journals. The truth is that the problem of poor statistical reporting is long-standing, widespread, potentially serious, concerns mostly basic statistics, and yet is largely unsuspected by most readers of the biomedical literature (Lang and Secic, 2006). More than 30 years ago, O'Fallon and colleagues recommended that ''Standards governing the content and format of statistical aspects should be developed to guide authors in the preparation of manuscripts'' (O'Fallon et al., 1978). Despite the fact that this call has since been echoed by several others (Altman and Bland, 1991; Altman et al., 1983; Hayden, 1983; Murray, 1991; Pocock et al., 1987; Shott, 1985), most journals have still not included in their Instructions for Authors more than a paragraph or two about reporting statistical methods (Bailar and Mosteller, 1988).
However, given that many statistical errors concern basic statistics, a comprehensive — and comprehensible — set of reporting guidelines might improve how statistical analyses are documented. In light of the above, we present here a set of statistical reporting guidelines suitable for medical journals to include in their Instructions for Authors. These guidelines tell authors, journal editors, and reviewers how to report basic statistical methods and results. Although these … This paper was originally published in: Smart P, Maisonneuve H, Polderman A (eds). Science Editors' Handbook, European Association of Science Editors, 2013. Reproduced with kind permission as part of a series of Classic Methods papers. An introductory Commentary is available at http://dx.doi.org/10.1016/j.ijnurstu.2014.09.007. <s> BIB014 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> Prediction models are developed to aid health-care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health-care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org). <s> BIB015 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> Abstract To perform a systematic review assessing accuracy and completeness of diagnostic studies of procalcitonin (PCT) for early-onset neonatal sepsis (EONS) using the Standards for Reporting of Diagnostic Accuracy (STARD) initiative. EONS, diagnosed during the first 3 days of life, remains a common and serious problem. Increased PCT is a potentially useful diagnostic marker of EONS, but reports in the literature are contradictory. There are several possible explanations for the divergent results including the quality of studies reporting the clinical usefulness of PCT in ruling in or ruling out EONS.
We systematically reviewed PubMed, Scopus, and the Cochrane Library databases up to October 1, 2014. Studies were eligible for inclusion in our review if they provided measures of PCT accuracy for diagnosing EONS. A data extraction form based on the STARD checklist and adapted for neonates with EONS was used to appraise the quality of the reporting of included studies. We found 18 articles (1998–2014) fulfilling our eligibility criteria which were included in the final analysis. Overall, the results of our analysis showed that the quality of studies reporting diagnostic accuracy of PCT for EONS was suboptimal leaving ample room for improvement. Information on key elements of design, analysis, and interpretation of test accuracy were frequently missing. Authors should be aware of the STARD criteria before starting a study in this field. We welcome stricter adherence to this guideline. Well-reported studies with appropriate designs will provide more reliable information to guide decisions on the use and interpretations of PCT test results in the management of neonates with EONS. <s> BIB016 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> OBJECTIVE ::: To determine the rate with which diagnostic test accuracy studies that are published in a general radiology journal adhere to the Standards for Reporting of Diagnostic Accuracy Studies (STARD) 2015, and to explore the relationship between adherence rate and citation rate while avoiding confounding by journal factors. ::: ::: ::: MATERIALS AND METHODS ::: All eligible diagnostic test accuracy studies that were published in the Korean Journal of Radiology in 2011-2015 were identified. Five reviewers assessed each article for yes/no compliance with 27 of the 30 STARD 2015 checklist items (items 28, 29, and 30 were excluded). The total STARD score (number of fulfilled STARD items) was calculated. The score of the 15 STARD items that related directly to the Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-2 was also calculated. The number of times each article was cited (as indicated by the Web of Science) after publication until March 2016 and the article exposure time (time in months between publication and March 2016) were extracted. ::: ::: ::: RESULTS ::: Sixty-three articles were analyzed. The mean (range) total and QUADAS-2-related STARD scores were 20.0 (14.5-25) and 11.4 (7-15), respectively. The mean citation number was 4 (0-21). Citation number did not associate significantly with either STARD score after accounting for exposure time (total score: correlation coefficient = 0.154, p = 0.232; QUADAS-2-related score: correlation coefficient = 0.143, p = 0.266). ::: ::: ::: CONCLUSION ::: The degree of adherence to STARD 2015 was moderate for this journal, indicating that there is room for improvement. When adjusted for exposure time, the degree of adherence did not affect the citation rate. <s> BIB017 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> Importance ::: While guidance on statistical principles for clinical trials exists, there is an absence of guidance covering the required content of statistical analysis plans (SAPs) to support transparency and reproducibility. 
::: ::: ::: Objective ::: To develop recommendations for a minimum set of items that should be addressed in SAPs for clinical trials, developed with input from statisticians, previous guideline authors, journal editors, regulators, and funders. ::: ::: ::: Design ::: Funders and regulators (n = 39) of randomized trials were contacted and the literature was searched to identify existing guidance; a survey of current practice was conducted across the network of UK Clinical Research Collaboration-registered trial units (n = 46, 1 unit had 2 responders) and a Delphi survey (n = 73 invited participants) was conducted to establish consensus on SAPs. The Delphi survey was sent to statisticians in trial units who completed the survey of current practice (n = 46), CONSORT (Consolidated Standards of Reporting Trials) and SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) guideline authors (n = 16), pharmaceutical industry statisticians (n = 3), journal editors (n = 9), and regulators (n = 2) (3 participants were included in 2 groups each), culminating in a consensus meeting attended by experts (N = 12) with representatives from each group. The guidance subsequently underwent critical review by statisticians from the surveyed trial units and members of the expert panel of the consensus meeting (N = 51), followed by piloting of the guidance document in the SAPs of 5 trials. ::: ::: ::: Findings ::: No existing guidance was identified. The registered trials unit survey (46 responses) highlighted diversity in current practice and confirmed support for developing guidance. The Delphi survey (54 of 73, 74% participants completing both rounds) reached consensus on 42% (n = 46) of 110 items. The expert panel (N = 12) agreed that 63 items should be included in the guidance, with an additional 17 items identified as important but may be referenced elsewhere. Following critical review and piloting, some overlapping items were combined, leaving 55 items. ::: ::: ::: Conclusions and Relevance ::: Recommendations are provided for a minimum set of items that should be addressed and included in SAPs for clinical trials. Trial registration, protocols, and statistical analysis plans are critically important in ensuring appropriate reporting of clinical trials. <s> BIB018 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> The quality of reporting practice guidelines is often poor, and there is no widely accepted guidance or standards for such reporting in health care. The international RIGHT (Reporting Items for practice Guidelines in HealThcare) Working Group was established to address this gap. The group followed an existing framework for developing guidelines for health research reporting and the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network approach. A checklist and an explanation and elaboration statement were developed. The RIGHT checklist includes 22 items that are considered essential for good reporting of practice guidelines: basic information (items 1 to 4), background (items 5 to 9), evidence (items 10 to 12), recommendations (items 13 to 15), review and quality assurance (items 16 and 17), funding and declaration and management of interests (items 18 and 19), and other information (items 20 to 22). 
The RIGHT checklist can assist developers in reporting guidelines, support journal editors and peer reviewers when considering guideline reports, and help health care practitioners understand and implement a guideline. <s> BIB019 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> BACKGROUND ::: Diagnostic tests are used frequently in the emergency department (ED) to guide clinical decision making and, hence, influence clinical outcomes. The Standards for Reporting of Diagnostic Accuracy (STARD) criteria were developed to ensure that diagnostic test studies are performed and reported to best inform clinical decision making in the ED. ::: ::: ::: OBJECTIVE ::: The objective was to determine the extent to which diagnostic studies published in emergency medicine journals adhered to STARD 2003 criteria. ::: ::: ::: METHODS ::: Diagnostic studies published in eight MEDLINE-listed, peer-reviewed, emergency medicine journals over a 5-year period were reviewed for compliance to STARD criteria. ::: ::: ::: RESULTS ::: A total of 12,649 articles were screened and 114 studies were included in our study. Twenty percent of these were randomly selected for assessment using STARD 2003 criteria. Adherence to STARD 2003 reporting standards for each criteria ranged from 8.7% adherence (criteria-reporting adverse events from performing index test or reference standard) to 100% (multiple criteria). ::: ::: ::: CONCLUSION ::: Just over half of STARD criteria are reported in more than 80% studies. As poorly reported studies may negatively impact their clinical usefulness, it is essential that studies of diagnostic test accuracy be performed and reported adequately. Future studies should assess whether studies have improved compliance with the STARD 2015 criteria amendment. <s> BIB020 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> Research is of little use if its results are not effectively communicated. Data visualised in tables (and graphs) are key components in any scientific report, but their design leaves much to be desired. This article focuses on table design, following two general principles: clear vision and clear understanding. Clear vision is achieved by maximising the signal to noise ratio. In a table, the signal is the data in the form of numbers, and the noise is the support structure necessary to interpret the numbers. Clear understanding is achieved when the story in the data is told effectively, through organisation of the data and use of text. These principles are illustrated by original and improved tables from recent publications. Two special cases are discussed separately: tables produced by the pharmaceutical industry (in clinical study reports and reports to data safety monitoring boards), and study flow diagrams as proposed by the Consolidated Standards of Reporting Trials and Preferred Reporting Items for Systematic Reviews and Meta-Analyses initiatives. <s> BIB021 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> ABSTRACT The number of published systematic reviews of studies of healthcare interventions has increased rapidly and these are used extensively for clinical and policy decisions. Systematic reviews are subject to a range of biases and increasingly include non‐randomised studies of interventions. It is important that users can distinguish high quality reviews. 
Many instruments have been designed to evaluate different aspects of reviews, but there are few comprehensive critical appraisal instruments. AMSTAR was developed to evaluate systematic reviews of randomised trials. In this paper, we report on the updating of AMSTAR and its adaptation to enable more detailed assessment of systematic reviews that include randomised or non‐randomised studies of healthcare interventions, or both. With moves to base more decisions on real world observational evidence we believe that AMSTAR 2 will assist decision makers in the identification of high quality systematic reviews, including those based on non‐randomised studies of healthcare interventions. <s> BIB022 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Introduction <s> PURPOSE ::: To evaluate adherence of diagnostic accuracy studies in imaging journals to the STAndards for Reporting of Diagnostic accuracy studies (STARD) 2015. The secondary objective was to identify differences in reporting for magnetic resonance imaging (MRI) studies. ::: ::: ::: MATERIALS AND METHODS ::: MEDLINE was searched for diagnostic accuracy studies published in imaging journals in 2016. Studies were evaluated for adherence to STARD 2015 (30 items, including expanded imaging specific subitems). Evaluation for differences in STARD adherence based on modality, impact factor, journal STARD adoption, country, subspecialty area, study design, and journal was performed. ::: ::: ::: RESULTS ::: Adherence (n = 142 studies) was 55% (16.6/30 items, SD = 2.2). Index test description (including imaging-specific subitems) and interpretation were frequently reported (>66% of studies); no important differences in reporting of individual items were identified for studies on MRI. Infrequently reported items (<33% of studies) included some critical to generalizability (study setting and location) and assessment of bias (blinding of assessor of reference standard). New STARD 2015 items: sample size calculation, protocol reporting, and registration were infrequently reported. Higher impact factor (IF) journals reported more items than lower IF journals (17.2 vs. 16 items; P = 0.001). STARD adopter journals reported more items than nonadopters (17.5 vs. 16.4 items; P = 0.01). Adherence varied between journals (P = 0.003). No variability for study design (P = 0.32), subspecialty area (P = 0.75), country (P = 0.28), or imaging modality (P = 0.80) was identified. ::: ::: ::: CONCLUSION ::: Imaging accuracy studies show moderate adherence to STARD 2015, with only minor differences for studies evaluating MRI. This baseline evaluation will guide targeted interventions towards identified deficiencies and help track progress in reporting. ::: ::: ::: LEVEL OF EVIDENCE ::: 1 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2018;47:523-544. <s> BIB023
An accurate and timely diagnosis with the smallest probability of misdiagnosis, missed diagnosis, or delayed diagnosis is crucial in the management of any disease BIB007 . The diagnosis is an evolving process, since both the disease (its likelihood and severity) and the diagnostic approaches evolve BIB013 . In clinical practice, it is essential to correctly identify the diagnostic test that is useful to a specific patient with a specific condition [4] . Over- or underdiagnosis translates into unnecessary treatment or no treatment and harms both patients and health-care systems BIB013 . The statistical methods used to assess a sign or a symptom in medicine depend on the phase of the study and are directly related to the research question and the design of the experiment (Table 1) BIB003 . A significant effort has been made to develop standards for reporting clinical studies, both for primary (e.g., case-control studies, cohort studies, and clinical trials) and secondary (e.g., systematic reviews and meta-analyses) research. This effort led to the publication of the four hundred twelve guidelines available on the EQUATOR Network as of April 20, 2019 [8] . Each guideline is accompanied by a short checklist describing the information that needs to be present in each section and also includes some requirements on the presentation of statistical results (what to report, e.g., mean (SD), where SD is the standard deviation, and how to report it, e.g., the number of decimals). These guidelines are also used as support in the critical evaluation of an article in evidence-based clinical practice. However, insufficient attention has been granted to the minimum set of items or methods and their quality in reporting the results. Different designs of experiments have received more attention, and several statistical guidelines, especially for clinical trials, were developed to standardize the content of the statistical analysis plan BIB018 , phase III clinical trials in myeloid leukemia BIB010 , pharmaceutical industry-sponsored clinical trials BIB008 , subgroup analysis BIB004 , and graphics and statistics for cardiology BIB021 . The SAMPL guidelines provide general principles for reporting statistical methods and results BIB014 . SAMPL recommends providing numbers with the appropriate degree of precision; the sample size; the numerator and denominator for percentages; mean (SD) (where SD = standard deviation) for approximately normally distributed data, and otherwise medians and interpercentile ranges; verification of the assumptions of statistical tests; the name of the test and whether it is one- or two-tailed; the significance level (α); P values, whether statistically significant or not; adjustment(s), if any, for multivariate analysis; the statistical package used in the analysis; missing data; the regression equation with the regression coefficients for each explanatory variable, the associated confidence intervals, and P values; and the models' goodness of fit (coefficient of determination) BIB014 . In regard to diagnostic tests, standards are available for reporting accuracy (QUADAS BIB001 , QUADAS-2 BIB009 , STARD BIB002 , and STARD 2015 ), diagnostic predictive models (TRIPOD BIB015 ), systematic reviews and meta-analyses (AMSTAR BIB005 and AMSTAR 2 BIB022 ), and recommendations and guidelines (AGREE , AGREE II , and RIGHT BIB019 ). These requirements highlight what to report and how to report it (by examples), with an emphasis on the design of the experiment, which is mandatory to assure the validity and reliability of the reported results.
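As a concrete, deliberately simplified illustration of the SAMPL advice on summarizing a continuous variable, the Python sketch below reports mean (SD) when the data look approximately normal and median [Q1; Q3] otherwise. SAMPL itself does not prescribe a particular normality check; the Shapiro-Wilk test, the 0.05 cut-off, the rounding, and the function name used here are our own illustrative assumptions.

```python
import numpy as np
from scipy import stats

def summarize_continuous(sample, alpha=0.05):
    """Summarize a continuous variable in the SAMPL spirit:
    mean (SD) if approximately normal, otherwise median [Q1; Q3].
    The normality test and thresholds are illustrative choices only.
    """
    x = np.asarray(sample, dtype=float)
    n = len(x)
    _, p = stats.shapiro(x)                      # Shapiro-Wilk normality test
    if p >= alpha:                               # no evidence against normality
        return f"n={n}, mean (SD) = {x.mean():.1f} ({x.std(ddof=1):.1f})"
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return f"n={n}, median [Q1; Q3] = {med:.1f} [{q1:.1f}; {q3:.1f}]"

print(summarize_continuous([5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7]))
```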
Several studies have been conducted to evaluate whether the available standards for reporting results are followed. The proportion of articles that adequately report accuracy has been rated from low BIB006 BIB011 BIB020 to satisfactory BIB012 , but not excellent, still leaving much room for improvement BIB016 BIB017 BIB023 . Diagnostic tests are frequently reported in the scientific literature, and clinicians must know what a good report looks like so that only the highest-quality information collected from the scientific literature is applied to decisions related to a particular patient. This review aims to present the most frequent statistical methods used in the evaluation of a diagnostic test by linking the statistical treatment of data with the phase of the evaluation and the clinical questions.
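For readers less familiar with the accuracy measures referred to throughout this Introduction, the sketch below computes the usual quantities from a 2x2 table of index-test results against a reference standard. It is a generic illustration rather than part of the review; the function name and the counts are hypothetical.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard accuracy measures from a 2x2 table (index test vs. reference standard)."""
    sensitivity = tp / (tp + fn)                 # test positive among diseased
    specificity = tn / (tn + fp)                 # test negative among non-diseased
    ppv = tp / (tp + fp)                         # positive predictive value
    npv = tn / (tn + fn)                         # negative predictive value
    lr_pos = sensitivity / (1 - specificity)     # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity     # negative likelihood ratio
    return {"Se": sensitivity, "Sp": specificity,
            "PPV": ppv, "NPV": npv, "LR+": lr_pos, "LR-": lr_neg}

# Hypothetical counts: 90 true positives, 10 false negatives,
# 15 false positives, 85 true negatives.
print(diagnostic_accuracy(tp=90, fp=15, fn=10, tn=85))
```

Note that the predictive values, unlike sensitivity and specificity, depend on the prevalence of the condition in the tested sample.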
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> Background: Newer glucose meters are easier to use, but direct comparisons with older instruments are lacking. We wished to compare analytical performances of four new and four previous generation meters. Methods: On average, 248 glucose measurements were performed with two of each brand of meter on capillary blood samples from diabetic patients attending our outpatient clinic. Two to three different lots of strips were used. All measurements were performed by one experienced technician, using blood from the same sample for the meters and the comparison method (Beckman Analyzer 2). Results were evaluated by analysis of clinical relevance using the percentage of values within a maximum deviation of 5% from the reference value, by the method of residuals, by error grid analysis, and by the CVs for measurements in series. Results: Altogether, 1987 blood glucose values were obtained with meters compared with the reference values. By error grid analysis, the newer devices gave more accurate results without significant differences within the group (zone A, 98–98.5%). Except for the One Touch II (zone A, 98.5%), the other older devices were less exact (zone A, 87–92.5%), which was also true for all other evaluation procedures. Conclusions: New generation blood glucose meters are not only smaller and more aesthetically appealing but are more accurate compared with previous generation devices except the One Touch II. The performance of the newer meters improved but did not meet the goals of the latest American Diabetes Association recommendations in the hands of an experienced operator. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> Diagnostic tests are often much less rigorously evaluated than new drugs. It is time to ensure that the harms and benefits of new tests are fully understood ::: ::: No international consensus exists on the methods for assessing diagnostic tests. Previous recommendations stress that studies of diagnostic tests should match the type of diagnostic question.1 2 Once the specificity and sensitivity of a test have been established, the final question is whether tested patients fare better than similar untested patients. This usually requires a randomised trial. Few tests are currently evaluated in this way. In this paper, we propose an architecture for research into diagnostic tests that parallels the established phases in drug research. ::: ::: We have divided studies of diagnostic tests into four phases (box). We use research on brain natriuretic peptide for diagnosing heart failure as an illustrative example.2 However, the architecture is applicable to a wide range of tests including laboratory techniques, diagnostic imaging, pathology, evaluation of disability, electrodiagnostic tests, and endoscopy. ::: ::: In drug research, phase I studies deal with pharmacokinetics, pharmacodynamics, and safe doses.3 Phase I diagnostic studies are done to determine the range of results obtained with a newly developed test in healthy people. 
For example, after development of a test to measure brain natriuretic peptide in human plasma, phase I studies were done to establish the normal range of values in healthy participants.4 5 ::: ::: ![][1] ::: ::: The harms and benefits of diagnostic tests needs evaluating—just as drugs do ::: ::: Credit: GUSTO/SPL ::: ::: Diagnostic phase I studies must be large enough to examine the potential influence of characteristics such as sex, age, time of day, physical activity, and exposure to drugs. The studies are relatively quick, cheap, and easy to conduct, but they may occasionally raise ethical problems—for example, finding abnormal results in an apparently healthy person.6 … ::: ::: [1]: /embed/graphic-1.gif <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> BACKGROUND Multiple laboratory tests are used to diagnose and manage patients with diabetes mellitus. The quality of the scientific evidence supporting the use of these tests varies substantially. APPROACH An expert committee compiled evidence-based recommendations for the use of laboratory testing for patients with diabetes. A new system was developed to grade the overall quality of the evidence and the strength of the recommendations. Draft guidelines were posted on the Internet and presented at the 2007 Arnold O. Beckman Conference. The document was modified in response to oral and written comments, and a revised draft was posted in 2010 and again modified in response to written comments. The National Academy of Clinical Biochemistry and the Evidence-Based Laboratory Medicine Committee of the American Association for Clinical Chemistry jointly reviewed the guidelines, which were accepted after revisions by the Professional Practice Committee and subsequently approved by the Executive Committee of the American Diabetes Association. CONTENT In addition to long-standing criteria based on measurement of plasma glucose, diabetes can be diagnosed by demonstrating increased blood hemoglobin A 1c (HbA 1c ) concentrations. Monitoring of glycemic control is performed by self-monitoring of plasma or blood glucose with meters and by laboratory analysis of HbA 1c . The potential roles of noninvasive glucose monitoring, genetic testing, and measurement of autoantibodies, urine albumin, insulin, proinsulin, C-peptide, and other analytes are addressed. SUMMARY The guidelines provide specific recommendations that are based on published data or derived from expert consensus. Several analytes have minimal clinical value at present, and their measurement is not recommended. <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> The history of the theory of reference values can be written as an unfinished symphony. The first movement, allegro con fuoco, played from 1960 to 1980: a mix of themes devoted to the study of biological variability (intra-, inter-individual, short- and long-term), preanalytical conditions, standardization of analytical methods, quality control, statistical tools for deriving reference limits, all of them complex variations developed on a central melody: the new concept of reference values that would replace the notion of normality whose definition was unclear. 
Additional contributions (multivariate reference values, use of reference limits from broad sets of patient data, drug interferences) conclude the movement on the variability of laboratory tests. The second movement, adagio, from 1980 to 2000, slowly develops and implements initial works. International and national recommendations were published by the IFCC-LM (International Federation of Clinical Chemistry and Laboratory Medicine) and scientific societies [French (SFBC), Spanish (SEQC), Scandinavian societies…]. Reference values are now topics of many textbooks and of several congresses, workshops, and round tables that are organized all over the world. Nowadays, reference values are part of current practice in all clinical laboratories, but not without difficulties, particularly for some laboratories to produce their own reference values and the unsuitability of the concept with respect to new technologies such as HPLC, GCMS, and PCR assays. Clinicians through consensus groups and practice guidelines have introduced their own tools, the decision limits, likelihood ratios and Reference Change Value (RCV), creating confusion among laboratorians and clinicians in substituting reference values and decision limits in laboratory reports. The rapid development of personalized medicine will eventually call for the use of individual reference values. The beginning of the second millennium is played allegro ma non-troppo from 2000 to 2012: the theory of reference values is back into fashion. The need to revise the concept is emerging. The manufacturers make a friendly pressure to facilitate the integration of Reference Intervals (RIs) in their technical documentation. Laboratorians are anxiously awaiting the solutions for what to do. The IFCC-LM creates Reference Intervals and Decision Limits Committee (C-RIDL) in 2005. Simultaneously, a joint working group IFCC-CLSI is created on the same topic. In 2008 the initial recommendations of IFCC-LM are revised and new guidelines are published by the Clinical and Laboratory Standards Institute (CLSI C28-A3). Fundamentals of the theory of reference values are not changed, but new avenues are explored: RIs transference, multicenter reference intervals, and a robust method for deriving RIs from small number of subjects. Concomitantly, other statistical methods are published such as bootstraps calculation and partitioning procedures. An alternative to recruiting healthy subjects proposes the use of biobanks conditional to the availability of controlled preanalytical conditions and of bioclinical data. The scope is also widening to include veterinary biology! During the early 2000s, several groups proposed the concept of 'Universal RIs' or 'Global RIs'. Still controversial, their applications await further investigations. The fourth movement, finale: beyond the methodological issues (statistical and analytical essentially), important questions remain unanswered. Do RIs intervene appropriately in medical decision-making? Are RIs really useful to the clinicians? Are evidence-based decision limits more appropriate? It should be appreciated that many laboratory tests represent a continuum that weakens the relevance of RIs. In addition, the boundaries between healthy and pathological states are shady areas influenced by many biological factors. In such a case the use of a single threshold is questionable. Wherever it will apply, individual reference values and reference change values have their place. A variation on an old theme! 
It is strange that in the period of personalized medicine (that is more stratified medicine), the concept of reference values which is based on stratification of homogeneous subgroups of healthy people could not be discussed and developed in conjunction with the stratification of sick patients. That is our message for the celebration of the 50th anniversary of Clinical Chemistry and Laboratory Medicine. Prospects are broad, enthusiasm is not lacking: much remains to be done, good luck for the new generations! <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> “Evidence-based medicine de-emphasises intuition, unsystematic clinical experience and pathophysiological rationale as sufficient grounds for clinical decision making and stresses the examination of evidence from clinical research.”1 <s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> The use of echocardiographic measurements to detect disease and predict outcomes can be confounded by a number of nondisease factors, including the effect of body size, that contribute to the variance of these measurements. The process of normal growth is associated with a nearly 200-fold increase in normal left ventricular end-diastolic volume (EDV) from premature infants up to large adolescents, making it imperative to account for changes in body size in pediatrics. Although this issue is often ignored in adult echocardiography, the sensitivity and specificity of parameters of left ventricular size are significantly improved when adjustment for body size in adults is performed. The article by Mawad et al. in this issue of JASE addresses an important aspect of this process, although it is likely a topic that is unfamiliar to most echocardiographers, even those in pediatric cardiology who rely heavily on Z scores. The concept of Z scores itself is often unfamiliar to adult echocardiographers. What is a Z score? Normative anatomic data are often presented as nomograms, a graphic representation of the mean values and one or more percentile curves, perhaps most familiar to clinicians as the common height and weight curves used to track growth in children. Instead of percentiles, the distance from the mean value can be expressed as the standard deviation (SD), which has similar information content but is more easily interpreted for values outside the normal range. The number of SDs from the mean is termed the Z score, also known as the normal deviate or a standard score. A measurement that is 2 SDs above the mean (the 97.7th percentile) has a Z score of 2, whereas a measurement that is 2 SDs below the mean (the 2.3rd percentile) has a Z score of −2. The use of Z scores dramatically simplifies clinical interpretation, because the clinician need not remember the age-specific or body surface area (BSA)–specific normal range for a variety of echocardiographic measurements. Instead, all variables have a mean of 0 and a normal range of −2 to +2, regardless of age or BSA. In addition, interpretation of change over time is simplified, because cardiovascular structures remain at the same Z score over time in normal subjects, despite changes in body size. Longitudinal change in the size of a cardiovascular structure that is more or less than expected for growth is easily recognized as change in Z score over time.
Statistical analysis of research results is similarly facilitated and indeed enhanced by using Z scores. The analytic power added through the use of Z scores is often underappreciated and can be illustrated through an example. A common study design is to attempt to control for the effects of age and body size by selecting controls matched for age and body size and then performing a paired comparison of change over time in subjects versus controls. However, the sensitivity of this type of study design to detect therapeutic effects is diminished when there is significant variation in body size in the sample. Consider, for example, a hypothetical clinical trial of a therapy for dilated <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> A finding of high BMD on routine DXA scanning is not infrequent and most commonly reflects degenerative disease. However, BMD increases may also arise secondary to a range of underlying disorders affecting the skeleton. Although low BMD increases fracture risk, the converse may not hold for high BMD, since elevated BMD may occur in conditions where fracture risk is increased, unaffected or reduced. Here we outline a classification for the causes of raised BMD, based on identification of focal or generalized BMD changes, and discuss an approach to guide appropriate investigation by clinicians after careful interpretation of DXA scan findings within the context of the clinical history. We will also review the mild skeletal dysplasia associated with the currently unexplained high bone mass phenotype and discuss recent advances in osteoporosis therapies arising from improved understanding of rare inherited high BMD disorders. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> BACKGROUND: Biological covariates such as age and sex can markedly influence biochemical marker reference values, but no comprehensive study has examined such changes across pediatric, adult, and geriatric ages. The Canadian Health Measures Survey (CHMS) collected comprehensive nationwide health information and blood samples from children and adults in the household population and, in collaboration with the Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER), examined biological changes in biochemical markers from pediatric to geriatric age, establishing a comprehensive reference interval database for routine disease biomarkers. METHODS: The CHMS collected health information, physical measurements, and biosamples (blood and urine) from approximately 12 000 Canadians aged 3–79 years and measured 24 biochemical markers with the Ortho Vitros 5600 FS analyzer or a manual microplate. By use of CLSI C28-A3 guidelines, we determined age- and sex-specific reference intervals, including corresponding 90% CIs, on the basis of specific exclusion criteria. RESULTS: Biochemical marker reference values exhibited dynamic changes from pediatric to geriatric age. Most biochemical markers required some combination of age and/or sex partitioning. Two or more age partitions were required for all analytes except bicarbonate, which remained constant throughout life. Additional sex partitioning was required for most biomarkers, except bicarbonate, total cholesterol, total protein, urine iodine, and potassium. 
CONCLUSIONS: Understanding the fluctuations in biochemical markers over a wide age range provides important insight into biological processes and facilitates clinical application of biochemical markers to monitor manifestation of various disease states. The CHMS-CALIPER collaboration addresses this important evidence gap and allows the establishment of robust pediatric and adult reference intervals. <s> BIB008 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> Purpose: The evaluation and management of male hypogonadism should be based on symptoms and on serum testosterone levels. Diagnostically this relies on accurate testing and reference values. Our objective was to define the distribution of reference values and assays for free and total testosterone by clinical laboratories in the United States.Materials and Methods: Upper and lower reference values, assay methodology and source of published reference ranges were obtained from laboratories across the country. A standardized survey was reviewed with laboratory staff via telephone. Descriptive statistics were used to tabulate results.Results: We surveyed a total of 120 laboratories in 47 states. Total testosterone was measured in house at 73% of laboratories. At the remaining laboratories studies were sent to larger centralized reference facilities. The mean ± SD lower reference value of total testosterone was 231 ± 46 ng/dl (range 160 to 300) and the mean upper limit was 850 ± 141 ng/dl (range 726 to 1,130).... <s> BIB009 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> OBJECTIVE ::: To observe the changes of complete blood count (CBC) parameters during pregnancy and establish appropriate reference intervals for healthy pregnant women. ::: ::: ::: METHODS ::: Healthy pregnant women took the blood tests at all trimesters. All blood samples were processed on Sysmex XE-2100. The following CBC parameters were analyzed: red blood cell count (RBC), hemoglobin (Hb), hematocrit (Hct), mean corpuscular volume (MCV), mean corpuscular hemoglobin (MCH), mean corpuscular hemoglobin concentration (MCHC), red blood cell distribution width (RDW), platelet count (PLT), mean platelet volume (MPV), platelet distribution width (PDW), white blood cell count (WBC), and leukocyte differential count. Reference intervals were established using the 2.5th and 97.5th percentile of the distribution. ::: ::: ::: RESULTS ::: Complete blood count parameters showed dynamic changes during trimesters. RBC, Hb, Hct declined at trimester 1, reaching their lowest point at trimester 2, and began to rise again at trimester 3. WBC, neutrophil count (Neut), monocyte count (MONO), RDW, and PDW went up from trimester 1 to trimester 3. On the contrary, MCHC, lymphocyte count (LYMPH), PLT, and MPV gradually descended during pregnancy. There were statistical significances in all CBC parameters between pregnant women and normal women, regardless of the trimesters (P<.001). The median obtained were (normal vs pregnancy) as follows: RBC 4.50 vs 3.94×1012 /L, Hb 137 vs 120 g/L, WBC 5.71 vs 9.06×109 /L, LYMPH% 32.2 vs 18.0, Neut% 58.7 vs 75.0, and PLT 251 vs 202×109 /L. ::: ::: ::: CONCLUSION ::: The changes of CBC parameters during pregnancy are described, and reference intervals for Beijing pregnant women are demonstrated in this study. 
<s> BIB010 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> AbstractReference Intervals (RIs) and clinical decision limits (CDLs) are a vital part of the information supplied by laboratories to support the interpretation of numerical clinical pathology resu... <s> BIB011 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> Abstract Background The Italian Society of Clinical Biochemistry (SIBioC) and the Italian Section of the European Ligand Assay Society (ELAS) have recently promoted a multicenter study (Italian hs-cTnI Study) with the aim to accurately evaluate analytical performances and reference values of the most popular cTnI methods commercially available in Italy. The aim of this article is to report the results of the Italian hs-cTnI Study concerning the evaluation of the 99th percentile URL and reference change (RCV) values around the 99th URL of the Access cTnI method. Materials and methods Heparinized plasma samples were collected from 1306 healthy adult volunteers by 8 Italian clinical centers. Every center collected from 50 to 150 plasma samples from healthy adult subjects. All volunteers denied the presence of chronic or acute diseases and had normal values of routine laboratory tests (including creatinine, electrolytes, glucose and blood counts). An older cohort of 457 adult subjects (mean age 63.0 years; SD 8.1 years, minimum 47 years, maximum 86 years) underwent also ECG and cardiac imaging analysis in order to exclude the presence of asymptomatic cardiac disease. Results and conclusions The results of the present study confirm that the Access hsTnI method using the DxI platform satisfies the two criteria required by international guidelines for high-sensitivity methods for cTn assay. Furthermore, the results of this study confirm that the calculation of the 99th percentile URL values are greatly affected not only by age and sex of the reference population, but also by the statistical approach used for calculation of cTnI distribution parameters. <s> BIB012 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> The aim was to elude differences in published paediatric reference intervals (RIs) and the implementations hereof in terms of classification of samples. Predicaments associated with transferring RIs published elsewhere are addressed. A local paediatric (aged 0 days to < 18 years) population of platelet count, haemoglobin level and white blood cell count, based on first draw samples from general practitioners was established. PubMed was used to identify studies with transferable RIs. The classification of local samples by the individual RIs was evaluated. Transference was done in accordance with the Clinical and Laboratory Standards Institute EP28-A3C guideline. Validation of transference was done using a quality demand based on biological variance. Twelve studies with a combined 28 RIs were transferred onto the local population, which was derived from 20,597 children. Studies varied considerably in methodology and results. In terms of classification, up to 63% of the samples would change classification from normal to diseased, depending on which RI was applied. When validating the transferred RIs, one RI was implementable in the local population. 
Conclusion: Published paediatric RIs are heterogeneous, making assessment of transferability problematic and resulting in marked differences in classification of paediatric samples, thereby potentially affecting diagnosis and treatment of children. ::: ::: ::: What is Known: ::: ::: • Reference intervals (RIs) are fundamental for the interpretation of paediatric samples and thus correct diagnosis and treatment of the individual child. ::: ::: • Guidelines for the establishment of adult RIs exist, but there are no specific recommendations for establishing paediatric RIs, which is problematic, and laboratories often implement RIs published elsewhere as a consequence. ::: ::: ::: What is New: ::: ::: • Paediatric RIs published in peer-reviewed scientific journals differ considerably in methodology applied for the establishment of the RI. ::: ::: • The RIs show marked divergence in the classification of local samples from healthy children. <s> BIB013 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> Abstract Study Design Descriptive normative. Introduction Intrinsic hand strength can be impacted by hand arthritis, peripheral nerve injuries, and spinal cord injuries. Grip dynamometry does not isolate intrinsic strength, and manual muscle testing is not sensitive to change in grades 4 and 5. The Rotterdam Intrinsic Hand Myometer is a reliable and valid test of intrinsic hand strength; however, no adult normative data are available. Purpose of the Study To describe age- and gender-stratified intrinsic hand strength norms in subjects aged 21 years and above and to determine if factors known to predict grip dynamometry also predict measures of intrinsic hand strength. Methods Three trials of 5 measures of maximal isometric intrinsic strength were performed bilaterally by 607 “healthy-handed” adult males and females. Average strength values were stratified by age and gender. Data were analyzed to determine the influence of demographic and anthropometric variables on intrinsic strength. Results Intrinsic strength generally followed age and gender trends similar to grip dynamometry. Age, gender, body mass index, and the interaction between gender and body mass index were predictors of intrinsic strength, whereas in most cases, the hand being tested did not predict the intrinsic strength. Discussion With the addition of these findings, age- and gender-stratified hand intrinsic strength norms now span from age 4 through late adulthood. Many factors known to predict grip dynamometry also predict intrinsic myometry. Additional research is needed to evaluate the impact of vocational and avocational demands on intrinsic strength. Conclusions These norms can be referenced to evaluate and plan hand therapy and surgical interventions for intrinsic weakness. <s> BIB014 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> Laboratory results interpretation for diagnostic accuracy and clinical decision-making in this period of evidence-based medicine requires cut-off values or reference ranges that are reflective of the geographical area where the individual resides. Several studies have shown significant differences between and within populations, emphasizing the need for population-specific reference ranges. 
This cross-sectional experimental study sought to establish the haematological reference values in apparently healthy individuals in three regions in Ghana. Study sites included Nkenkaasu, Winneba, and Nadowli in the Ashanti, Central, and Upper West regions of Ghana, respectively. A total of 488 healthy participants were recruited using the Clinical and Laboratory Standards Institute (United States National Consensus Committee on Laboratory Standards, NCCLS) Guidance Document C28A2. Medians for haematological parameters were calculated and reference values determined at and percentiles and compared with Caucasian values adopted by our laboratory as reference ranges and values from other African and Western countries. RBC count, haemoglobin, and haematocrit (HCT) were significantly higher in males compared to females. There were significant intraregional and interregional as well as international variations of haematological reference ranges in the populations studied. We conclude that, for each geographical area, there is a need to establish geography-specific reference ranges if accurate diagnosis and concise clinical decisions are to be made. <s> BIB015 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Anatomy of a Diagnostic Test <s> To determine Z-score equations and reference ranges for Doppler flow velocity indices of cardiac outflow tracts in normal fetuses. A prospective cross-sectional echocardiographic study was performed in 506 normal singleton fetuses from 18 to 40 weeks. Twelve pulsed-wave Doppler (PWD) measurements were derived from fetal echocardiography. The regression analysis of the mean and the standard deviation (SD) for each parameter were performed against estimated fetal weight (EFW) and gestational age (GA), in order to construct Z-score models. The correlation between these variables and fetal heart rate were also investigated. Strong positive correlations were found between the twelve PWD indices and the independent variables. A linear-quadratic regression model was the best description of the mean and SD of most parameters, with the exception of the velocity time interval (VTI) of ascending aorta against EFW, which was best fitted by a fractional polynomial. Z-score equations and reference values for PWD indices of fetal cardiac outflow tracts were proposed against GA and EFW, which may be useful for quantitative assessment of potential hemodynamic alternations, particularly in cases of intrauterine growth retardation and structural cardiac defects. <s> BIB016
A diagnostic test could be used in clinical settings for confirmation/exclusion, triage, monitoring, prognosis, or screening (Table 2) BIB005. Table 2 presents the role of a diagnostic test, its aim, and a real-life example. Different statistical methods are used to support the results of a diagnostic test according to the question, phase, and study design. The statistical analysis depends on the test outcome type. Table 3 presents the most common types of diagnostic test outcome and provides some examples. The result of an excellent diagnostic test must be accurate (the measured value is as close as possible to the true value) and precise (the measurement is repeatable and reproducible). An accurate and precise measurement is the primary characteristic of a valid diagnostic test. The reference range or reference interval, namely the range of normal values determined in healthy persons, is also essential to classify a measurement as a positive or negative result and generally refers to continuous measurements. Under the assumption of a normal distribution, the reference value of a diagnostic measurement has a lower reference limit/lower limit of normal (LRL) and an upper reference limit/upper limit of normal (URL). Frequently, the reference interval covers the central 95% of a reference population, but exceptions to this rule are observed (e.g., cardiac troponins (cTn) and glucose levels BIB001, with <5% deviation from reference intervals) BIB003 BIB012. Reference ranges can differ among laboratories BIB009 BIB013, genders and/or ages BIB014, populations [79] (including variations within the same population BIB008 BIB015), and physiological conditions (e.g., pregnancy BIB010, time of sample collection, or posture). Within-subject biological variation is smaller than between-subject variation, so reference change values may better reflect the changes in measurements for an individual than reference ranges do BIB004. [Table note: adapted from BIB002.] Furthermore, a call for establishing clinical decision limits (CDLs) with the involvement of laboratory professionals has also been emphasized BIB011. The Z-score (standardized value, standardized score, or Z-value), calculated as Z-score = (measurement − μ)/σ, is a dimensionless metric that expresses how many standard deviations (σ) a measurement lies from the population mean (μ) BIB006. A Z-score of 3 corresponds to 3 standard deviations from the mean, which means that more than 99% of the population is covered within that range. The Z-score is properly used under the assumption of a normal distribution and when the parameters of the population are known. It has the advantage of allowing comparison between different methods of measurement. Z-scores are used on measurements in the pediatric population [89] or in fetuses BIB016, but not exclusively (e.g., bone density tests BIB007).
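The two quantities described above, the central 95% reference interval (LRL and URL) and the Z-score, are straightforward to compute. The Python sketch below is illustrative only and is not taken from the cited studies: the measurement values, the analyte, and all variable names are hypothetical, the reference limits are estimated with the simple nonparametric percentile approach, and the Z-score assumes a normal distribution with parameters estimated from the reference sample.

```python
import numpy as np

# Hypothetical measurements from healthy reference subjects (illustrative values only).
reference_values = np.array([4.1, 4.3, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2,
                             5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6.0, 6.1])

# Nonparametric reference interval: the central 95% of the reference distribution,
# i.e., the 2.5th percentile (LRL) and the 97.5th percentile (URL).
lrl, url = np.percentile(reference_values, [2.5, 97.5])
print(f"Reference interval: {lrl:.2f} to {url:.2f}")

# Z-score under the normality assumption: how many standard deviations (sigma)
# a measurement lies from the population mean (mu); here mu and sigma are
# estimated from the reference sample for illustration.
mu = reference_values.mean()
sigma = reference_values.std(ddof=1)

def z_score(measurement, mu, sigma):
    """Return the standardized score (number of SDs from the mean)."""
    return (measurement - mu) / sigma

new_measurement = 6.4
z = z_score(new_measurement, mu, sigma)
print(f"Z-score of {new_measurement}: {z:.2f}")

# Classification against the reference interval (negative = within normal limits).
result = "negative" if lrl <= new_measurement <= url else "positive"
print(f"Result relative to the reference interval: {result}")
```

In practice, nonparametric estimates of the 2.5th and 97.5th percentiles are stable only with an adequately sized reference sample (CLSI EP28-A3c suggests at least 120 reference individuals per partition), and parametric Z-scores are meaningful only when the normality assumption holds for the analyte in question.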
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> A 36-item short-form (SF-36) was constructed to survey health status in the Medical Outcomes Study. The SF-36 was designed for use in clinical practice and research, health policy evaluations, and general population surveys. The SF-36 includes one multi-item scale that assesses eight health concepts: 1) limitations in physical activities because of health problems; 2) limitations in social activities because of physical or emotional problems; 3) limitations in usual role activities because of physical health problems; 4) bodily pain; 5) general mental health (psychological distress and well-being); 6) limitations in usual role activities because of emotional problems; 7) vitality (energy and fatigue); and 8) general health perceptions. The survey was constructed for self-administration by persons 14 years of age and older, and for administration by a trained interviewer in person or by telephone. The history of the development of the SF-36, the origin of specific items, and the logic underlying their selection are summarized. The content and features of the SF-36 are compared with the 20-item Medical Outcomes Study short-form. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Abstract This research develops and evaluates a simple method of grading the severity of chronic pain for use in general population surveys and studies of primary care pain patients. Measures of pain intensity, disability, persistence and recency of onset were tested for their ability to grade chronic pain severity in a longitudinal study of primary care back pain (n = 1213), headache (n = 779) and temporomandibular disorder pain (n = 397) patients. A Guttman scale analysis showed that pain intensity and disability measures formed a reliable hierarchical scale. Pain intensity measures appeared to scale the lower range of global severity while disability measures appeared to scale the upper range of global severity. Recency of onset and days in pain in the prior 6 months did not scale with pain intensity or disability. Using simple scoring rules, pain severity was graded into 4 hierarchical classes: Grade I, low disability-low intensity; Grade II, low disability-high intensity; Grade III, high disability-moderately limiting; and Grade IV, high disability-severely limiting. For each pain site, Chronic Pain Grade measured at baseline showed a highly statistically significant and monotonically increasing relationship with unemployment rate, pain-related functional limitations, depression, fair to poor self-rated health, frequent use of opioid analgesics, and frequent pain-related doctor visits both at baseline and at 1-year follow-up. Days in Pain was related to these variables, but not as strongly as Chronic Pain Grade. Recent onset cases (first onset within the prior 3 months) did not show differences in psychological and behavioral dysfunction when compared to persons with less recent onset. Using longitudinal data from a population-based study (n = 803), Chronic Pain Grade at baseline predicted the presence of pain in the prior 2 weeks, Chronic Pain Grade and pain-related functional limitations at 3-year follow-up. Grading chronic pain as a function of pain intensity and pain-related disability may be useful when a brief ordinal measure of global pain severity is required. 
Pain persistence, measured by days in pain in a fixed time period, provides useful additional information. <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> AIMS ::: The purpose of this study was to develop and validate an easily used disease-specific quality of life (QOL) measure for patients with chronic lower limb ischemia and to design an evaluative instrument, responsive to within-subject change, that adds to clinical measures of outcome when comparing treatment options in the management of lower limb ischemia. ::: ::: ::: METHODS ::: The first phase consisted of item generation, item reduction, formulating, and pretesting in patients with ischemia. The proportion of patients who selected an item as troublesome and the mean importance they attached to it were combined to give a clinical impact factor. Items with the highest clinical impact factor were used to formulate a new 25-item questionnaire that was then pretested in 20 patients with lower limb ischemia. In the second phase, reliability, validity, and responsiveness of the new questionnaire were assessed in 39 patients with lower limb ischemia who were tested at 0 and 4 weeks. The King's College Hospital's Vascular Quality of Life Questionnaire and the Short-Form 36 were administered at each visit, and treadmill walking distance and ankle/brachial pressure indices were recorded. The new questionnaire's reliability, internal consistency, responsiveness, and validity were determined. ::: ::: ::: RESULTS ::: Areas of QOL impairment were consistent through the ranges of disease severity and age, with no apparent differences between the men and women. Therefore, a single questionnaire is applicable to all patients with chronic lower limb ischemia. In stable patients test-retest scores demonstrated a reliability of r more than 0.90. Each item had internal consistency (item-domain Cronbach alpha =.7-.9). The questionnaire was responsive to change, with correlation between change in the questionnaire's total score and both global and clinical indicators of change (P <.001). The questionnaire showed face and construct validity. ::: ::: ::: CONCLUSIONS ::: This disease-specific questionnaire is reliable, responsive, valid, and ready for use as an outcome measure in clinical trials. It is sensitive to the concerns of patients with lower limb ischemia, offering a simple method to measure the effect of interventions on their QOL. <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Objective: To determine normal values of plasma B type natriuretic peptide from infancy to adolescence using a commercially available rapid assay. ::: ::: Setting: Tertiary referral centre. ::: ::: Design: The study was cross sectional. Plasma BNP concentration was measured in 195 healthy infants, children, and adolescents from birth to 17.6 years using the triage BNP assay (a fluorescence immunoassay). ::: ::: Results: During the first week of life, the mean (SD) plasma concentration of BNP in newborn infants decreased significantly from 231.6 (197.5) to 48.4 (49.1) pg/ml (p = 0.001). In all subjects older than two weeks plasma BNP concentration was less than 32.7 pg/ml. There was no significant difference in mean plasma BNP measured in boys and girls younger than 10 years (8.3 (6.9) v 8.5 (7.5) pg/ml). 
In contrast, plasma concentration of BNP in girls aged 10 years or older was significantly higher than in boys of the same age group (12.1 (9.6) v 5.1 (3.5) pg/ml, p < 0.001). Plasma BNP concentrations were higher in pubertal than in prepubertal girls (14.4 (9.7) v 7.1 (6.6) pg/ml, p < 0.001) and were correlated with the Tanner stage (r = 0.41, p = 0.001). ::: ::: Conclusions: Plasma BNP concentrations in newborn infants are relatively high, vary greatly, and decrease rapidly during the first week of life. In children older than 2 weeks, the mean plasma concentration of BNP is lower than in adults. There is a sex related difference in the second decade of life, with higher BNP concentrations in girls. BNP concentrations in girls are related to pubertal stage. <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Quality of life may be considerably reduced in patients who are suffering from chronic lower limb venous insufficiency, although existing generic quality of life instruments (NHP, SF-36 or SIP) cannot completely identify their specific complaints. The Chronic Venous Insufficiency Questionnaire (CIVIQ) has been developed by iterative process. First, a pilot group of 20 patients was used to identify a number of important features of quality of life affected by venous insufficiency, other than physical symptoms of discomfort. A second study involving 2,001 subjects was used to reduce the number of items. Subjects were asked to score both the severity of their problems and the importance they attributed to each problem on a 5-point Likert scale. The importance items found in patients with venous insufficiency were subjected to factorial analyses (PCA, PAF). The final version is a 20-item self-administered questionnaire which explores four dimensions: psychological, physical and social functioning and pain. Internal consistency of the questionnaire was validated for each dimension (Cronbach's alpha > 0.820 for three out of four factors). Reproducibility was confirmed in a 60 patient test-retest study. Pearson's correlation coefficients for both the four dimension subscales and for the global score at 2-week intervals were greater than 0.940. Finally, the questionnaire was tested in a randomized clinical trial of 934 patients in order to assess responsiveness and the convergent validity of the instrument, together with the patient's own quality of life. This study demonstrated that convergence was valid: Pearson's correlation coefficients between clinical score differences and quality of life score differences were small (from 0.199–0.564) but were statistically different from 0 (p 0.80). Reliability, face, content, construct validity and responsiveness were also determined for this specific quality of life questionnaire relating to venous insufficiency. Results suggest that this questionnaire may be used with confidence to assess quality of life in clinical trials on chronic venous insufficiency. <s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Renal Doppler resistance index measurement may represent a clinically useful noninvasive method for early detection of occult hemorrhagic shock. <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. 
<s> OBJECTIVES ::: To improve glycemic control and prevent late complications, the patient and diabetes team need to adjust insulin therapy. The aim of this study is to evaluate the efficacy of thrice-daily versus twice-daily insulin regimens on HbA1c for type 1 diabetes mellitus by a randomized controlled trial in Hamedan, west of Iran. ::: ::: ::: METHODS ::: The study included 125 patients under 19 years of age with type 1 diabetes mellitus over a 3-month period. All patients with glycohemoglobin (HbA1c) ≥8% were followed prospectively and randomized into two trial and control groups. The control group received conventional two insulin injections per day: a mixture of short-acting (regular) + intermediated acting (NPH) insulins pre-breakfast (twice daily), and the trial group was treated by an extra dose of regular insulin before lunch (three times daily). Main outcome measure was HbA1c at baseline and at the end of 3 months. The mean blood glucose level and number of hypoglycemia were recorded. All patients underwent monthly intervals follow up for assessing their home blood glucose records and insulin adjustment. ::: ::: ::: RESULTS ::: Overall, 100 patients completed the study protocol. 52% were females, mean ±SD of age of 12.91 ± 3.9 years. There were no significant differences in baseline characteristics including age, gender, pubertal stage, adherence to diet, duration of disease and total daily insulin dose (p>0.05). There was a significant decrease individually in both groups in HbA1c level (p<0.05), but there was no significant difference in HbA1c reduction in patients on twice-daily insulin injections and those on thrice-daily insulin injection groups (1.12 ± 2.12 and 0.98±2.1% respectively, p>0.05). ::: ::: ::: CONCLUSION ::: Compared with twice daily insulin, a therapeutic regimen involving the addition of one dose regular insulin before lunch caused no significant change in the overall glycemic control of patients with type 1 diabetes mellitus. Our results emphasize that further efforts for near normoglycemia should be focused upon education of patients in terms of frequent outpatient visits, more blood glucose monitoring and attention to insulin adjustments. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> A number of case–control studies were conducted to investigate the association of IL6 gene polymorphisms with colorectal cancer (CRC). However, the results were not always consistent. We performed a systematic review and meta-analysis to examine the association between the IL6 gene polymorphisms and CRC. Data were collected from the following electronic databases: PubMed, EMBASE, Web of Science, BIOSIS Previews, HuGENet, and Chinese Biomedical Literature Database, with the last report up to July 2011. A total of 17 studies involving 4 SNPs were included (16 for rs1800795, 2 for rs1800796, 2 for rs1800797, and 1 for rs13306435). Overall, no significant association of these polymorphisms with CRC was found in heterozygote comparisons as well as homozygote comparison, dominant genetic model and recessive model. In subgroup analysis, among studies using population-based controls, fulfilling Hardy–Weinberg equilibrium, or using Taqman genotyping method, we did not find any significant association. 
However, the rs1800795 C allele was significantly associated with reduced risk for CRC among persons who regularly or currently took NSAIDs (four studies, OR = 0.750; 95 % CI, 0.64–0.88; P = 0.474 for heterogeneity test), and with increased risk for CRC among persons who drank (one study, OR = 1.97; 95 % CI, 1.32–2.94). Individuals with the rs1800795 C allele in the IL6 gene have a significantly lower risk of CRC, but in the setting of NSAIDs use. Further studies are merited to assess the association between the IL6 gene polymorphisms and CRC risk among persons who take NSAIDs, drink or smoke, etc. <s> BIB008 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Abstract Sporeforming bacteria are ubiquitous in the environment and exhibit a wide range of diversity leading to their natural prevalence in foodstuff. The state of the art of sporeformer prevalence in ingredients and food was investigated using a multiparametric PCR-based tool that enables simultaneous detection and identification of various genera and species mostly encountered in food, i.e. Alicyclobacillus , Anoxybacillus flavithermus , Bacillus , B. cereus group, B. licheniformis , B. pumilus , B. sporothermodurans , B. subtilis , Brevibacillus laterosporus , Clostridium , Geobacillus stearothermophilus , Moorella and Paenibacillus species. In addition, 16S rDNA sequencing was used to extend identification to other possibly present contaminants. A total of 90 food products, with or without visible trace of spoilage were analysed, i.e. 30 egg-based products, 30 milk and dairy products and 30 canned food and ingredients. Results indicated that most samples contained one or several of the targeted genera and species. For all three tested food categories, 30 to 40% of products were contaminated with both Bacillus and Clostridium . The percentage of contaminations associated with Clostridium or Bacillus represented 100% in raw materials, 72% in dehydrated ingredients and 80% in processed foods. In the last two product types, additional thermophilic contaminants were identified ( A. flavithermus , Geobacillus spp., Thermoanaerobacterium spp. and Moorella spp.). These results suggest that selection, and therefore the observed (re)-emergence of unexpected sporeforming contaminants in food might be favoured by the use of given food ingredients and food processing technologies. <s> BIB009 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Background: Interleukins, interferons and oxidative DNA products are important biomarkers assessing the inflammations and tissue damages caused by toxic materials in the body. We tried to evaluate distributions, reference values and age related changes of blood levels of inflammatory cytokines, C-reactive protein (CRP), IgE and urine levels of 8-hydroxy-2 ′ -deoxyguanosine (8-OHdG) among workers in a cohort study evaluating the health influences of toner particles. Methods: A total of 1366 male workers under age 50 years (age 19 – 49 years; 718 exposed and 648 not exposed to toner particles) in a cross sectional study of 1614 (categorized as 809 exposed and 805 not exposed, age 19 – 59 years) workers in a photocopier company has been followed prospectively as the cohort. Blood levels of interleukin (IL)-4, IL-6, IL-8, interferon- γ (IFN- γ ), CRP, IgE and urine 8-OHdG were measured annually for 5 years. 
Results: Reference values of the biomarkers are; CRP: 0.01 – 0.63 × 10 -2 g/L, IgE: 6 – 1480 IU/mL, IL-4: 2.6 – 76.1 pg/mL, IL-6: 0.4 – 4.9 pg/mL and 8-OHdG: 1.5 – 8.2 ng/mgCr. We could not evaluate reference values for IL-8 and IFN- γ because most of the values were below the sensitivity limits (2.0 pg/mL and 0.1 IU/mL, respectively). There were no differences of the biomarker levels between the toner exposed and the control workers. We observed a statistically significant age related decrease of serum IL-4 levels. Conclusions: This is the first report assessing the distributions and reference values of inflammatory biomarker levels in a large scaled cohort. We observed age related changes of some of the biomarkers. We could not detect any differences of the studied biomarker values between the toner exposed and the control workers. <s> BIB010 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Latent tubercular infection (LTBI) in children, as in adults, lacks a diagnostic gold standard. Until some time ago the only available test for the diagnosis of LTBI was the tuberculin skin test (TST) but it has drawbacks such as poor sensitivity and specificity (Pai et al., 2008). QuantiFERON-TB Gold In-Tube (QFT-IT) has been approved for clinical use by the Food and Drug Administration and its major benefit is a high specificity even in BCG-vaccinated subjects (Pai et al., 2008). The performance of QFT-IT in children has not been extensively explored but preliminary data suggest that it performs better than TST (Pai et al., 2008). Few studies present <s> BIB011 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> PURPOSE ::: To summarize the results of a 4-year period in which endorectal magnetic resonance imaging (MRI) was considered for all men referred for salvage radiation therapy (RT) at a single academic center; to describe the incidence and location of locally recurrent disease in a contemporary cohort of men with biochemical failure after radical prostatectomy (RP), and to identify prognostic variables associated with MRI findings in order to define which patients may have the highest yield of the study. ::: ::: ::: METHODS AND MATERIALS ::: Between 2007 and 2011, 88 men without clinically palpable disease underwent eMRI for detectable prostate-specific antigen (PSA) after RP. The median interval between RP and eMRI was 32 months (interquartile range, 14-57 months), and the median PSA level was 0.30 ng/mL (interquartile range, 0.19-0.72 ng/mL). Magnetic resonance imaging scans consisting of T2-weighted, diffusion-weighted, and dynamic contrast-enhanced imaging were evaluated for features consistent with local recurrence. The prostate bed was scored from 0-4, whereby 0 was definitely normal, 1 probably normal, 2 indeterminate, 3 probably abnormal, and 4 definitely abnormal. Local recurrence was defined as having a score of 3-4. ::: ::: ::: RESULTS ::: Local recurrence was identified in 21 men (24%). Abnormalities were best appreciated on T2-weighted axial images (90%) as focal hypointense lesions. Recurrence locations were perianastomotic (67%) or retrovesical (33%). The only risk factor associated with local recurrence was PSA; recurrence was seen in 37% of men with PSA >0.3 ng/mL vs 13% if PSA ≤0.3 ng/mL (P<.01). The median volume of recurrence was 0.26 cm(3) and was directly associated with PSA (r=0.5, P=.02). 
The correlation between MRI-based tumor volume and PSA was even stronger in men with positive margins (r=0.8, P<.01). ::: ::: ::: CONCLUSIONS ::: Endorectal MRI can define areas of local recurrence after RP in a minority of men without clinical evidence of disease, with yield related to PSA. Further study is necessary to determine whether eMRI can improve patient selection and success of salvage RT. <s> BIB012 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> BackgroundThe importance of apolipoprotein E (APOE) in lipid and lipoprotein metabolism is well established. However, the impact of APOE polymorphisms has never been investigated in an Algerian population. This study assessed, for the fist time, the relationships between three APOE polymorphisms (epsilon, rs439401, rs4420638) and plasma lipid concentrations in a general population sample from Algeria.MethodsThe association analysis was performed in the ISOR study, a representative sample of the population living in Oran (787 subjects aged between 30 and 64). Polymorphisms were considered both individually and as haplotypes.ResultsIn the ISOR sample, APOE ϵ4 allele carriers had higher plasma triglyceride (p=0.0002), total cholesterol (p=0.009) and LDL-cholesterol (p=0.003) levels than ϵ3 allele carriers. No significant associations were detected for the rs4420638 and rs439401 SNPs. Linkage disequilibrium and haplotype analyses confirmed the respectively deleterious and protective impacts of the ϵ4 and ϵ2 alleles on LDL-cholesterol levels and showed that the G allele of the rs4420638 polymorphism may exert a protective effect on LDL-cholesterol levels in subjects bearing the APOE epsilon 4 allele.ConclusionOur results showed that (i) the APOE epsilon polymorphism has the expected impact on the plasma lipid profile and (ii) the rs4420638 G allele may counterbalance the deleterious effect of the ϵ4 allele on LDL-cholesterol levels in an Algerian population. <s> BIB013 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Background Most commonly used outcome measures in peripheral arterial disease (PAD) provide scarce information about achieved patient benefit. Therefore, patient-reported outcome measures have become increasingly important as complementary outcome measures. The abundance of items in most health-related quality of life instruments makes everyday clinical use difficult. This study aimed to develop a short version of the 25-item Vascular Quality of Life Questionnaire (VascuQoL-25), a PAD-specific health-related quality of life instrument. Methods The study recruited 129 individuals with intermittent claudication and 71 with critical limb ischemia from two university hospitals. Participants were a mean age of 70 ± 9 years, and 57% were men. All patients completed the original VascuQoL when evaluated for treatment, and 127 also completed the questionnaire 6 months after a vascular procedure. The VascuQoL-25 was reduced based on cognitive interviews and psychometric testing. The short instrument, the VascuQoL-6, was tested using item-response theory, exploring structure, precision, item fit, and targeting. A subgroup of 21 individuals with intermittent claudication was also tested correlating the results of VascuQoL-6 to the actual walking capacity, as measured using global positioning system technology. 
Results On the basis of structured psychometric testing, the six most informative items were selected (VascuQoL-6) and tested vs the original VascuQoL-25. The correlation between VascuQoL-25 and VascuQoL-6 was r = 0.88 before intervention, r = 0.96 after intervention, and the difference was r = 0.91 ( P r = 0.72; P Conclusions VascuQoL-6 is a valid and responsive instrument for the assessment of health-related quality of life in PAD. The main advantage is the compact format that offers a possibility for routine use in busy clinical settings. <s> BIB014 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> The purpose of this study was to determine the usefulness of color doppler sonography and resistivity index (RI) in differentiating liver tumors. The study was carried out in the Department of Radiology and Imaging, Mymensingh Medical College Hospital, and Institute of Nuclear Medicine and Allied Sciences (INMAS), Mymensingh, Bangladesh, during the period of July 2009 to June 2011. Total 50 consecutive cases were studied. Among them 27 were hepatocellular carcinomas, 19 were metastatic tumors, 03 were hemangiomas and 01 was hepatic adenoma. Doppler sonographic findings were then correlated, case by case, with final diagnosis- either pathologically by USG guided Fine-needle aspiration or by other imaging modalities (e.g., CT scan and RBC liver scan for hepatic hemangioma). The RI value of hepatocellular carcinoma was 0.69±0.096 and in metastatic tumors 0.73±0.079. The results showed no significant difference between the RI of hepatocellular carcinomas and metastatic liver tumors but it was significantly higher than benign lesions (p<0.05). RI of hemangiomas was 0.49±0.64 and in one hepatic adenoma was 0.65. When RI was <0.6 for benign liver tumors and ≥0.6 for malignant tumors we calculated a sensitivity of 89.14%, specificity of 66.7%, accuracy of 85.71% positive predictive value of 97.62% and negative predictive value of 28.57% in differentiating benign and malignant tumors. Thirty four of 46(73.9%) malignant lesions had intratumoral flow and 25% of benign lesions also showed intratumoral flow. The difference of intratumoral flow between malignant and benign lesions was significant (p<0.01). Two of 4 benign lesions (50%) had peritumoral vascularity where 6% of the malignant tumors showed peritumoral vascularity. In conclusion, combined studies of the type of intra-and peri-tumoral flow signals in CDFI and the parameter of RI would be more helpful in the differential diagnosis of benign and malignant liver tumors. <s> BIB015 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> This article discusses the rationale of the Society of Radiologists in Ultrasound 2012 consensus statement and the new discriminatory values for visualization of a gestational sac, a yolk sac, an embryo, and cardiac activity; reviews normal US findings and those that are suspicious for or definitive of pregnancy failure in the early first trimester; and describes the implications of “pregnancy of unknown location” and “pregnancy of unknown viability.” <s> BIB016 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. 
<s> ObjectiveTo assess the ability of Glasgow Aneurysm Score in predicting postoperative mortality for ruptured aortic aneurysm which may assist in decision making regarding the open surgical repair of an individual patient.MethodsA total of 121 patients diagnosed of ruptured abdominal aortic aneurysm who underwent open surgery in our hospital between 1999 and 2013 were included. The Glasgow Aneurysm Score for each patient was graded according to the Glasgow Aneurysm Score (Glasgow Aneurysm Score = age in years + 17 for shock + 7 for myocardial disease + 10 for cerebrovascular disease + 14 for renal disease). The groups were divided as Group 1 (containing the patients who died) and Group 2 (the patients who were discharged). The Glasgow Aneurysm Scores amongst the groups were compared.ResultsOut of 121 patients, 108 (89.3%) were males and 13 (10.7%) were females. The in-hospital mortality was 48 patients (39.7%). The Glasgow Aneurysm Score was 84.15 ± 15.94 in Group 1 and 75.14 ± 14.67 in Group 2 which reveal... <s> BIB017 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Abstract Anti-smoking legislation has been associated with an improvement in health indicators. Since the cadmium (Cd) body burden in the general population is markedly increased by smoke exposure, we analyzed the impact of the more restrictive legislation that came into force in Spain in 2011 by measuring Cd and cotinine in first morning urine samples from 83 adults in Madrid (Spain) before (2010) and after (2011) introduction of this law. Individual pair-wise comparisons showed a reduction of creatinine corrected Cotinine and Cd levels for non-active smokers, i. e. those which urinary cotinine levels are below 50 μg/L. After the application of the stricter law, cotinine levels in urine only decreased in non-active smokers who self-reported not to be exposed to second-hand smoke. The reduction in second hand smoke exposure was significantly higher in weekends (Friday to Sunday) than in working days (Monday to Thursday). The decrease in U-Cd was highly significant in non-active smokers and, in general, correlated with lower creatinine excretion. Therefore correction by creatinine could bias urinary Cd results, at least for cotinine levels higher than 500 μg/L. The biochemical/toxicological benefits detected herein support the stricter application of anti-smoking legislation and emphasize the need to raise the awareness of the population as regards exposure at home. <s> BIB018 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Aim : To study the usefulness of color or power Doppler ultrasound (US) in the pre-surgical evaluation of skin melanoma, and to correlate the Doppler characteristics with the appearance on high frequency ultrasound strain elastography (SE) in the preoperative evaluation of cutaneous melanoma. Materials and method : The study included 42 cutaneous melanoma lesions in 39 adult subjects examined between September 2011 and January 2015. Doppler US features (the presence and aspect of vascularization, and the number of vascular pedicles) and elasticity by strain elastography were evaluated together with the pathological results. Results : The melanoma lesions presented hyper-vascularization, with multiple vascular pedicles and stiff appearance. 
Significant correlations between the thickness of the tumor, measured histopathologically by the Breslow index, and the degree of vascularization (p=0.0167), and number of vascular pedicles (p=0.0065) were identified. Strong correlations between the SE appearance and vascularization on one hand, and SE and the number of vascular pedicles were also identified (p<0.001). Conclusion : Our study demonstrates that Doppler US and SE offer useful information for THE preoperative evaluation of cutaneous melanoma and may contribute to better defining the long term prognosis. <s> BIB019 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> The aim of the study was to assess the use of echocardiographic measurements in newborns of diabetic mothers. Maternal diabetes is associated with an increased risk of morbidity and mortality in pregnancy and in perinatal period. Thirty-five newborns of diabetic mothers (pre- gestational or gestational diabetes; case group) and thirty-five controls (control group), born between January 2009 and December 2012 in Cluj-Napoca (north-west of Romania), were included in this study. A Logiq e ultrasound with an 8 MHz transducer was used to measure echocardiographic parameters. The interventricular septal thickness in case group was higher as compared with control group (at end systole = 6.61 ± 1.64 mm vs. 5.75 ± 0.95 mm, p = 0.0371; at end diastole = 4.61 ± 1.59 mm vs. 3.42 ± 0.70 mm, p = 0.0001). A risk ratio of 2.333 (0.656, 8.298) was obtained for septal hypertrophy. A higher proportion of septal hypertrophy was identified in the newborns of mothers with gestational diabetes compared to the newborns of pregestational diabetes mothers (p = 0.0058). The mean birth weight was significantly higher in newborns of diabetic mothers (3695.57 ± 738.63) as compared with controls (3276.14 ± 496.51; p = 0.0071). Infants born to mothers with diabetes proved to be at a high risk of septal hypertrophy. <s> BIB020 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> OBJECTIVES ::: Biomarker research is an important area of investigation in Gaucher disease, caused by an inherited deficiency of a lysosomal enzyme, glucocerebrosidase. We evaluated the usefulness of neopterin, as a novel biomarker reflecting chronic inflammation and immune system activation in Gaucher disease and analysed its evolution in response to enzyme replacement therapy (ERT). ::: ::: ::: METHODS ::: Circulating plasma neopterin levels in 31 patients with non-neuronopathic Gaucher disease were measured before and after the onset of ERT and were compared with those of 18 healthy controls. Plasma chitotriosidase activity was also monitored, as a reference biomarker, against which we evaluated the evolution of neopterin. ::: ::: ::: RESULTS ::: Neopterin levels were significantly increased in treatment-naïve patients (mean 11.90 ± 5.82 nM) compared with controls (6.63 ± 5.59 nM, Mann-Whitney U test P = 0.001), but returned to normal levels (6.92 ± 4.66 nM) following ERT. Investigating the diagnostic value of neopterin by receiver operating characteristic analysis, we found a cut-off value of 7.613 nM that corresponds to an area under the curve of 0.780 and indicates a good discrimination capacity, with a sensitivity of 0.774 and a specificity of 0.778. 
::: ::: ::: DISCUSSION ::: Our results suggest that measurement of circulating neopterin may be considered as a novel test for the confirmation of diagnosis and monitoring of the efficacy of therapeutic intervention in Gaucher disease. Plasma neopterin levels reflect the global accumulation and activation of Gaucher cells and the extent of chronic immune activation in this disorder. ::: ::: ::: CONCLUSION ::: Neopterin may be an alternative storage cell biomarker in Gaucher disease, especially in chitotriosidase-deficient patients. <s> BIB021 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Statistical editors of the Malaysian Journal of Medical Sciences (MJMS) must go through many submitted manuscripts, focusing on the statistical aspect of the manuscripts. However, the editors notice myriad styles of reporting the statistical results, which are not standardised among the authors. This could be due to the lack of clear written instructions on reporting statistics in the guidelines for authors. The aim of this editorial is to briefly outline reporting methods for several important and common statistical results. It will also address a number of common mistakes made by the authors. The editorial will serve as a guideline for authors aiming to publish in the MJMS as well as in other medical journals. <s> BIB022 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> BACKGROUND ::: Chitotriosidase is an enzyme secreted by activated macrophages. This study aims to investigate the usefulness of circulating chitotriosidase activity as a marker of inflammatory status in patients with critical limb ischemia (CLI). ::: ::: ::: MATERIALS AND METHODS ::: An observational gender-matched case-control study was conducted on patients hospitalized with the primary diagnosis of CLI, as well as a control group. The control group consisted of healthy volunteers. ::: ::: ::: RESULTS ::: Forty-three patients were included in each group. Similar demographic characteristics (median age of 60-62 years and overweight) were observed in both groups. Chitotriosidase activity ranged from 110 nmol/ml/hr to 1530 nmol/ml/hr in the CLI group and from 30 nmol/ml/hr to 440 nmol/ml/hr in the control group; demonstrating significantly elevated values in the CLI group (p<0.001). Median plasma chitotriosidase activity was significantly elevated in smokers compared with non-smokers in both groups (p<0.05). However, this activity had higher values in CLI than in control subjects. Receiver operating characteristic (ROC) analysis was then performed in order to verify the diagnostic accuracy of chitotriosidase as an inflammatory biomarker in CLI. ::: ::: ::: CONCLUSION ::: Circulating chitotriosidase is a test which can potentially be used for the monitoring of CLI patients without other inflammatory conditions. However, the interpretation of elevated values must take into account the inflammatory response induced by tobacco exposure. <s> BIB023 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. 
<s> Purpose To assess the accuracy of staging positron emission tomography (PET)/computed tomography (CT) in detecting distant metastasis in patients with local-regionally advanced cervical and high-risk endometrial cancer in the clinical trial by the American College of Radiology Imaging Network (ACRIN) and the Gynecology Oncology Group (GOG) (ACRIN 6671/GOG 0233) and to compare central and institutional reader performance. Materials and Methods In this prospective multicenter trial, PET/CT and clinical data were reviewed for patients enrolled in ACRIN 6671/GOG 0233. Two central readers, blinded to site read and reference standard, reviewed PET/CT images for distant metastasis. Central review was then compared with institutional point-of-care interpretation. Reference standard was pathologic and imaging follow-up. Test performance for central and site reviews of PET/CT images was calculated and receiver operating characteristic analysis was performed. Generalized estimating equations and nonparametric bootstrap procedure for clustered data were used to assess statistical significance. Results There were 153 patients with cervical cancer and 203 patients with endometrial cancer enrolled at 28 sites. Overall prevalence of distant metastasis was 13.7% (21 of 153) for cervical cancer and 11.8% (24 of 203) for endometrial cancer. Central reader PET/CT interpretation demonstrated sensitivity, specificity, positive predictive value (PPV), and negative predictive value of 54.8%, 97.7%, 79.3%, and 93.1% for cervical cancer metastasis versus 64.6%, 98.6%, 86.1%, and 95.4% for endometrial cancer, respectively. By comparison, local institutional review demonstrated sensitivity, specificity, PPV, and negative predictive value of 47.6%, 93.9%, 55.6%, and 91.9% for cervical cancer metastasis and 66.7%, 93.9%, 59.3%, and 95.5% for endometrial cancer, respectively. For central readers, the specificity and PPV of PET/CT detection of cervical and endometrial cancer metastases were all significantly higher compared with that of local institutional review (P < .05). Central reader area under the receiver operating characteristic curve (AUC) values were 0.78 and 0.89 for cervical and endometrial cancer, respectively; these were not significantly different from local institutional AUC values (0.75 and 0.84, respectively; P > .05 for both). Conclusion FDG PET/CT demonstrates high specificity and PPV for detecting distant metastasis in cervical and endometrial cancer and should be included in the staging evaluation. Blinded central review of imaging provides improved specificity and PPV for the detection of metastases and should be considered for future oncologic imaging clinical trials. © RSNA, 2017. <s> BIB024 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Descriptive Metrics. <s> Screening in women has decreased the incidence and mortality of cervical cancer. Precancerous cervical lesions (cervical intraepithelial neoplasias) and cervical carcinomas are strongly associated with sexually-transmitted high-risk human papillomavirus (HPV) infection, which causes more than 99% of cervical cancers. Screening methods include cytology (Papanicolaou test) and HPV testing, alone or in combination. The American Academy of Family Physicians and the U.S. Preventive Services Task Force recommend starting screening in immunocompetent, asymptomatic women at 21 years of age. Women 21 to 29 years of age should be screened every three years with cytology alone. 
Women 30 to 65 years of age should be screened every five years with cytology plus HPV testing or every three years with cytology alone. Screening is not recommended for women younger than 21 years or in women older than 65 years with an adequate history of negative screening results. The U.S. Preventive Services Task Force is in the process of updating its guidelines. In 2015, the American Society for Colposcopy and Cervical Pathology and the Society of Gynecologic Oncology published interim guidance for the use of primary HPV testing. <s> BIB025
A cohort cross-sectional study is frequently used to establish the normal range of values. Whenever data follow the normal distribution (normality being checked with an appropriate statistical test), the results are summarized as mean and standard deviation; otherwise, the median and percentiles are the adequate descriptive metrics.

Purposes of a diagnostic test, with examples:
- Diagnosis: Renal Doppler resistive index for hemorrhagic shock in polytrauma patients BIB006
- Monitoring (a repeated test that allows assessing the efficacy of an intervention): glycohemoglobin (A1c Hb) for the overall glycemic control of patients with diabetes BIB007
- Prognosis (assessment of an outcome or of disease progression): PET/CT scan in the identification of distant metastasis in cervical and endometrial cancer BIB024
- Screening (detection of disease in apparently asymptomatic persons): cytology test for screening of cervical uterine cancer BIB025

Types of diagnostic test results, with examples:
- Positive/negative or abnormal/normal: (i) endovaginal ultrasound in the diagnosis of normal intrauterine pregnancy BIB016; (ii) QuantiFERON-TB test for the determination of tubercular infection BIB011
- Qualitative ordinal: (i) prostate bed after radiation therapy: definitely normal/probably normal/uncertain/probably abnormal/definitely abnormal BIB012; (ii) scores: Apgar score (assessment of infants after delivery), from 0 (no activity, pulse absent, floppy grimace, skin blue or pale, and respiration absent) to 10 (active baby, pulse over 100 bpm, prompt response to stimulation, pink skin, and vigorous cry); Glasgow coma score: eye opening (from 1 = no eye opening to 4 = spontaneous), verbal response (from 1 = none to 5 = patient oriented), and motor response (from 1 = none to 6 = obeys commands); Alvarado score (the risk of appendicitis), which evaluates 6 clinical items and 2 laboratory measurements with an overall score from 0 (no appendicitis) to 10 ("very probable" appendicitis); and sonoelastographic scoring systems in the evaluation of lymph nodes; (iii) scales: quality-of-life scales (SF-36 BIB001, EQ-5D, VascuQoL BIB003 BIB014, and CIVIQ BIB005) and pain scales (e.g., 0 (no pain) to 10 (the worst pain)) BIB002
- Qualitative nominal: (i) apolipoprotein E gene (ApoE) genotypes: E2/E2, E2/E3, E2/E4, E3/E3, E3/E4, and E4/E4 BIB013; (ii) SNPs (single-nucleotide polymorphisms) of IL-6: at position −174 (rs1800795), −572 (rs1800796), −596 (rs1800797), and T15A (rs13306435) BIB008
- Quantitative discrete: (i) number of bacteria in urine or other fluids; (ii) number of products contaminated with different bacteria BIB009; (iii) Glasgow aneurysm score (= age in years + 17 for shock + 7 for myocardial disease + 10 for cerebrovascular disease + 14 for renal disease) BIB017
- Quantitative continuous: (i) biomarkers: chitotriosidase BIB023, neopterin BIB021, urinary cotinine BIB018, and urinary cadmium levels BIB018; (ii) measurements: resistivity index BIB015, ultrasound thickness BIB019, and interventricular septal thickness BIB020 BIB022

Continuous data are reported with one or two decimals (sufficient to assure the accuracy of the result), while P values are reported with four decimals whether or not the significance threshold was reached. These norms of good practice are not always seen in the scientific literature, as published studies are frequently more complex (e.g., investigation of changes in the values of biomarkers with age, or comparison of healthy subjects with subjects with a specific disease). One example is given by Koch and Singer BIB004, who aimed to determine the range of normal values of plasma B-type natriuretic peptide (BNP) from infancy to adolescence. One hundred ninety-five healthy subjects, infants, children, and adolescents were evaluated.
Even though the values of BNP varied considerably, the results were improperly reported as mean (standard deviation) for the investigated subgroups, although the subgroups were correctly compared using nonparametric tests BIB004. Taheri et al. compared the serum levels of hepcidin (a low-molecular-weight protein with a role in iron metabolism) and prohepcidin in hemodialysis patients (44 patients) and healthy subjects (44 subjects). Taheri et al. reported the values of hepcidin and prohepcidin as mean and standard deviation, suggesting a normal distribution of the data, but compared them using nonparametric tests, which implies the absence of a normal distribution of the experimental data. Furthermore, they correlated these two biomarkers even though no reason exists for this analysis, since one is derived from the other. Zhang et al. determined the reference values of plasma pro-gastrin-releasing peptide (ProGRP) levels in healthy Han Chinese adults. They tested the distribution of ProGRP, identified that it was not normally distributed, and correctly reported the medians, ranges, and 2.5th, 5th, 50th, 95th, and 97.5th percentiles on two subgroups by age. Spearman's correlation coefficient was correctly used to test the relation between ProGRP and age, but the symbol used for this correlation coefficient was r (the symbol attributed to Pearson's correlation coefficient) instead of ρ. The differences in ProGRP among groups were accurately tested with the Mann-Whitney test (two groups) and the Kruskal-Wallis test (more than two groups). The authors reported the age-dependent reference interval for this specific population, without significant differences between genders. The influence of toner particles on seven biomarkers (serum C-reactive protein (CRP), IgE, interleukins (IL-4, IL-6, and IL-8), serum interferon-γ (IFN-γ), and urine 8-hydroxy-2′-deoxyguanosine (8-OHdG)) was investigated by Murase et al. BIB010. They conducted a prospective cohort study (toner-exposed and unexposed) with a five-year follow-up and measured the biomarkers annually. The reference values of the studied biomarkers were correctly reported as medians and percentiles.
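To make the reporting rule above concrete, the following minimal Python sketch checks normality with the Shapiro-Wilk test and then prints either mean (SD) or median with percentiles, with the P value given to four decimals. The simulated BNP-like data, the variable names, and the 0.05 threshold are illustrative assumptions, not values from any of the cited studies.

```python
# Minimal sketch: choose the descriptive metrics according to a normality test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
bnp = rng.lognormal(mean=3.0, sigma=0.6, size=195)   # skewed, BNP-like toy data

w_stat, p_value = stats.shapiro(bnp)                 # Shapiro-Wilk normality test
print(f"Shapiro-Wilk: W = {w_stat:.4f}, P = {p_value:.4f}")

if p_value > 0.05:
    # Data compatible with a normal distribution: report mean and standard deviation
    print(f"mean (SD) = {np.mean(bnp):.2f} ({np.std(bnp, ddof=1):.2f})")
else:
    # Non-normal data: report median and percentiles instead
    p2_5, p25, p50, p75, p97_5 = np.percentile(bnp, [2.5, 25, 50, 75, 97.5])
    print(f"median = {p50:.2f}, IQR = {p25:.2f}-{p75:.2f}, "
          f"2.5th-97.5th percentiles = {p2_5:.2f}-{p97_5:.2f}")
```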
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. Coefficient of variation (CV) <s> There are few branches of the Theory of Evolution which appear to the mathematical statistician so much in need of exact treatment as those of Regression, Heredity, and Panmixia. Round the notion of panmixia much obscurity has accumulated, owing to the want of precise definition and quantitative measurement. The problems of regression and heredity have been dealt with by Mr. Francis Galton in his epochmaking work on ‘Natural Inheritance,’ but, although he has shown exact methods of dealing, both experimentally and mathematically, with the problems of inheritance, it does not appear that mathematicians have hitherto developed his treatment, or that biologists and medical men have yet fully appreciated that he has really shown how many of the problems which perplex them may receive at any rate a partial answer. A considerable portion of the present memoir will be devoted to the expansion and fuller development of Mr. Galton’s ideas, particularly their application to the problem of bi-parental inheritance . At the same time I shall endeavour to point out how the results apply to some current biological and medical problems. In the first place, we must definitely free our minds, in the present state of our knowledge of the mechanism of inheritance and reproduction, of any hope of reaching a mathematical relation expressing the degree of correlation between individual parent and individual offspring. The causes in any individual case of inheritance are far too complex to admit of exact treatment; and up to the present the classification of the circumstances under which greater or less degrees of correlation between special groups of parents and offspring may be expected has made but little progress. This is largely owing to a certain prevalence of almost metaphysical speculation as to the causes of heredity, which has usurped the place of that careful collection and elaborate experiment by which alone sufficient data might have been accumulated, with a view to ultimately narrowing and specialising the circumstances under which correlation was measured. We must proceed from inheritance in the mass to inheritance in narrower and narrwoer classes, rather than attempt to build up general rules on the observation of individual instances. Shortly, we must proceed by the method of statistics, rather than by the consideration of typical cases. It may seem discouraging to the medical practitioner, with the problem before him of inheritance in a particular family, to be told that nothing but averages, means, and probabilities with regard to large classes can as yet be scientifically dealt with ; but the very nature of the distribution of variation, whether healthy or morhid, seems to indicate that we are dealing with that sphere of indefinitely numerous small causes, which in so many other instances has shown itself only amenable to the calculus of chance, and not to any analysis of the individual instance. On the other hand, the mathematical theory wall be of assistance to the medical man by answering, inter alia, in its discussion of regression the problem as to the average effect upon the offspring of given degrees of morbid variation in the parents. It may enable the physician, in many cases, to state a belief based on a high degree of probability, if it offers no ground for dogma in individual cases. One of the most noteworthy results of Mr. 
Francis Galton’s researches is his discovery of the mode in which a population actually reproduces itself by regression and fraternal variation. It is with some expansion and fuller mathematical treatment of these ideas that this memoir commences. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. Coefficient of variation (CV) <s> (Subsections not included) Chapter 1: 1.1 Introduction. 1.2 Terminology. 1.3 Early development of radioimmunoassay. 1.4 Basic principles of binding assays. 1.5 Binder dilution curves and standard curves. 1.6 Methods for plotting the standard curve. 1.7 The measurement of K value. Chapter 2: 2.1 The need for purified ligand. 2.2 Availability of pure ligand. 2.3 Dissimilarity between purified ligand and endogenous ligand. 2.4 Standards. 2.5 Storage of materials used in binding assays. Chapter 3: 3.1 Radioactive isotopes. 3.2 Counting of radioactive isotopes. 3.3 Choice of counter. 3.4 Some practical aspects of isotope counting. 3.5 Essential characteristics of a tracer. 3.6 Preparation of tracers. 3.7 Iodinated tracers. 3.8 Variations on the use of radiolabelled tracers. Chapter 4: 4.1 Particle labels. 4.2 Enzyme labels (enzymoimmunoassay, EIA). 4.3 Fluorescent labels (fluoroimmunoassay, FIA). 4.4 Luminescent labels. 4.5 Advantages and disadvantages of non-isotopic labels in immunoassays. 4.6 Conclusions: the place of non-isotopic binding assays. Chapter 5: 5.1 Antibodies and the immune response. 5.2 Preparation of Antisera for use in RIA. 5.3 'Monoclonal' antibodies. 5.4 Cell receptors. 5.5 Circulating binding proteins. 5.6 Assays for the detection of endogenous antibodies, circulating binding proteins and receptors. Chapter 6: 6.1 Efficiency of separation procedures. 6.2 Practicality of separation procedures. 6.3 Methods of separation of bound and free ligand. 6.4 Immunometric techniques. Chapter 7: 7.1 General aspects of extraction procedures. 7.2 Extraction using particulate adsorbents. 7.3 Extraction with immunoadsorbents. 7.4 Extraction with organic solvents. 7.5 Dissociation procedures. 7.6 Measurement of 'free' hormone or drug. 7.7 Conclusions - the elimination of extraction procedures. 7.8 Sample collection and transport for immunoassay. Chapter 8: 8.1 Calculation of results by manual extrapolation. 8.2 Data transformation of the standard curve. 8.3 The logit transformation. 8.4 Identification of outliers. 8.5 Estimation of confidence limits to the result of an unknown. 8.6 Computer calculation of results. 8.7 Calculation of results of labelled antibody assays. 8.8 Presentation of results. Chapter 9: 9.1 Definition of sensitivity. 9.2 Methods of increasing the sensitivity of a labelled antigen immunoassay. 9.3 Methods of increasing the sensitivity of a labelled antibody assay (immunometric assays). 9.4 Methods of decreasing the sensitivity of an assay. 9.5 The low-dose hook effect. 9.6 Targeting of binding assays - the importance of ranges. 9.7 Optimisation of an assay by theoretical analysis. 9.8 Conclusions. Chapter 10: 10.1 Definition of specificity. 10.2 Specific non-specificity. 10.3 Non-specific non-specificity. Chapter 11: 11.1 Definitions. 11.2 Factors affecting precision. 11.3 Quality control to monitor the precision of a binding assay. 11.4 Practical use of a quality-control scheme. 11.5 External quality control schemes. <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. 
Coefficient of variation (CV) <s> The coefficient of variation is often used as a guide of the repeatability of measurements in clinical trials and other medical work. When possible, one makes repeated measurements on a set of individuals to calculate the relative variability of the test with the understanding that a reliable clinical test should give similar results when repeated on the same patient. There are times, however, when repeated measurements on the same patient are not possible. Under these circumstances, to combine results from different clinical trials or test sites, it is necessary to compare the coefficients of variation of several clinical trials. Using the work of Miller, we develop a general statistic for testing the hypothesis that the coefficients of variation are the same for k populations, with unequal sample sizes. This statistic is invariant under the choice of the order of the populations, and is asymptotically χ2. We provide an example using data from Yang and HayGlass. We compare the size and the power of the test to that of Bennett, Doornbos and Dijkstra and a statistic based on Hedges and Olkin. <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. Coefficient of variation (CV) <s> An approximate confidence interval is proposed for a robust measure of relative dispersion-the coefficient of quartile variation. The proposed method provides an alternative to interval estimates for other measures of relative dispersion. <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. Coefficient of variation (CV) <s> Inference for the coefficient of variation in normal distributions is considered. An explicit estimator of a coefficient of variation that is shared by several populations with normal distributions is proposed. Methods for making confidence intervals and statistical tests, based on McKay's approximation for the coefficient of variation, are provided. Exact expressions for the first two moments of McKay's approximation are given. An approximate F-test for equality of a coefficient of variation that is shared by several normal distributions and a coefficient of variation that is shared by several other normal distributions is introduced. <s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. Coefficient of variation (CV) <s> SummaryTo determine the laboratory reproducibility of urine N-telopeptide and serum bone-specific alkaline phosphatase measurements, we sent identical specimens to six US commercial labs over an 8-month period. Longitudinal and within-run laboratory reproducibility varied substantially. Efforts to improve the reproducibility of these tests are needed.IntroductionWe assessed the laboratory reproducibility of urine N-telopeptide (NTX) and serum bone-specific alkaline phosphatase (BAP).MethodsSerum and urine were collected from five postmenopausal women, pooled, divided into identical aliquots, and frozen. To evaluate longitudinal reproducibility, identical specimens were sent to six US commercial labs on five dates over an 8-month period. To evaluate within-run reproducibility, on the fifth date, each lab was sent five identical specimens. Labs were unaware of the investigation.ResultsLongitudinal coefficients of variation (CVs) ranged from 5.4% to 37.6% for NTX and from 3.1% to 23.6% for BAP. 
Within-run CVs ranged from 1.5% to 17.2% for NTX. Compared to the Osteomark NTX assay, the Vitros ECi NTX assay had significantly higher longitudinal reproducibility (mean CV 7.2% vs. 30.3%, p < 0.0005) and within-run reproducibility (mean CV 3.5% vs. 12.7%, p < 0.0005).ConclusionsReproducibility of urine NTX and serum BAP varies substantially across US labs. <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. Coefficient of variation (CV) <s> Abstract The need to measure and benchmark university governance practices at institutional level has been growing, yet there is a lack of a comprehensive, weighted indicator system to facilitate the process. The paper discusses the development of university governance indicators and their weighting system using a three-round Delphi method. Discussions, a questionnaire, and interviews were used in Round 1 to 3, respectively, to collect experts’ opinions to construct the indicator list and indicator weights, and to shed light on the divergence of expert judgements on some aspects. Non-parametric statistical techniques were applied to analyse the survey data. Ninety-one indicators grouped in five dimensions of university governance, namely Management and Direction, Participation, Accountability, Autonomy and Transparency, were proposed and rated in terms of their importance. The preliminary results show relatively high levels of importance for all of the proposed indicators, thus none was excluded. The weighting of the indicators and factors vary remarkably. Among the five dimensions, Participation is found to be the least important; experts’ consensus is found to be low in Participation and Transparency. The study results also provide important implications to researchers and practitioners in university governance. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. Coefficient of variation (CV) <s> The problem of testing the equality of coefficients of variation of independent normal populations is considered. For comparing two coefficients, we consider the signed-likelihood ratio test (SLRT) and propose a modified version of the SLRT, and a generalized test. Monte Carlo studies on the type I error rates of the tests indicate that the modified SLRT and the generalized test work satisfactorily even for very small samples, and they are comparable in terms of power. Generalized confidence intervals for the ratio of (or difference between) two coefficients of variation are also developed. A modified LRT for testing the equality of several coefficients of variation is also proposed and compared with an asymptotic test and a simulation-based small sample test. The proposed modified LRTs seem to be very satisfactory even for samples of size three. The methods are illustrated using two examples. <s> BIB008 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Variation Analysis. Coefficient of variation (CV) <s> ABSTRACTThis article examines confidence intervals for the single coefficient of variation and the difference of coefficients of variation in the two-parameter exponential distributions, using the method of variance of estimates recovery (MOVER), the generalized confidence interval (GCI), and the asymptotic confidence interval (ACI). 
In simulation, the results indicate that coverage probabilities of the GCI maintain the nominal level in general. The MOVER performs well in terms of coverage probability when data only consist of positive values, but it has wider expected length. The coverage probabilities of the ACI satisfy the target for large sample sizes. We also illustrate our confidence intervals using a real-world example in the area of medical science. <s> BIB009
, also known as relative standard deviation (RSD), is a standardized measure of dispersion used to express the precision of an assay (an intra-assay CV, obtained when the same sample is assayed in duplicate, below 10% is considered acceptable; an interassay CV, obtained by comparing results across assay runs, below 15% is deemed acceptable) BIB002. The coefficient of variation was introduced by Karl Pearson in 1896 BIB001 and can also be used to test the reliability of a method (the smaller the CV value, the higher the reliability), to compare methods (the smallest CV belongs to the better method), or to compare variables expressed in different units BIB009. The CV is defined as the ratio of the standard deviation to the mean, expressed as a percentage [116], and is correctly calculated only on quantitative data measured on a ratio scale. The coefficient of quartile variation/dispersion (CQV/CQD) was introduced as a preferred measure of dispersion when data do not follow the normal distribution and is defined from the third and first quartiles as (Q3 − Q1)/(Q3 + Q1) × 100 BIB004. In survey analysis, the CQV is used as a measure of convergence in experts' opinions BIB007. The confidence interval associated with the CV is expected to be reported to provide readers with sufficient information for a correct interpretation of the results, and several online implementations are available (Table 5). Inference on CVs can be made using specific statistical tests according to the distribution of the data. For normal distributions, tests are available to compare two BIB005 or more than two CVs (the Feltz and Miller test BIB003 or the Krishnamoorthy and Lee test BIB008, the latter also implemented in R). Reporting CVs with associated 95% confidence intervals allows a proper interpretation of the point estimate. Schafer et al. BIB006 investigated the laboratory reproducibility of urine N-telopeptide (NTX) and serum bone-specific alkaline phosphatase (BAP) measurements across six labs over eight months and correctly reported the CVs with associated 95% confidence intervals. Furthermore, they also compared the CVs between two assays and between labs and highlighted the need for improvements in the analytical precision of both NTX and BAP biomarkers BIB006. They concluded by stressing the importance of making laboratory performance reports available to clinicians and institutions, along with the need for proficiency testing and standardized guidelines to improve marker reproducibility BIB006.
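As a rough illustration of these definitions, the sketch below computes the CV and the CQV for a set of repeated measurements and attaches a nonparametric bootstrap 95% confidence interval. This is only one of several possible interval methods (McKay's approximation BIB005 or the online calculators mentioned above are alternatives), and the data, seeds, and sample size are simulated assumptions.

```python
# Illustrative sketch (simulated data): CV, CQV, and bootstrap 95% confidence intervals.
import numpy as np

def cv(x):
    """CV = SD / mean * 100 (ratio-scale quantitative data only)."""
    return np.std(x, ddof=1) / np.mean(x) * 100

def cqv(x):
    """CQV = (Q3 - Q1) / (Q3 + Q1) * 100."""
    q1, q3 = np.percentile(x, [25, 75])
    return (q3 - q1) / (q3 + q1) * 100

def bootstrap_ci(x, statistic, n_boot=5000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for a one-sample statistic."""
    rng = np.random.default_rng(seed)
    boot = [statistic(rng.choice(x, size=len(x), replace=True)) for _ in range(n_boot)]
    return np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])

rng = np.random.default_rng(7)
assay = rng.normal(loc=50.0, scale=4.0, size=40)   # toy repeated assay measurements

for name, stat in [("CV", cv), ("CQV", cqv)]:
    lo, hi = bootstrap_ci(assay, stat)
    print(f"{name} = {stat(assay):.1f}% (95% CI {lo:.1f}% to {hi:.1f}%)")
```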
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Computational and Mathematical Methods in Medicine <s> We have derived the mathematical relationship between the coefficient of variation associated with repeated measurements from quantitative assays and the expected fraction of pairs of those measurements that differ by at least some given factor, i.e., the expected frequency of disparate results that are due to assay variability rather than true differences. Knowledge of this frequency helps determine what magnitudes of differences can be expected by chance alone when the particular coefficient of variation is in effect. This frequency is an operational index of variability in the sense that it indicates the probability of observing a particular disparity between two measurements under the assumption that they measure the same quantity. Thus the frequency or probability becomes the basis for assessing if an assay is sufficiently precise. This assessment also provides a standard for determining if two assay results for the same subject, separated by an intervention such as vaccination or infection, differ by more than expected from the variation of the assay, thus indicating an intervention effect. Data from an international collaborative study are used to illustrate the application of this proposed interpretation of the coefficient of variation, and they also provide support for the assumptions used in the mathematical derivation. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Computational and Mathematical Methods in Medicine <s> BACKGROUND ::: It is vital for clinicians to understand and interpret correctly medical statistics as used in clinical studies. In this review, we address current issues and focus on delivering a simple, yet comprehensive, explanation of common research methodology involving receiver operating characteristic (ROC) curves. ROC curves are used most commonly in medicine as a means of evaluating diagnostic tests. ::: ::: ::: METHODS ::: Sample data from a plasma test for the diagnosis of colorectal cancer were used to generate a prediction model. These are actual, unpublished data that have been used to describe the calculation of sensitivity, specificity, positive predictive and negative predictive values, and accuracy. The ROC curves were generated to determine the accuracy of this plasma test. These curves are generated by plotting the sensitivity (true-positive rate) on the y axis and 1 - specificity (false-positive rate) on the x axis. ::: ::: ::: RESULTS ::: Curves that approach closest to the coordinate (x = 0, y = 1) are more highly predictive, whereas ROC curves that lie close to the line of equality indicate that the result is no better than that obtained by chance. The optimum sensitivity and specificity can be determined from the graph as the point where the minimum distance line crosses the ROC curve. This point corresponds to the Youden index (J), a function of sensitivity and specificity used commonly to rate diagnostic tests. The area under the curve is used to quantify the overall ability of a test to discriminate between 2 outcomes. ::: ::: ::: CONCLUSION ::: By following these simple guidelines, interpretation of ROC curves will be less difficult and they can then be interpreted more reliably when writing, reviewing, or analyzing scientific papers. 
<s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Computational and Mathematical Methods in Medicine <s> ObjectiveHundreds of scientific publications are produced annually that involve the measurement of cortisol in saliva. Intra- and inter-laboratory variation in salivary cortisol results has the potential to contribute to cross-study inconsistencies in findings, and the perception that salivary cortisol results are unreliable. This study rigorously estimates sources of measurement variability in the assay of salivary cortisol within and between established international academic-based laboratories that specialize in saliva analyses. One hundred young adults (Mean age: 23.10 years; 62 females) donated 2 mL of whole saliva by passive drool. Each sample was split into multiple- 100 µL aliquots and immediately frozen. One aliquot of each of the 100 participants’ saliva was transported to academic laboratories (N = 9) in the United States, Canada, UK, and Germany and assayed for cortisol by the same commercially available immunoassay.Results1.76% of the variance in salivary cortisol levels was attributable to differences between duplicate assays of the same sample within laboratories, 7.93% of the variance was associated with differences between laboratories, and 90.31% to differences between samples. In established-qualified laboratories, measurement error of salivary cortisol is minimal, and inter-laboratory differences in measurement are unlikely to have a major influence on the determined values. <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Computational and Mathematical Methods in Medicine <s> IntroductionThe choice of criteria in determining optimal cut-off value is a matter of concern in quantitative diagnostic tests. Several indexes such as Youden’s index, Euclidean index, product of ... <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Computational and Mathematical Methods in Medicine <s> AIMS ::: The purpose of this study was to determine the impact of strain elastography (SE) on the Breast Imaging Reporting Data System (BI-RADS) classification depending on invasive lobular carcinoma (ILC) lesion size. ::: ::: ::: MATERIALS AND METHODS ::: We performed a retrospective analysis on a sample of 152 female subjects examined between January 2010 - January 2017. SE was performed on all patients and ILC was subsequently diagnosed by surgical or ultrasound-guided biopsy. BI-RADS 1, 2, 6 and Tsukuba BGR cases were omitted. BI-RADS scores were recorded before and after the use of SE. The differences between scores were compared to the ILC tumor size using nonparametric tests and logistic binary regression. We controlled for age, focality, clinical assessment, heredo-collateral antecedents, B-mode and Doppler ultrasound examination. An ROC curve was used to identify the optimal cut-off point for size in relationship to BI-RADS classificationdifference using Youden's index. ::: ::: ::: RESULTS ::: The histological subtypes of ILC lesions (n=180) included in the sample were luminal A (70%, n=126), luminal B (27.78%, n=50), triple negative (1.67%, n=3) and HER2+ (0.56%, n=1). The BI-RADS classification was higher when SE was performed (Z=- 6.629, p<0.000). The ROC curve identified a cut-off point of 13 mm for size in relationship to BI-RADS classification difference (J=0.670, p<0.000). 
Small ILC tumors were 17.92% more likely to influence BI-RADS classification (p<0.000). ::: ::: ::: CONCLUSIONS ::: SE offers enhanced BI-RADS classification in small ILC tumors (<13 mm). Sonoelastography brings added value to B-mode breast ultrasound as an adjacent to mammography in breast cancer screening. <s> BIB005
However, good practice in reporting CVs is not always observed. The inter- and intra-assay CVs within laboratories reported by Calvi et al. BIB003 for measurements of cortisol in saliva are given only as point estimates, with neither confidence intervals nor a statistical test provided. Reed et al. BIB001 reported the variability of measurements (thirty-three laboratories with fifteen repeated measurements in each lab) of human serum antibodies against Bordetella pertussis antigens by the ELISA method using just the CVs (no associated 95% confidence intervals), in relation to the expected fraction of pairs of measurements that differ by at least a given factor (k). Several criteria have been proposed for identifying the best cutoff value of a quantitative test, some to be maximized and others to be minimized, among them Youden's index (maximum), the weighted number needed to misdiagnose (maximum, taking into account the pretest probability and the cost of a misdiagnosis), and the Euclidean index BIB004. The metrics used to identify the best cutoff value are a matter of methodology and are not expected to be reported as a result (reporting a J index of 0.670 for discrimination in small invasive lobular carcinoma BIB005 is not informative, because the same J could be obtained for different values of Se and Sp: 0.97/0.70, 0.70/0.97, 0.83/0.84, etc.). Youden's index has been reported as the best metric for choosing the cutoff value BIB004 but is not able to differentiate between differences in sensitivity and specificity BIB002. Furthermore, Youden's index can be used as an indicator of quality when reported with its associated 95% confidence interval, with poor quality being indicated by the presence of 0.5 in the confidence interval BIB002.
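The point that a single J value does not pin down sensitivity and specificity can be checked numerically. The short sketch below, which assumes scikit-learn is available and uses toy scores and labels, scans a ROC curve, picks the cutoff maximizing J = Se + Sp − 1, and then lists several different Se/Sp pairs that yield the same J.

```python
# Illustrative sketch: Youden's J from a ROC curve, and why J alone is ambiguous.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(3)
y_true = np.r_[np.zeros(200, dtype=int), np.ones(100, dtype=int)]     # 0 = healthy, 1 = diseased
scores = np.r_[rng.normal(1.0, 1.0, 200), rng.normal(2.2, 1.0, 100)]  # toy biomarker values

fpr, tpr, thresholds = roc_curve(y_true, scores)
j = tpr - fpr                               # Youden's J = Se + Sp - 1 = TPR - FPR
best = np.argmax(j)
print(f"optimal cutoff = {thresholds[best]:.2f}, "
      f"Se = {tpr[best]:.2f}, Sp = {1 - fpr[best]:.2f}, J = {j[best]:.2f}")

# Different sensitivity/specificity pairs that give exactly the same J = 0.67:
for se, sp in [(0.97, 0.70), (0.70, 0.97), (0.83, 0.84)]:
    print(f"Se = {se:.2f}, Sp = {sp:.2f} -> J = {se + sp - 1:.2f}")
```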
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> A previously described coefficient of agreement for nominal scales, kappa, treats all disagreements equally. A generalization to weighted kappa (Kw) is presented. The Kw provides for the incorpation of ratio-scaled degrees of disagreement (or agreement) to each of the cells of the k * k table of joint nominal scale assignments such that disagreements of varying gravity (or agreements of varying degree) are weighted accordingly. Although providing for partial credit, Kw is fully chance corrected. Its sampling characteristics and procedures for hypothesis testing and setting confidence limits are given. Under certain conditions, Kw equals product-moment r. The use of unequal weights for symmetrical cells makes Kw suitable as a measure of validity. (PsycINFO Database Record (c) 2006 APA, all rights reserved) <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> In clinical measurement comparison of a new measurement technique with an established one is often needed to see whether they agree sufficiently for the new to replace the old. Such investigations are often analysed inappropriately, notably by using correlation coefficients. The use of correlation is misleading. An alternative approach, based on graphical techniques and simple calculations, is described, together with the relation between this analysis and the assessment of repeatability. <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> Although intraclass correlation coefficients (ICCs) are commonly used in behavioral measurement, psychometrics, and behavioral genetics, procedures available for forming inferences about ICCs are not widely known. Following a review of the distinction between various forms of the ICC, this article presents procedures available for calculating confidence intervals and conducting tests on ICCs developed using data from one-way and two-way random and mixed-effect analysis of variance models. <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> Agreement between two methods of clinical measurement can be quantified using the differences between observations made using the two methods on the same subjects. The 95% limits of agreement, estimated by mean difference +/- 1.96 standard deviation of the differences, provide an interval within which 95% of differences between measurements by the two methods are expected to lie. We describe how graphical methods can be used to investigate the assumptions of the method and we also give confidence intervals. We extend the basic approach to data where there is a relationship between difference and magnitude, both with a simple logarithmic transformation approach and a new, more general, regression approach. We discuss the importance of the repeatability of each method separately and compare an estimate of this to the limits of agreement. We extend the limits of agreement approach to data with repeated measurements, proposing new estimates for equal numbers of replicates by each method on each subject, for unequal numbers of replicates, and for replicated data collected in pairs, where the underlying value of the quantity being measured is changing. 
Finally, we describe a nonparametric approach to comparing methods. <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> Objective:Brain microbleeds on gradient-recalled echo (GRE) T2*-weighted MRI may be a useful biomarker for bleeding-prone small vessel diseases, with potential relevance for diagnosis, prognosis (especially for antithrombotic-related bleeding risk), and understanding mechanisms of symptoms, including cognitive impairment. To address these questions, it is necessary to reliably measure their presence and distribution in the brain. We designed and systematically validated the Microbleed Anatomical Rating Scale (MARS). We measured intrarater and interrater agreement for presence, number, and anatomical distribution of microbleeds using MARS across different MRI sequences and levels of observer experience. Methods:We studied a population of 301 unselected consecutive patients admitted to our stroke unit using 2 GRE T2*-weighted MRI sequences (echo time [TE] 40 and 26 ms). Two independent raters with different MRI rating expertise identified, counted, and anatomically categorized microbleeds. Results:At TE = 40 ms, agreement for microbleed presence in any brain location was good to very good (intrarater &kgr; = 0.85 [95% confidence interval (CI) 0.77–0.93]; interrater &kgr; = 0.68 [95% CI 0.58–0.78]). Good to very good agreement was reached for the presence of microbleeds in each anatomical region and in individual cerebral lobes. Intrarater and interrater reliability for the number of microbleeds was excellent (intraclass correlation coefficient [ICC] = 0.98 [95% CI 0.97–0.99] and ICC = 0.93 [0.91–0.94]). Very good interrater reliability was obtained at TE = 26 ms (&kgr; = 0.87 [95% CI 0.61–1]) for definite microbleeds in any location. Conclusion:The Microbleed Anatomical Rating Scale has good intrarater and interrater reliability for the presence of definite microbleeds in all brain locations when applied to different MRI sequences and levels of observer experience. GLOSSARYBOMBS = Brain Observer Microbleed Scale; CAA = cerebral amyloid angiopathy; CI = confidence interval; DPWM = deep and periventricular white matter; FA = flip angle; FLAIR = fluid-attenuated inversion recovery; FOV = field of view; GRE = gradient-recalled echo; ICC = intraclass correlation coefficient; MARS = Microbleed Anatomical Rating Scale; NEX = number of excitations; NHNN = National Hospital for Neurology and Neurosurgery; TE = echo time; TR = repetition time. <s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued use of percent agreement due to its inability to account for chance agreement. 
He introduced the Cohen’s kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, the kappa can range from −1 to +1. While the kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen’s suggested interpretation may be too lenient for health related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested. <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> BACKGROUND/AIMS ::: At present, automated analysis of high-resolution manometry (HRM) provides details of upper esophageal sphincter (UES) relaxation parameters. The aim of this study was to assess the accuracy of automatic analysis of UES relaxation parameters. ::: ::: ::: MATERIALS AND METHODS ::: One hundred and fifty three subjects (78 males, mean age 68.6 years, range 26-97) underwent HRM. UES relaxation parameters were interpreted twice, once visually (V) by two experts and once automatically (AS) using the ManoView ESO analysis software. Agreement between the two analysis methods was assessed using Bland-Altman plots and Lin's concordance correlation coefficient (CCC). ::: ::: ::: RESULTS ::: The agreement between V and AS analyses of basal UES pressure (CCC 0.996; 95% confidence interval (CI) 0.994-0.997) and residual UES pressure (CCC 0.918; 95% CI 0.895-0.936) was good to excellent. Agreement for time to UES relaxation nadir (CCC 0.208; 95% CI 0.068-0.339) and UES relaxation duration (CCC 0.286; 95% CI 0.148-0.413) between V and AS analyses was poor. There was moderate agreement for recovery time of UES relaxation (CCC 0.522; 95% CI 0.397-0.627) and peak pharyngeal pressure (CCC 0.695; 95% CI 0.605-0.767) between V and AS analysis. ::: ::: ::: CONCLUSION ::: AS analysis was unreliable, especially regarding the time variables of UES relaxation. Due to the difference in the clinical interpretation of pharyngoesophageal dysfunction between V and AS analysis, the use of visual analysis is justified. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> In a contemporary clinical laboratory it is very common to have to assess the agreement between two quantitative methods of measurement. The correct statistical approach to assess this degree of agreement is not obvious. Correlation and regression studies are frequently proposed. However, correlation studies the relationship between one variable and another, not the differences, and it is not recommended as a method for assessing the comparability between methods. ::: In 1983 Altman and Bland (B&A) proposed an alternative analysis, based on the quantification of the agreement between two quantitative measurements by studying the mean difference and constructing limits of agreement. ::: The B&A plot analysis is a simple way to evaluate a bias between the mean differences, and to estimate an agreement interval, within which 95% of the differences of the second method, compared to the first one, fall. Data can be analyzed both as unit differences plot and as percentage differences plot. 
::: The B&A plot method only defines the intervals of agreements, it does not say whether those limits are acceptable or not. Acceptable limits must be defined a priori, based on clinical necessity, biological considerations or other goals. ::: The aim of this article is to provide guidance on the use and interpretation of Bland Altman analysis in method comparison studies. <s> BIB008 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> OBJECTIVE ::: Intraclass correlation coefficient (ICC) is a widely used reliability index in test-retest, intrarater, and interrater reliability analyses. This article introduces the basic concept of ICC in the content of reliability analysis. ::: ::: ::: DISCUSSION FOR RESEARCHERS ::: There are 10 forms of ICCs. Because each form involves distinct assumptions in their calculation and will lead to different interpretations, researchers should explicitly specify the ICC form they used in their calculation. A thorough review of the research design is needed in selecting the appropriate form of ICC to evaluate reliability. The best practice of reporting ICC should include software information, "model," "type," and "definition" selections. ::: ::: ::: DISCUSSION FOR READERS ::: When coming across an article that includes ICC, readers should first check whether information about the ICC form has been reported and if an appropriate ICC form was used. Based on the 95% confident interval of the ICC estimate, values less than 0.5, between 0.5 and 0.75, between 0.75 and 0.9, and greater than 0.90 are indicative of poor, moderate, good, and excellent reliability, respectively. ::: ::: ::: CONCLUSION ::: This article provides a practical guideline for clinical researchers to choose the correct form of ICC and suggests the best practice of reporting ICC parameters in scientific publications. This article also gives readers an appreciation for what to look for when coming across ICC while reading an article. <s> BIB009 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> OBJECTIVE ::: To synthesize the literature and perform a meta-analysis for both the interrater and intrarater reliability of the FMS™. ::: ::: ::: METHODS ::: Academic Search Complete, CINAHL, Medline and SportsDiscus databases were systematically searched from inception to March 2015. Studies were included if the primary purpose was to determine the interrater or intrarater reliability of the FMS™, assessed and scored all 7-items using the standard scoring criteria, provided a composite score and employed intraclass correlation coefficients (ICCs). Studies were excluded if reliability was not the primary aim, participants were injured at data collection, or a modified FMS™ or scoring system was utilized. ::: ::: ::: RESULTS ::: Seven papers were included; 6 assessing interrater and 6 assessing intrarater reliability. There was moderate evidence in good interrater reliability with a summary ICC of 0.843 (95% CI = 0.640, 0.936; Q7 = 84.915, p < 0.0001). There was moderate evidence in good intrarater reliability with a summary ICC of 0.869 (95% CI = 0.785, 0.921; Q12 = 60.763, p < 0.0001). ::: ::: ::: CONCLUSION ::: There was moderate evidence for both forms of reliability. The sensitivity assessments revealed this interpretation is stable and not influenced by any one study. Overall, the FMS™ is a reliable tool for clinical practice. 
<s> BIB010 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Agreement Analysis. <s> Degenerated discs have shorter T2-relaxation time and lower MR signal. The location of the signal-intensity-weighted-centroid reflects the water distribution within a region-of-interest (ROI). This study compared the reliability of the location of the signal-intensity-weighted-centroid to mean signal intensity and area measurements. L4-L5 and L5-S1 discs were measured on 43 mid-sagittal T2-weighted 3T MRI images in adults with back pain. One rater analysed images twice and another once, blinded to measurements. Discs were semi-automatically segmented into a whole disc, nucleus, anterior and posterior annulus. The coordinates of the signal-intensity-weighted-centroid for all regions demonstrated excellent intraclass-correlation-coefficients for intra- (0.99-1.00) and inter-rater reliability (0.97-1.00). The standard error of measurement for the Y-coordinates of the signal-intensity-weighted-centroid for all ROIs were 0 at both levels and 0 to 2.7 mm for X-coordinates. The mean signal intensity and area for the whole disc and nucleus presented excellent intra-rater reliability with intraclass-correlation-coefficients from 0.93 to 1.00, and 0.92 to 1.00 for inter-rater reliability. The mean signal intensity and area had lower reliability for annulus ROIs, with intra-rater intraclass-correlation-coefficient from 0.5 to 0.76 and inter-rater from 0.33 to 0.58. The location of the signal-intensity-weighted-centroid is a reliable biomarker for investigating the effects of disc interventions. <s> BIB011
Percentage agreement (po), the number of agreements divided by the number of cases, is the easiest agreement coefficient to calculate but may be misleading. Several agreement coefficients that adjust the proportional agreement for the agreement expected by chance have been introduced BIB004. Cohen's kappa coefficient has three assumptions: (i) the units are independent, (ii) the categories on the nominal scale are independent and mutually exclusive, and (iii) the readers/raters are independent. Cohen's kappa coefficient takes a value between −1 (perfect disagreement) and 1 (complete agreement), and empirical rules have been proposed for interpreting its values BIB006. Weighted kappa is used to discriminate between different readings on ordinal diagnostic test results (a different grade of disagreement exists between good and excellent compared to poor and excellent). The weights, reflecting the importance of the disagreements (linear, proportional to the number of categories apart, or quadratic, proportional to the square of the number of categories apart), must be established by the researcher BIB001. Intra- and interclass correlation coefficients (ICCs) are used as a measure of the reliability of measurements and have their utility in the evaluation of a diagnostic test. Interrater reliability (two or more raters measuring the same group of individuals), test-retest reliability (the variation in measurements made by the same instrument on the same subject under the same conditions), and intrarater reliability (the variation of data measured by one rater across two or more trials) are commonly used BIB009. McGraw and Wong BIB003 defined in 1996 the ten forms of the ICC based on the model (1-way random effects, 2-way random effects, or 2-way fixed effects), the number of raters/measurements (single rater/measurement or the mean of k raters/measurements), and the hypothesis (consistency or absolute agreement). McGraw and Wong also discuss how to select the correct form of the ICC and recommend reporting the ICC values along with their 95% CIs BIB003. Lin's concordance correlation coefficient (ρc) measures the concordance between two observations, with one measurement taken as the gold standard. The range of values of Lin's concordance correlation coefficient is the same as for Cohen's kappa coefficient. The interpretation of ρc takes into account the scale of measurement, with more strictness for continuous measurements (Table 6). For intra- and interobserver agreement, Martins and Nastri introduced the metric called limits of agreement (LoA) and proposed a cutoff < 5% for very good reliability/agreement. Reporting the ICC and/or CCC along with associated 95% confidence intervals is good practice for agreement analysis. Such results are reported in both primary studies (such as the reliability analysis of the Microbleed Anatomical Rating Scale in the evaluation of microbleeds BIB005, the automatic analysis of relaxation parameters of the upper esophageal sphincter BIB007, and the use of the signal-intensity-weighted centroid in magnetic resonance images of patients with disc degeneration BIB011) and secondary research studies (systematic reviews and/or meta-analyses: evaluation of the functional movement screen BIB010, evaluation of the Manchester triage scale in an emergency department, reliability of specific physical examination tests for the diagnosis of shoulder pathologies, etc.).
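A brief sketch of how these agreement coefficients can be obtained in practice is given below on simulated ratings: scikit-learn's cohen_kappa_score is assumed for kappa and weighted kappa, while Lin's concordance correlation coefficient is computed directly from its textbook formula. Confidence intervals, which should accompany any of these point estimates, could be added by bootstrapping as sketched earlier; the data and seeds are illustrative only.

```python
# Illustrative sketch (simulated data): Cohen's kappa, weighted kappa, and Lin's CCC.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Two raters assigning ordinal categories (e.g., a 1-5 score) to 60 cases
rng = np.random.default_rng(11)
rater1 = rng.integers(1, 6, size=60)
noise = rng.integers(-1, 2, size=60)                 # occasional 1-category disagreement
rater2 = np.clip(rater1 + noise, 1, 5)

print("Cohen's kappa           :", round(cohen_kappa_score(rater1, rater2), 3))
print("Linearly weighted kappa :", round(cohen_kappa_score(rater1, rater2, weights="linear"), 3))
print("Quadratic weighted kappa:", round(cohen_kappa_score(rater1, rater2, weights="quadratic"), 3))

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for two continuous measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Two quantitative methods measuring the same analyte
method_a = rng.normal(10.0, 2.0, size=60)
method_b = method_a + rng.normal(0.2, 0.5, size=60)  # small bias plus noise
print("Lin's CCC               :", round(lins_ccc(method_a, method_b), 3))
```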
Altman and Bland criticized the use of correlation (a measure of association from which it is not correct to infer that two methods can be used interchangeably), linear regression analysis (the method has several assumptions that must be checked before application, and the assessment of residuals is mandatory for proper interpretation), and the difference between means as methods for comparing two measurements of the same quantity BIB002 BIB008. They proposed a graphical method, the B&A plot, to analyze the agreement between two quantitative measurements by studying the mean difference and constructing limits of agreement BIB004. Whenever a gold standard method exists, the difference between the two methods is plotted against the reference values. Although the B&A plot provides the limits of agreement, it supplies no information regarding the acceptability of these boundaries, so the acceptable limits must be defined a priori based on clinical significance BIB008.
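As an illustration of the B&A approach, the following Python sketch computes the bias and the 95% limits of agreement and draws the plot; the measurement values are hypothetical, and the acceptable limits would still have to be set a priori on clinical grounds.

```python
# Minimal sketch of a Bland and Altman (B&A) analysis for two quantitative methods.
# The measurement values below are hypothetical example data.
import numpy as np
import matplotlib.pyplot as plt

method_a = np.array([10.2, 11.5, 9.8, 12.1, 10.9, 11.2, 9.5, 10.7])
method_b = np.array([10.0, 11.9, 9.6, 12.5, 10.4, 11.0, 9.9, 10.9])

means = (method_a + method_b) / 2          # x-axis: mean of the two methods
diffs = method_a - method_b                # y-axis: difference between methods
bias = diffs.mean()                        # mean difference (systematic bias)
loa = 1.96 * diffs.std(ddof=1)             # half-width of the 95% limits of agreement

plt.scatter(means, diffs)
plt.axhline(bias, linestyle="-", label=f"bias = {bias:.2f}")
plt.axhline(bias + loa, linestyle="--", label=f"upper LoA = {bias + loa:.2f}")
plt.axhline(bias - loa, linestyle="--", label=f"lower LoA = {bias - loa:.2f}")
plt.xlabel("Mean of the two methods")
plt.ylabel("Difference (A - B)")
plt.legend()
plt.show()
```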
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Accuracy Analysis. <s> If the performance of a diagnostic imaging system is to be evaluated objectively and meaningfully, one must compare radiologists' image-based diagnoses with actual states of disease and health in a way that distinguishes between the inherent diagnostic capacity of the radiologists' interpretations of the images, and any tendencies to "under-read" or "over-read". ROC methodology provides the only known basis for distinguishing between these two aspects of diagnostic performance. After identifying the fundamental issues that motivate ROC analysis, this article develops ROC concepts in an intuitive way. The requirements of a valid ROC study and practical techniques for ROC data collection and data analysis are sketched briefly. A survey of the radiologic literature indicates the broad variety of evaluation studies in which ROC analysis has been employed. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Accuracy Analysis. <s> The clinical performance of a laboratory test can be described in terms of diagnostic accuracy, or the ability to correctly classify subjects into clinically relevant subgroups. Diagnostic accuracy refers to the quality of the information provided by the classification device and should be distinguished from the usefulness, or actual practical value, of the information. Receiver-operating characteristic (ROC) plots provide a pure index of accuracy by demonstrating the limits of a test's ability to discriminate between alternative states of health over the complete spectrum of operating conditions. Furthermore, ROC plots occupy a central or unifying position in the process of assessing and using diagnostic tools. Once the plot is generated, a user can readily go on to many other activities such as performing quantitative ROC analysis and comparisons of tests, using likelihood ratio to revise the probability of disease in individual subjects, selecting decision thresholds, using logistic-regression analysis, using discriminant-function analysis, or incorporating the tool into a clinical strategy by using decision analysis. <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Accuracy Analysis. <s> Receiver operating characteristic (ROC) curves are frequently used in biomedical informatics research to evaluate classification and prediction models for decision support, diagnosis, and prognosis. ROC analysis investigates the accuracy of a model's ability to separate positive from negative cases (such as predicting the presence or absence of disease), and the results are independent of the prevalence of positive cases in the study population. It is especially useful in evaluating predictive models or other tests that produce output values over a continuous range, since it captures the trade-off between sensitivity and specificity over that range. There are many ways to conduct an ROC analysis. The best approach depends on the experiment; an inappropriate approach can easily lead to incorrect conclusions. In this article, we review the basic concepts of ROC analysis, illustrate their use with sample calculations, make recommendations drawn from the literature, and list readily available software. <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Accuracy Analysis. 
<s> Laboratory tests provide the most definitive information for diagnosing and managing many diseases, and most patients look to laboratory tests as the most important information from a medical visit. Most patients who have rheumatoid arthritis (RA) have a positive test for rheumatoid factor and anticyclic citrullinated peptide (anti-CCP) antibodies, as well as an elevated erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP). More than 30% of patients with RA, however, have negative tests for rheumatoid factor or anti-CCP antibodies, and 40% have a normal ESR or CRP. These observations indicate that, although they can be helpful to monitor certain patients, laboratory measures cannot serve as a gold standard for diagnosis and management in all individual patients with RA or any rheumatic disease. Physicians and patients would benefit from an improved understanding of the limitations of laboratory tests in diagnosis and management of patients with RA. <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Accuracy Analysis. <s> Abstract Background Several studies and systematic reviews have reported results that indicate that sensitivity and specificity may vary with prevalence. Study design and setting We identify and explore mechanisms that may be responsible for sensitivity and specificity varying with prevalence and illustrate them with examples from the literature. Results Clinical and artefactual variability may be responsible for changes in prevalence and accompanying changes in sensitivity and specificity. Clinical variability refers to differences in the clinical situation that may cause sensitivity and specificity to vary with prevalence. For example, a patient population with a higher disease prevalence may include more severely diseased patients, therefore, the test performs better in this population. Artefactual variability refers to effects on prevalence and accuracy associated with study design, for example, the verification of index test results by a reference standard. Changes in prevalence influence the extent of overestimation due to imperfect reference standard classification. Conclusions Sensitivity and specificity may vary in different clinical populations, and prevalence is a marker for such differences. Clinicians are advised to base their decisions on studies that most closely match their own clinical situation, using prevalence to guide the detection of differences in study population or study design. <s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Accuracy Analysis. <s> In 1996, shortly after the founding of The Cochrane Collaboration, leading figures in test evaluation research established a Methods Group to focus on the relatively new and rapidly evolving methods for the systematic review of studies of diagnostic tests. Seven years later, the Collaboration decided it was time to develop a publication format and methodology for Diagnostic Test Accuracy (DTA) reviews, as well as the software needed to implement these reviews in The Cochrane Library. A meeting hosted by the German Cochrane Centre in 2004 brought together key methodologists in the area, many of whom became closely involved in the subsequent development of the methodological framework for DTA reviews.
DTA reviews first appeared in The Cochrane Library in 2008 and are now an integral part of the work of the Collaboration. <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Accuracy Analysis. <s> In 1975, Fagan published a nomogram to help practitioners determine, without the use of a calculator or computer, the probability of a patient truly having a condition of interest given a particular test result. Nomograms are very useful for bedside interpretations of test results, as no test is perfect. However, the practicality of Fagan's nomogram is limited by its use of the likelihood ratio (LR), a parameter not commonly reported in the evaluation studies of diagnostic tests. The LR reflects the direction and strength of evidence provided by a test result and can be computed from the conventional diagnostic sensitivity (DSe) and specificity (DSp) of the test. This initial computation is absent in Fagan's nomogram, making it impractical for routine use. We have seamlessly integrated the initial step to compute the LR and the resulting two-step nomogram allows the user to quickly interpret the outcome of a test. With the addition of the DSe and DSp, the nomogram, for the purposes of interpreting a dichotomous test result, is now complete. This tool is more accessible and flexible than the original, which will facilitate its use in routine evidence-based practice. The nomogram can be downloaded at: www.adelaide.edu.au/vetsci/research/pub_pop/2step-nomogram/. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Accuracy Analysis. <s> Evaluation of diagnostic performance is a necessary component of new developments in many fields including medical diagnostics and decision making. The methodology for statistical analysis of diagnostic performance continues to develop, offering new analytical tools for conventional inferences and solutions for novel and increasingly more practically relevant questions. ::: ::: In this paper we focus on the partial area under the Receiver Operating Characteristic (ROC) curve, or pAUC. This summary index is considered to be more practically relevant than the area under the entire ROC curve (AUC), but because of several perceived limitations, it is not used as often. In order to improve interpretation, results for pAUC analysis are frequently reported using a rescaled index such as the standardized partial AUC proposed by McClish (1989). ::: ::: We derive two important properties of the relationship between the “standardized” pAUC and the defined range of interest, which could facilitate a wider and more appropriate use of this important summary index. First, we mathematically prove that the “standardized” pAUC increases with increasing range of interest for practically common ROC curves. Second, using comprehensive numerical investigations we demonstrate that, contrary to common belief, the uncertainty about the estimated standardized pAUC can either decrease or increase with an increasing range of interest. ::: ::: Our results indicate that the partial AUC could frequently offer advantages in terms of statistical uncertainty of the estimation. In addition, selection of a wider range of interest will likely lead to an increased estimate even for standardized pAUC. <s> BIB008
The accuracy of a diagnostic test reflects the extent to which the test gives the right answer, and evaluations are made relative to the best available test able to reveal the right answer (also known as the gold standard or reference test, a hypothetically ideal test with sensitivity (Se) = 100% and specificity (Sp) = 100%). Microscopic examination is considered the gold standard in the diagnostic process but cannot be applied to every disease (e.g., stable coronary artery disease, rheumatologic diseases BIB004, psychiatric disorders, and rare diseases without a fully developed histological assessment [155]). The factors that can affect the accuracy of a diagnostic test can be summarized as follows BIB005 BIB006: sampling bias, an imperfect gold standard test, artefactual variability (e.g., changes in prevalence due to inappropriate design) or clinical variability (e.g., patient spectrum and "gold-standard" threshold), subgroup differences, and reader expectations. Several metrics calculated from the 2 × 2 contingency table are frequently used to assess the accuracy of a diagnostic test. A gold standard or reference test is used to classify each subject either into the group with the disease or into the group without the disease of interest. Whatever the type of data produced by the diagnostic test, a 2 × 2 contingency table can be created and used to compute the accuracy metrics. The generic structure of a 2 × 2 contingency table is presented in Table 7; if the diagnostic test has high accuracy, a significant association with the reference test is observed (significant Chi-square test or equivalent; for details, see ). Several standard indicators and three additional metrics useful in assessing the accuracy of a diagnostic test are briefly presented in Tables 8 and 9. The effect of a positive or negative result on the probability that a patient has or does not have a particular disease can be investigated using Fagan's diagram. Fagan's nomogram is frequently referred to in the context of evidence-based medicine, reflecting decision-making for a particular patient BIB007. The Bayes' theorem nomogram published in 2011 incorporates the following metrics in the prediction of the posttest probability: pretest probability, pretest odds (for and against), PLR or NLR, posttest odds (for and against), and posttest probability. The latest form, the two-step Fagan's nomogram, considers the pretest probability, Se (for the PLR), the likelihood ratios, and Sp (for the NLR) in predicting the posttest probability BIB007. The total on the rows represents the number of subjects with positive and negative test results, respectively; the total on the columns represents the number of subjects with (disease present) and without (disease absent) the disease of interest; and the classification as test positive/test negative is done using a cutoff value for ordinal and continuous data.
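The calculation that Fagan's nomogram performs graphically can also be written out directly. The short Python sketch below derives Se, Sp, predictive values, and likelihood ratios from a hypothetical 2 × 2 table and then converts a pretest probability into a posttest probability through the odds form of Bayes' theorem; all counts and the 20% pretest probability are illustrative assumptions, not values from the cited works.

```python
# Minimal sketch: accuracy metrics from a 2x2 contingency table and the
# post-test probability obtained by combining a pretest probability with a
# likelihood ratio (the numerical calculation behind Fagan's nomogram).
# The counts below are hypothetical.
tp, fp, fn, tn = 90, 20, 10, 180   # test+/disease+, test+/disease-, test-/disease+, test-/disease-

se = tp / (tp + fn)                # sensitivity
sp = tn / (tn + fp)                # specificity
ppv = tp / (tp + fp)               # positive predictive value (prevalence dependent)
npv = tn / (tn + fn)               # negative predictive value (prevalence dependent)
plr = se / (1 - sp)                # positive likelihood ratio
nlr = (1 - se) / sp                # negative likelihood ratio

def post_test_probability(pretest_prob, likelihood_ratio):
    """Convert probability to odds, apply the likelihood ratio, convert back."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

print(f"Se={se:.2f}, Sp={sp:.2f}, PPV={ppv:.2f}, NPV={npv:.2f}, PLR={plr:.1f}, NLR={nlr:.2f}")
print("Post-test probability after a positive result (pretest 20%):",
      round(post_test_probability(0.20, plr), 2))
```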
(ii) PLR (the higher, the better): (a) PLR > 10 ⟶ convincing diagnostic evidence; (b) 5 < PLR < 10 ⟶ strong diagnostic evidence. Negative likelihood ratio (NLR/LR−) = (1 − Se)/Sp: (i) indicates how much the odds of the disease decrease when a test is negative (rule-out indicator); (ii) NLR (the lower, the better): (a) NLR < 0.1 ⟶ convincing diagnostic evidence; (b) 0.1 < NLR < 0.2 ⟶ strong diagnostic evidence. The receiver operating characteristic (ROC) analysis is conducted to investigate the accuracy of a diagnostic test when the outcome is quantitative or ordinal with at least five classes BIB002 BIB001. ROC analysis evaluates the ability of a diagnostic test to discriminate positive from negative cases. Several metrics related to the ROC analysis are reported in the evaluation of a diagnostic test, and the most frequently used are described in Table 10 BIB003 BIB008. The closer the curve lies to the upper-left corner of the graph, the better the test. Different metrics are used to choose the cutoff with the optimum Se and Sp, such as Youden's index (J = Se + Sp − 1, with the cutoff chosen where J is maximum).
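A minimal sketch of an empirical ROC analysis is shown below: each observed value of a hypothetical quantitative test is tried as a cutoff, Se and Sp are computed against a reference classification, the cutoff maximizing Youden's index is retained, and the area under the empirical curve is obtained with the trapezoidal rule. The data are invented for illustration and do not come from any of the cited studies.

```python
# Minimal sketch: empirical ROC analysis of a quantitative test and cutoff
# selection by Youden's index J = Se + Sp - 1 (hypothetical data).
import numpy as np

scores = np.array([0.2, 0.35, 0.4, 0.5, 0.55, 0.6, 0.7, 0.8, 0.85, 0.9])
truth  = np.array([0,   0,    0,   1,   0,    1,   1,   0,   1,    1  ])  # 1 = disease present

thresholds = np.unique(scores)
best = {"J": -1.0}
points = []
for t in thresholds:
    pred = scores >= t                                       # positive at/above the cutoff
    se = (pred & (truth == 1)).sum() / (truth == 1).sum()    # sensitivity at this cutoff
    sp = (~pred & (truth == 0)).sum() / (truth == 0).sum()   # specificity at this cutoff
    points.append((1 - sp, se))                              # one point of the ROC curve
    j = se + sp - 1                                          # Youden's index
    if j > best["J"]:
        best = {"J": j, "cutoff": float(t), "Se": se, "Sp": sp}

# Area under the empirical ROC curve by the trapezoidal rule.
points = sorted(points + [(0.0, 0.0), (1.0, 1.0)])
auc = np.trapz([p[1] for p in points], [p[0] for p in points])
print(best, "AUC =", round(float(auc), 3))
```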
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Background Background. There is uncertainty about the diagnostic significance of specific symptoms of major depressive disorder (MDD). There is also interest in using one or two specific symptoms in the development of brief scales. Our aim was to elucidate the best possible specific symptoms that would assist in ruling in or ruling out a major depressive episode in a psychiatric out-patient setting. Method A total of 1523 psychiatric out-patients were evaluated in the Methods to Improve Diagnostic Assessment and Services (MIDAS) project. The accuracy and added value of specific symptoms from a comprehensive item bank were compared against the Structured Clinical Interview for DSM-IV (SCID). Results The prevalence of depression in our sample was 54.4%. In this high prevalence setting the optimum specific symptoms for ruling in MDD were psychomotor retardation, diminished interest/pleasure and indecisiveness. The optimum specific symptoms for ruling out MDD were the absence of depressed mood, the absence of diminished drive and the absence of loss of energy. However, some discriminatory items were relatively uncommon. Correcting for frequency, the most clinically valuable rule-in items were depressed mood, diminished interest/pleasure and diminished drive. The most clinically valuable rule-out items were depressed mood, diminished interest/pleasure and poor concentration. Conclusions The study supports the use of the questions endorsed by the two-item Patient Health Questionnaire (PHQ-2) with the additional consideration of the item diminished drive as a rule-in test and poor concentration as a rule-out test. The accuracy of these questions may be different in primary care studies where prevalence differs and when they are combined into multi-question tests or algorithmic models. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Summary Background Prostate cancer is one of the leading causes of death from malignant disease among men in the developed world. One strategy to decrease the risk of death from this disease is screening with prostate-specific antigen (PSA); however, the extent of benefit and harm with such screening is under continuous debate. Methods In December, 1994, 20 000 men born between 1930 and 1944, randomly sampled from the population register, were randomised by computer in a 1:1 ratio to either a screening group invited for PSA testing every 2 years (n=10 000) or to a control group not invited (n=10 000). Men in the screening group were invited up to the upper age limit (median 69, range 67–71 years) and only men with raised PSA concentrations were offered additional tests such as digital rectal examination and prostate biopsies. The primary endpoint was prostate-cancer specific mortality, analysed according to the intention-to-screen principle. The study is ongoing, with men who have not reached the upper age limit invited for PSA testing. This is the first planned report on cumulative prostate-cancer incidence and mortality calculated up to Dec 31, 2008. This study is registered as an International Standard Randomised Controlled Trial ISRCTN54449243. Findings In each group, 48 men were excluded from the analysis because of death or emigration before the randomisation date, or prevalent prostate cancer. 
In men randomised to screening, 7578 (76%) of 9952 attended at least once. During a median follow-up of 14 years, 1138 men in the screening group and 718 in the control group were diagnosed with prostate cancer, resulting in a cumulative prostate-cancer incidence of 12·7% in the screening group and 8·2% in the control group (hazard ratio 1·64; 95% CI 1·50–1·80; p Interpretation This study shows that prostate cancer mortality was reduced almost by half over 14 years. However, the risk of over-diagnosis is substantial and the number needed to treat is at least as high as in breast-cancer screening programmes. The benefit of prostate-cancer screening compares favourably to other cancer screening programs. Funding The Swedish Cancer Society, the Swedish Research Council, and the National Cancer Institute. <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Both obesity and breast cancer incidence increased dramatically during two recent decades in a rapidly changing society in northern Iran. In this study, we examined the ability of body mass index (BMI) and waist circumference (WC) as predictor biomarkers of breast cancer risk in Iranian women. In a case–control study of 100 new cases of histological confirmed breast cancer and 200 age-matched controls, in Babol, we measured weight, height, waist and hip circumference at time of diagnosis with standard methods. The data of demographic, characteristics, reproductive and lifestyle factors were collected by interview. We used both regression and receiver operator characteristics (ROC) analysis to estimate the predictive ability of BMI and WC for breast cancer as estimated by area under the curve (AUC). The results showed a significant difference in the mean of weight, BMI and WC between patients and controls in pre- and postmenopausal women (P < 0.001). While after adjusting for BMI, no longer an association between WC and breast cancer was found. The overall accuracy of observed BMI and WC were 0.79 (95% CI: 0.74–0.84) and 0.68 (95% CI: 0.61–0.74), respectively. The accuracy of BMI and WC were 0.82 (95% CI: 0.76–0.89), 0.75(0.67–0.83) for premenopausal and 0.77(0.68–0.85), 0.60 (0.50–0.71) for postmenopausal women, respectively. BMI and WC are predictor biomarkers of breast cancer risk in both pre- and postmenopausal Iranian women while after adjusting for BMI, no longer an association between WC and breast cancer was observed. These findings imply to perform breast cancer screening program in women with a higher BMI and WC. <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Background ::: In most of the world, microbiologic diagnosis of tuberculosis (TB) is limited to microscopy. Recent guidelines recommend culture-based diagnosis where feasible. <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Objective ::: To evaluate the diagnostic performance of digital breast tomosynthesis (DBT) and digital mammography (DM) for benign and malignant lesions in breasts. <s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Background. 
Diagnostic evaluations of dementia are often performed in primary health care (PHC). Cognitive evaluation requires validated instruments. Objective. To investigate the diagnostic accuracy and clinical utility of Cognistat in a primary care population. Methods. Participants were recruited from 4 PHC centres; 52 had cognitive symptoms and 29 were presumed cognitively healthy. Participants were tested using the Mini-Mental State Examination (MMSE), the Clock Drawing Test (CDT), and Cognistat. Clinical diagnoses, based on independent neuropsychological examination and a medical consensus discussion in secondary care, were used as criteria for diagnostic accuracy analyses. Results. The sensitivity, specificity, positive predictive value, and negative predictive value were 0.85, 0.79, 0.85, and 0.79, respectively, for Cognistat; 0.59, 0.91, 0.90, and 0.61 for MMSE; 0.26, 0.88, 0.75, and 0.46 for CDT; 0.70, 0.79, 0.82, and 0.65 for MMSE and CDT combined. The area under the receiver operating characteristic curve was 0.82 for Cognistat, 0.75 for MMSE, 0.57 for CDT, and 0.74 for MMSE and CDT combined. Conclusions. The diagnostic accuracy and clinical utility of Cognistat was better than the other tests alone or combined. Cognistat is well adapted for cognitive evaluations in PHC and can help the general practitioner to decide which patients should be referred to secondary care. <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> BACKGROUND ::: Several instruments have been developed to screen Parkinson's disease (PD); yet, there is no consensus on the items, number of questions, and diagnostic accuracy. We aimed to develop a new questionnaire combining the best items with highest validity to screen parkinsonism and to compare its diagnostic value with that of the previous instruments using the same database. ::: ::: ::: METHODS ::: 157 patients with parkinsonism and 110 healthy controls completed a comprehensive screening questionnaire consisting of 25 items on different PD symptoms used in previous studies. To select the optimal items, clinical utility index (CUI) was calculated and those who met at least good negative utility (CUI ≥0.64) were selected. Receiver operating characteristics (ROC) curves analysis was used to compare the area under the curve (AUC) of different screening instruments. ::: ::: ::: RESULTS ::: Six items on 'stiffness & rigidity', 'tremor & shaking', 'troublesome buttoning', 'troublesome arm swing', 'feet stuck to floor' and 'slower daily activity' demonstrated good CUI. The new screening instrument had the largest AUC (0.977) compared to other instruments. ::: ::: ::: CONCLUSIONS ::: We selected a new set of six items to screen parkinsonism, which showed higher diagnostic values compared to the previously developed questionnaires. This screening instrument could be used in population-based PD surveys in poor-resource settings. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Abstract Objective To analyse the evidence concerning the accuracy of the Mini-Mental State Examination (MMSE) as a diagnostic and screening test for the presence of delirium in adults. Method Two authors searched MEDLINE, PsychINFO and EMBASE from inception till March 2014. 
Articles were included that investigated the diagnostic validity of the MMSE to detect delirium against standardised criteria. A diagnostic validity meta-analysis was conducted. Results Thirteen studies were included representing 2017 patients in medical settings of whom 29.4% had delirium. The meta-analysis revealed the MMSE had an overall sensitivity and specificity estimate of 84.1% and 73.0%, but this was 81.1% and 82.8% in a subgroup analysis involving robust high quality studies. Sensitivity was unchanged but specificity was 68.4% (95% CI=50.9–83.5%) in studies using a predefined cutoff of Conclusion The MMSE cannot be recommended as a case-finding confirmatory test of delirium, but may be used as an initial screen to rule out high scorers who are unlikely to have delirium with approximately 93% accuracy. <s> BIB008 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Background: Great concern about occupational exposure to chromium (Cr [VI]) has been reported due to escalated risk of lung cancer in exposed workers. Consequences of occupational exposure to Cr (VI) have been reported as oxidative stress and lung tissue damage. Objective: To investigate the feasibility of biological effect monitoring of chrome electroplaters through analysis of serum malondialdehyde (MDA). Methods: 90 workers directly involved in chrome electroplating—categorized into three equal groups based on their job as near bath workers, degreaser, and washers—and 30 workers without exposure to Cr (VI), served as the control group, were studied. Personal samples were collected and analyzed according to NIOSH method 7600. Serum MDA level was measured by HPLC using a UV detector. Results: Median Cr (VI) exposure level was 0.38 mg/m 3 in near bath workers, 0.20 mg/m 3 in degreasers, and 0.05 mg/m 3 in washers. The median serum MDA level of three exposed groups (2.76 μmol/L) was significantly (p<0.001) higher than that in the control group (2.00 μmol/L). There was a positive correlation between electroplaters' level of exposure to Cr (VI) and their serum MDA level (Spearman's ρ 0.806, p<0.001). Conclusion: Serum MDA level is a good biomarker for the level of occupational exposure to Cr (VI) in electroplaters. <s> BIB009 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Objective ::: Systemic inflammatory response syndrome (SIRS)-based severe sepsis screening algorithms have been utilised in stratification and initiation of early broad spectrum antibiotics for patients presenting to EDs with suspected sepsis. We aimed to investigate the performance of some of these algorithms on a cohort of suspected sepsis patients. ::: ::: Methods ::: We conducted a retrospective analysis on an ED-based prospective sepsis registry at a tertiary Sydney hospital, Australia. Definitions for sepsis were based on the 2012 Surviving Sepsis Campaign guidelines. Numerical values for SIRS criteria and ED investigation results were recorded at the trigger of sepsis pathway on the registry. Performance of specific SIRS-based screening algorithms at sites from USA, Canada, UK, Australia and Ireland health institutions were investigated. ::: ::: Results ::: Severe sepsis screening algorithms' performance was measured on 747 patients presenting with suspected sepsis (401 with severe sepsis, prevalence 53.7%). 
Sensitivity and specificity of algorithms to flag severe sepsis ranged from 20.2% (95% CI 16.4–24.5%) to 82.3% (95% CI 78.2–85.9%) and 57.8% (95% CI 52.4–63.1%) to 94.8% (95% CI 91.9–96.9%), respectively. Variations in SIRS values between uncomplicated and severe sepsis cohorts were only minor, except a higher mean lactate (>1.6 mmol/L, P < 0.01). ::: ::: Conclusions ::: We found the Ireland and JFK Medical Center sepsis algorithms performed modestly in stratifying suspected sepsis patients into high-risk groups. Algorithms with lactate levels thresholds of >2 mmol/L rather than >4 mmol/L performed better. ED sepsis registry-based characterisation of patients may help further refine sepsis definitions of the future. <s> BIB010 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> BACKGROUND ::: There is a lack of studies testing accuracy of fast screening methods for alcohol use disorder in mental health settings. We aimed at estimating clinical utility of a standard single-item test for case finding and screening of DSM-5 alcohol use disorder among individuals suffering from anxiety and mood disorders. ::: ::: ::: METHODS ::: We recruited adults consecutively referred, in a 12-month period, to an outpatient clinic for anxiety and depressive disorders. We assessed the National Institute on Alcohol Abuse and Alcoholism (NIAAA) single-item test, using the Mini- International Neuropsychiatric Interview (MINI), plus an additional item of Composite International Diagnostic Interview (CIDI) for craving, as reference standard to diagnose a current DSM-5 alcohol use disorder. We estimated sensitivity and specificity of the single-item test, as well as positive and negative Clinical Utility Indexes (CUIs). ::: ::: ::: RESULTS ::: 242 subjects with anxiety and mood disorders were included. The NIAAA single-item test showed high sensitivity (91.9%) and specificity (91.2%) for DSM-5 alcohol use disorder. The positive CUI was 0.601, whereas the negative one was 0.898, with excellent values also accounting for main individual characteristics (age, gender, diagnosis, psychological distress levels, smoking status). ::: ::: ::: DISCUSSION ::: Testing for relevant indexes, we found an excellent clinical utility of the NIAAA single-item test for screening true negative cases. Our findings support a routine use of reliable methods for rapid screening in similar mental health settings. <s> BIB011 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> BACKGROUND ::: Chitotriosidase is an enzyme secreted by activated macrophages. This study aims to investigate the usefulness of circulating chitotriosidase activity as a marker of inflammatory status in patients with critical limb ischemia (CLI). ::: ::: ::: MATERIALS AND METHODS ::: An observational gender-matched case-control study was conducted on patients hospitalized with the primary diagnosis of CLI, as well as a control group. The control group consisted of healthy volunteers. ::: ::: ::: RESULTS ::: Forty-three patients were included in each group. Similar demographic characteristics (median age of 60-62 years and overweight) were observed in both groups. 
Chitotriosidase activity ranged from 110 nmol/ml/hr to 1530 nmol/ml/hr in the CLI group and from 30 nmol/ml/hr to 440 nmol/ml/hr in the control group; demonstrating significantly elevated values in the CLI group (p<0.001). Median plasma chitotriosidase activity was significantly elevated in smokers compared with non-smokers in both groups (p<0.05). However, this activity had higher values in CLI than in control subjects. Receiver operating characteristic (ROC) analysis was then performed in order to verify the diagnostic accuracy of chitotriosidase as an inflammatory biomarker in CLI. ::: ::: ::: CONCLUSION ::: Circulating chitotriosidase is a test which can potentially be used for the monitoring of CLI patients without other inflammatory conditions. However, the interpretation of elevated values must take into account the inflammatory response induced by tobacco exposure. <s> BIB012 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> “What can be asserted without evidence can also be dismissed without evidence.” —Christopher Hitchens [1949–2011], British journalist and writer (1). <s> BIB013 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> BACKGROUND AND OBJECTIVES ::: Previous limited experiences have reported the 19-gauge flexible needle to be highly effective in performing endoscopic ultrasound-guided fine needle biopsy (EUS-FNB) for transduodenal lesions. We designed a large multicenter prospective study with the aim at evaluating the performance of this newly developed needle. ::: ::: ::: PATIENTS AND METHODS ::: Consecutive patients with solid lesions who needed to undergo EUS sampling from the duodenum were enrolled in 6 tertiary care referral centers. Puncture of the lesion was performed with the 19-gauge flexible needle (Expect™ and Slimline Expect™ 19 Flex). The feasibility, procurement yield, and diagnostic accuracy were evaluated. ::: ::: ::: RESULTS ::: Totally, 246 patients (144 males, mean age 65.1 ± 12.7 years) with solid lesions (203 cases) or enlarged lymph nodes (43 cases) were enrolled, with a mean size of 32.6 ± 12.2 mm. The procedure was technically feasible in 228 patients, with an overall procurement yield of 76.8%. Two centers had suboptimal procurement yields (66.7% and 64.2%). Major complications occurred in six cases: two of bleeding, two of mild acute pancreatitis, one perforation requiring surgery, and one duodenal hematoma. Considering malignant versus nonmalignant disease, the sensitivity, specificity, positive/negative likelihood ratios, and diagnostic accuracy were 70.7% (95% confidence interval [CI]: 64.3-76.6), 100% (95% CI: 79.6-100), 35.3 (95% CI: 2.3-549.8)/0.3 (95% CI: 0.2-0.4), and 73.6% (95% CI: 67.6-79). On multivariate analysis, the only determinant of successful EUS-FNB was the center in which the procedure was performed. ::: ::: ::: CONCLUSIONS ::: Our results suggest that the use of the 19-gauge flexible needle cannot be widely advocated and its implementation should receive local validation after careful evaluation of both the technical success rates and diagnostic yield. <s> BIB014 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. 
<s> Purpose ::: To evaluate the diagnostic value of integrated positron emission tomography/magnetic resonance imaging (PET/MRI) compared with conventional multiparametric MRI and PET/computed tomography (CT) for the detailed and accurate segmental detection/localization of prostate cancer. ::: ::: Materials and Methods ::: Thirty-one patients who underwent integrated PET/MRI using 18F-choline and 18F-FDG with an integrated PET/MRI scanner followed by radical prostatectomy were included. The prostate was divided into six segments (sextants) according to anatomical landmarks. Three radiologists noted the presence and location of cancer in each sextant on four different image interpretation modalities in consensus (1, multiparametric MRI; 2, integrated 18F-FDG PET/MRI; 3, integrated 18F-choline PET/MRI; and 4, combined interpretation of 1 and 18F-FDG PET/CT). Sensitivity, specificity, accuracy, positive and negative predictive values, likelihood ratios, and diagnostic performance based on the DOR (diagnostic odds ratio) and NNM (number needed to misdiagnose) were evaluated for each interpretation modality, using the pathologic result as the reference standard. Detection rates of seminal vesicle invasion and extracapsular invasion were also evaluated. ::: ::: Results ::: Integrated 18F-choline PET/MRI showed significantly higher sensitivity than did multiparametric MRI alone in high Gleason score patients (77.0% and 66.2%, P = 0.011), low Gleason score patients (66.7% and 47.4%, P = 0.007), and total patients (72.5% and 58.0%, P = 0.008) groups. Integrated 18F-choline PET/MRI and 18F-FDG PET/MRI showed similar sensitivity and specificity to combined interpretation of multiparametric MRI and 18F-FDG PET/CT (for sensitivity, 58.0%, 63.4%, 72.5%, and 68.7%, respectively, and for specificity, 87.3%, 80.0%, 81.8%, 72.7%, respectively, in total patient group). However, integrated 18F-choline PET/MRI showed the best diagnostic performance (as DOR, 11.875 in total patients, 27.941 in high Gleason score patients, 5.714 in low Gleason score groups) among the imaging modalities, regardless of Gleason score. Integrated 18F-choline PET/MRI showed higher sensitivity and diagnostic performance than did integrated 18F-FDG PET/MRI (as DOR, 6.917 in total patients, 15.143 in high Gleason score patients, 3.175 in low Gleason score groups) in all three patient groups. ::: ::: Conclusion ::: Integrated PET/MRI carried out using a dedicated integrated PET/MRI scanner provides better sensitivity, accuracy, and diagnostic value for detection/localization of prostate cancer compared to multiparametric MRI. Generally, integrated 18F-choline PET/MRI shows better sensitivity, accuracy, and diagnostic performance than does integrated 18F-FDG PET/MRI as well as combined interpretation of multiparametric MRI with 18F-FDG PET/CT. J. Magn. Reson. Imaging 2016. <s> BIB015 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Biliary atresia is a progressive infantile cholangiopathy of complex pathogenesis. Although early diagnosis and surgery are the best predictors of treatment response, current diagnostic approaches are imprecise and time-consuming. We used large-scale, quantitative serum proteomics at the time of diagnosis of biliary atresia and other cholestatic syndromes (serving as disease controls) to identify biomarkers of disease. 
In a discovery cohort of 70 subjects, the lead biomarker was matrix metalloproteinase-7 (MMP-7), which retained high distinguishing features for biliary atresia in two validation cohorts. Notably, the diagnostic performance reached 95% when MMP-7 was combined with γ-glutamyltranspeptidase (GGT), a marker of cholestasis. Using human tissue and an experimental model of biliary atresia, we found that MMP-7 is primarily expressed by cholangiocytes, released upon epithelial injury, and promotes the experimental disease phenotype. Thus, we propose that serum MMP-7 (alone or in combination with GGT) is a diagnostic biomarker for biliary atresia and may serve as a therapeutic target. <s> BIB016 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Pleural or abdominal effusions are frequent findings in ICU and Internal Medicine patients. Diagnostic gold standard to distinguish between transudate and exudate is represented by “Light’s Criteria,” but, unfortunately, the chemical–physical examination for their calculation is not a rapid test. Pursuing an acid–base assessment of the fluid by a blood-gas analyzer, an increase of lactate beyond the normal serum range is reported in the exudative effusions. The advantages of this test are that it is a very fast bed-side test, executable directly by the physician. The aim of this study is to evaluate whether the increase in lactate in pleural and abdominal effusions might be used as a criterion for the differential diagnosis of the nature of the fluid. Sixty-nine patients with pleural or abdominal effusions and clinical indication for thoracentesis or paracentesis were enrolled. Acid–base assessment with lactate, total protein, and LDH dosage on the serum, and acid–base assessment with lactate, total protein, and LDH dosage, cytology, and bacterial culture on the fluid were performed to each patient. Fluid–blood lactate difference (ΔLacFB) and fluid–blood lactate ratio (LacFB ratio) were calculated. A statistical analysis was carried out for fluid lactate (LacF), ΔLacFB, and LacFB ratio, performing ROC curves to find the cut-off values with best sensitivity (Sn) and specificity (Sp) predicting an exudate diagnosis: LacF: cut-off value: 2.4 mmol/L; AU-ROC 0.854 95% CI 0.756–0.952; Sn 0.77; Sp 0.84. ΔLacFB: cut-off value: 0.95 mmol/L; Au-ROC 0.876 95% CI 0.785–0.966; Sn 0.80; Sp 0.92. LacFB ratio: cut-off value: 2 mmol/L; Au-ROC 0.730 95% CI 0.609–0.851; Sn 0.74; Sp 0.65. Lactate dosage by blood-gas analyzer on pleural and abdominal effusions seems to be a promising tool to predict a diagnosis of exudate. <s> BIB017 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Background Stroke-associated pneumonia is a leading cause of in-hospital death and post-stroke outcome. Screening patients at high risk is one of the main challenges in acute stroke units. Several screening tests have been developed, but their feasibility and validity still remain unclear. Objective The aim of our study was to evaluate the validity of four risk scores (Pneumonia score, A2DS2, ISAN score, and AIS-APS) in a population of ischemic stroke patients admitted in a French stroke unit. Methods Consecutive ischemic stroke patients admitted to a stroke unit were retrospectively analyzed. Data that allowed to retrospectively calculate the different pneumonia risk scores were recorded. 
Sensitivity and specificity of each score were assessed for in-hospital stroke-associated pneumonia and mortality. The qualitative and quantitative accuracy and utility of each diagnostic screening test were assessed by measuring the Youden Index and the Clinical Utility Index. Results Complete data were available for only 1960 patients. Pneumonia was observed in 8.6% of patients. Sensitivity and specificity were, respectively, .583 and .907 for Pneumonia score, .744 and .796 for A2DS2, and .696 and .812 for ISAN score. Data were insufficient to test AIS-APS. Stroke-associated pneumonia risk scores had an excellent negative Clinical Utility Index (.77-.87) to screen for in-hospital risk of pneumonia after acute ischemic stroke. Conclusion All scores might be useful and applied to screen stroke-associated pneumonia in stroke patients treated in French comprehensive stroke units. <s> BIB018 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> CONTEXT ::: India is currently becoming capital for diabetes mellitus. This significantly increasing incidence of diabetes putting an additional burden on health care in India. Unfortunately, half of diabetic individuals are unknown about their diabetic status. Hence, there is an emergent need of effective screening instrument to identify "diabetes risk" individuals. ::: ::: ::: AIMS ::: The aim is to evaluate and compare the diagnostic accuracy and clinical utility of Indian Diabetes Risk Score (IDRS) and Finnish Diabetes Risk Score (FINDRISC). ::: ::: ::: SETTINGS AND DESIGN ::: This is retrospective, record-based study of diabetes detection camp organized by a teaching hospital. Out of 780 people attended this camp voluntarily only 763 fulfilled inclusion criteria of the study. ::: ::: ::: SUBJECTS AND METHODS ::: In this camp, pro forma included the World Health Organization STEP guidelines for surveillance of noncommunicable diseases. Included primary sociodemographic characters, physical measurements, and clinical examination. After that followed the random blood glucose estimation of each individual. ::: ::: ::: STATISTICAL ANALYSIS USED ::: Diagnostic accuracy of IDRS and FINDRISC compared by using receiver operative characteristic curve (ROC). Sensitivity, specificity, likelihood ratio, positive predictive and negative predictive values were compared. Clinical utility index (CUI) of each score also compared. SPSS version 22, Stata 13, R3.2.9 used. ::: ::: ::: RESULTS ::: Out of 763 individuals, 38 were new diabetics. By IDRS 347 and by FINDRISC 96 people were included in high-risk category for diabetes. Odds ratio for high-risk people in FINDRISC for getting affected by diabetes was 10.70. Similarly, it was 4.79 for IDRS. Area under curves of ROCs of both scores were indifferent (P = 0.98). Sensitivity and specificity of IDRS was 78.95% and 56.14%; whereas for FINDRISC it was 55.26% and 89.66%, respectively. CUI was excellent (0.86) for FINDRISC while IDRS it was "satisfactory" (0.54). Bland-Altman plot and Cohen's Kappa suggested fair agreement between these score in measuring diabetes risk. ::: ::: ::: CONCLUSIONS ::: Diagnostic accuracy and clinical utility of FINDRISC is fairly good than IDRS. <s> BIB019 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. 
<s> Abstract Purpose Up to 60% of people with epilepsy (PwE) have psychiatric comorbidity including anxiety. Anxiety remains under recognized in PwE. This study investigates if screening tools validated for depression could be used to detect anxiety disorders in PWE. Additionally it analyses the effect of anxiety on QoL. Method 261 participants with a confirmed diagnosis of epilepsy were included. Neurological Disorders Depression Inventory for Epilepsy (NDDI-E) and Emotional Thermometers (ET), both validated to screen for depression were used. Hospital Anxiety and Depression Scale-Anxiety (HADS-A) with a cut off for moderate and severe anxiety was used as the reference standard. QoL was measured with EQ5-D. Sensitivity, specificity, positive and negative predictive value and ROC analysis as well as multivariate regression analysis were performed. Results Patients with depression (n=46) were excluded as multivariate regression analysis showed that depression was the only significant determinant of having anxiety in the group. Against HADS-A, NDDI-E and ET-7 showed highest level of accuracy in recognizing anxiety with ET7 being the most effective tool. QoL was significantly reduced in PwE and anxiety. Conclusion Our study showed that reliable screening for moderate to severe anxiety in PwE without co-morbid depression is feasible with screening tools for depression. The cut off values for anxiety are different from those for depression in ET7 but very similar in NDDI-E. ET7 can be applied to screen simultaneously for depression and "pure" anxiety. Anxiety reduces significantly QoL. We recommend screening as an initial first step to rule out patients who are unlikely to have anxiety. <s> BIB020 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Background ::: A clinical and research challenge is to identify which depressed youth are at risk of "early transition to bipolar disorders (ET-BD)." This 2-part study (1) examines the clinical utility of previously reported BD at-risk (BAR) criteria in differentiating ET-BD cases from unipolar depression (UP) controls; and (2) estimates the Number Needed to Screen (NNS) for research and general psychiatry settings. ::: ::: ::: Methods ::: Fifty cases with reliably ascertained, ET-BD I and II cases were matched for gender and birth year with 50 UP controls who did not develop BD over 2 years. We estimated the clinical utility for finding true cases and screening out non-cases for selected risk factors and their NNS. Using a convenience sample (N = 80), we estimated the NNS when adjustments were made to account for data missing from clinical case notes. ::: ::: ::: Results ::: Sub-threshold mania, cyclothymia, family history of BD, atypical depression symptoms and probable antidepressant-emergent elation, occurred significantly more frequently in ET-BD youth. Each of these "BAR-Depression" criteria demonstrated clinical utility for screening out non-cases. Only cyclothymia demonstrated good utility for case finding in research settings; sub-threshold mania showed moderate utility. In the convenience sample, the NNS for each criterion ranged from ~4 to 7. ::: ::: ::: Conclusions ::: Cyclothymia showed the optimum profile for case finding, screening and NNS in research settings. However, its presence or absence was only reported in 50% of case notes. 
Future studies of ET-BD instruments should distinguish which criteria have clinical utility for case finding vs screening. <s> BIB021 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> BACKGROUND ::: Experts in the autoimmune paraneoplastic field recommend autoantibody testing as "panels" to improve the poor sensitivity of individual autoantibodies in detecting paraneoplastic neurological syndromes (PNS). The sensitivity of those panels was not reported to date in a fashion devoid of incorporation bias. We aimed to assess the collective sensitivity and specificity of one of the commonly used panels in detecting PNS. ::: ::: ::: METHODS ::: A single-centered retrospective cohort of all patients tested for paraneoplastic evaluation panel (PAVAL; test ID: 83380) over one year for the suspicion of PNS. Case adjudication was based on newly proposed diagnostic criteria in line with previously published literature, but modified to exclude serological status to avoid incorporation bias. Measures of diagnostic accuracy were subsequently calculated. Cases that failed to show association with malignancy within the follow-up time studied, reflecting a possibly pure autoimmune process was considered paraneoplastic-like syndromes. ::: ::: ::: RESULTS ::: Out of 321 patients tested, 51 patients tested positive. Thirty-two patients met diagnostic criteria for paraneoplastic/paraneoplastic-like syndromes. The calculated collective sensitivity was 34% (95% CI: 17-53), specificity was 86% (95% CI: 81-90), Youden's index 0.2 and a positive clinical utility index 0.07 suggesting poor utility for case-detection. ::: ::: ::: CONCLUSION ::: This is the first reported diagnostic accuracy measures of paraneoplastic panels without incorporation bias. Despite recommended panel testing to improve detection of PNS, sensitivity remains low with poor utility for case-detection. The high-calculated specificity suggests a possible role in confirming the condition in difficult cases suspicious for PNS, when enough supportive evidence is lacking on ancillary testing. <s> BIB022 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Background ::: The current prevalence of the condition is not yet known. No screening tool for the condition exists. By developing a questionnaire that may be used by community health workers, the study is intended to be the first step in identifying the prevalence of X-linked dystonia parkinsonism (XDP). ::: ::: Aim ::: To develop and validate a simple, easy to use, community-based, screening questionnaire for the diagnosis of XDP ::: ::: Methods ::: Community health workers administered an 11-item yes/no questionnaire, in the native Panay island language on 54 genetically-confirmed XDP patients and 54 healthy controls all from the island of Panay. The questionnaire is made up of elements from existing questionnaires on Parkinson's disease and dystonia, and known clinical features of XDP. The subjects were partitioned into training (n= 88) and test (n= 20) data sets. To select which items were predictive of XDP the Clinical Utility Index (CUI) of each item was determined. Afterwards, multivariable binary logistic regression was done to build a predictive model that was subsequently run on the test data set. 
::: ::: Results ::: Four items on ‘sustained twisting’, ‘jaw opening and closing’, ‘slowness in movement’ and ‘shuffling steps’ were found to be the most predictive of XDP. All had at least a ‘good’ CUI. The questions demonstrated 100% sensitivity and 90% specificity (95% CI: 65.6-100%) in identifying XDP suspects. ::: ::: Conclusion ::: The resulting 4-item questionnaire was found to be predictive of XDP. The screening instrument can be used to screen for XDP in a large-scale population-based prevalence study. ::: ::: This article is protected by copyright. All rights reserved. <s> BIB023 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> OBJECTIVE ::: This study examined whether previously reported results, indicating that prostate-specific antigen (PSA) screening can reduce prostate cancer (PC) mortality regardless of sociodemographic inequality, could be corroborated in an 18 year follow-up. ::: ::: ::: MATERIALS AND METHODS ::: In 1994, 20,000 men aged 50-64 years were randomized from the Göteborg population register to PSA screening or control (1:1) (study ID: ISRCTN54449243). Men in the screening group (n = 9950) were invited for biennial PSA testing up to the median age of 69 years. Prostate biopsy was recommended for men with PSA ≥2.5 ng/ml. Last follow-up was on 31 December 2012. ::: ::: ::: RESULTS ::: In the screening group, 77% (7647/9950) attended at least once. After 18 years, 1396 men in the screening group and 962 controls had been diagnosed with PC [hazard ratio 1.51, 95% confidence interval (CI) 1.39-1.64]. Cumulative PC mortality was 0.98% (95% CI 0.78-1.22%) in the screening group versus 1.50% (95% CI 1.26-1.79%) in controls, an absolute reduction of 0.52% (95% CI 0.17-0.87%). The rate ratio (RR) for PC death was 0.65 (95% CI 0.49-0.87). To prevent one death from PC, the number needed to invite was 231 and the number needed to diagnose was 10. Systematic PSA screening demonstrated greater benefit in PC mortality for men who started screening at age 55-59 years (RR 0.47, 95% CI 0.29-0.78) and men with low education (RR 0.49, 95% CI 0.31-0.78). ::: ::: ::: CONCLUSIONS ::: These data corroborate previous findings that systematic PSA screening reduces PC mortality and suggest that systematic screening may reduce sociodemographic inequality in PC mortality. <s> BIB024 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> BACKGROUND ::: The diagnosis of pediatric septic arthritis (SA) can be challenging due to wide variability in the presentation of musculoskeletal infection. Synovial fluid Gram stain is routinely obtained and often used as an initial indicator of the presence or absence of pediatric SA. The purpose of this study was to examine the clinical utility of the Gram stain results from a joint aspiration in the diagnosis and management of pediatric SA. ::: ::: ::: METHODS ::: All patients with suspected SA who underwent arthrocentesis and subsequent surgical irrigation and debridement at an urban tertiary care children's hospital between January 2007 and October 2016 were identified. Results of the synovial fluid Gram stain, as well as synovial cell count/differential and serum markers, were evaluated. ::: ::: ::: RESULTS ::: A total of 302 patients that underwent incision and drainage for suspected SA were identified. 
In total, 102 patients (34%) had positive synovial fluid cultures and 47 patients (16%) had a microorganism detected on Gram stain. Gram stain sensitivity and specificity for the detection of SA were 0.40 and 0.97, respectively. This yielded a number needed to misdiagnose of 4.5 (ie, every fifth patient was misdiagnosed by Gram stain). For gram-negative organisms, the sensitivity dropped further to 0.13, with only 2/16 gram-negative organisms identified on Gram stain. Stepwise regression showed that age, serum white blood cell, and absolute neutrophil count were significant independent predictors for having a true positive Gram stain result. Elevated synovial white blood cell count was a significant predictor of having an accurate (culture matching the Gram stain) result. ::: ::: ::: CONCLUSIONS ::: The Gram stain result is a poor screening tool for the detection of SA and is particularly ineffective for the detection of gram-negative organisms. The clinical relevance of the Gram stain and cost-effectiveness of this test performed on every joint aspiration sent for culture requires additional evaluation. Patients with gram-negative SA may be at high risk for inadequate coverage with empiric antibiotics due to poor detection of gram-negative organisms on initial Gram stain. ::: ::: ::: LEVEL OF EVIDENCE ::: Level III-case-control study. <s> BIB025 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Purpose ::: Accurate pain assessment is critical to detect pain and facilitate effective pain management in dementia patients. The electronic Pain Assessment Tool (ePAT) is a point-of-care solution that uses automated facial analysis in conjunction with other clinical indicators to evaluate the presence and intensity of pain in patients with dementia. This study aimed to examine clini-metric properties (clinical utility and predictive validity) of the ePAT in this population group. ::: ::: ::: Methods ::: Data were extracted from a prospective validation (observational) study of the ePAT in dementia patients who were ≥65 years of age, living in a facility for ≥3 months, and had Psychogeriatric Assessment Scales - cognitive scores ≥10. The study was conducted in two residential aged-care facilities in Perth, Western Australia, where residents were sampled using purposive convenience strategy. Predictive validity was measured using accuracy statistics (sensitivity, specificity, positive predictive value, and negative predictive value). Positive and negative clinical utility index (CUI) scores were calculated using Mitchell's formula. Calculations were based on comparison with the Abbey Pain Scale, which was used as a criterion reference. ::: ::: ::: Results ::: A total of 400 paired pain assessments for 34 residents (mean age 85.5±6.3 years, range 68.0-93.2 years) with moderate-severe dementia (Psychogeriatric Assessment Scales - cognitive score 11-21) were included in the analysis. Of those, 303 episodes were classified as pain by the ePAT based on a cutoff score of 7. Unadjusted prevalence findings were sensitivity 96.1% (95% CI 93.9%-98.3%), specificity 91.4% (95% CI 85.7%-97.1%), accuracy 95.0% (95% CI 92.9%-97.1%), positive predictive value 97.4% (95% CI 95.6%-99.2%), negative predictive value 87.6% (95% CI 81.1%-94.2%), CUI+ 0.936 (95% CI 0.911-0.960), CUI- 0.801 (95% CI 0.748-0.854). 
::: ::: ::: Conclusion ::: The clinimetric properties demonstrated were excellent, thus supporting the clinical usefulness of the ePAT when identifying pain in patients with moderate-severe dementia. <s> BIB026 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Performances of a Diagnostic Test by Examples. <s> Abstract Introduction Age and years of education influence the risk of dementia and may impact the prognostic accuracy of mild cognitive impairment subtypes. Methods Memory clinic patients without dementia (N = 358, age 64.0 ± 7.9) were stratified into four groups based on years of age (≤64 and ≥65) and education (≤12 and ≥13), examined with a neuropsychological test battery at baseline and followed up after 2 years. Results The prognostic accuracy of amnestic multi-domain mild cognitive impairment for dementia was highest in younger patients with more years of education and lowest in older patients with fewer years of education. Conversely, conversion rates to dementia were lowest in younger patients with more years of education and highest in older patients with fewer years of education. Discussion Mild cognitive impairment subtypes and demographic information should be combined to increase the accuracy of prognoses for dementia. <s> BIB027
The body mass index (BMI) was identified as a predictor marker of breast cancer risk in an Iranian population BIB003 , with an AUC of 0.79 (95% CI: 0.74 to 0.84). A simulation dataset was used to illustrate how the performances of a diagnostic test could be evaluated, treating the BMI as a marker for breast cancer, and the ROC curve with the associated AUC is presented in Figure 1. The ROC curve graphically represents the pairs of Se and (1 − Sp) for different cutoff values. The AUC of 0.825 proved significantly different from 0.5 (p < 0.001), and the point estimator indicates a good accuracy, but an interpretation based on the lower bound of the 95% confidence interval would be more conservative (Table 11). A cutoff with a low value is chosen whenever the aim is to minimize the number of false negatives, assuring a Se of 1 (19.5 kg/m², TP = 100, Table 10). If a test able to correctly classify the true negatives is desired, the value of the cutoff must be high (38.5 kg/m², TN = 200, Table 11), assuring a Sp of 1. The analysis of the performance metrics for our simulation dataset showed that the maximum CUI+ and CUI− values are obtained for the cutoff value identified by the J index, supporting the usefulness of the BMI for screening, not for case finding. The accuracy analysis is reported frequently in the scientific literature, both in primary and secondary studies. Different actors such as the authors, reviewers, and editors could contribute to the quality of the statistics reported. The evaluation of plasma chitotriosidase as a biomarker in critical limb ischemia reported the AUC with associated 95% confidence intervals and cutoff values BIB012 , but no information on patient-centered metrics or utility indications is provided. Similar parameters as reported by Ciocan et al. BIB012 have also been reported in the evaluation of sonoelastographic scores in the differentiation of benign from malignant cervical lymph nodes. Lei et al. conducted a secondary study to evaluate the accuracy of digital breast tomosynthesis versus digital mammography in discriminating between malignant and benign breast lesions and correctly reported Se, Sp, PLR, NLR, and DOR for both the studies included in the analysis and the pooled values BIB005 . However, insufficient details are provided in regard to the ROC analysis (e.g., no AUC confidence intervals are reported) or any utility index BIB005 . Furthermore, Lei et al. reported the Q* index, which reflects the point on the SROC (summary receiver operating characteristic) curve at which Se equals Sp and which could be useful in specific clinical situations BIB005 .
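To make these cutoff-dependent metrics concrete, the Python sketch below computes Se, Sp, Youden's J, and the clinical utility indexes (CUI+ = Se × PPV for case finding, CUI− = Sp × NPV for screening) for a few candidate cutoffs of a continuous marker. The marker values, disease labels, and cutoffs are invented for demonstration only and do not reproduce the simulation dataset or the tables discussed above.

```python
# Illustrative cutoff evaluation for a continuous marker (e.g., BMI) against a
# binary disease status.  The data below are invented for demonstration and are
# NOT the simulation dataset analysed in the text.

def confusion(values, diseased, cutoff):
    """Count TP, FP, TN, FN when 'test positive' means value >= cutoff."""
    tp = sum(1 for v, d in zip(values, diseased) if v >= cutoff and d)
    fp = sum(1 for v, d in zip(values, diseased) if v >= cutoff and not d)
    fn = sum(1 for v, d in zip(values, diseased) if v < cutoff and d)
    tn = sum(1 for v, d in zip(values, diseased) if v < cutoff and not d)
    return tp, fp, tn, fn

def metrics(tp, fp, tn, fn):
    se = tp / (tp + fn)                                   # sensitivity
    sp = tn / (tn + fp)                                   # specificity
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")   # positive predictive value
    npv = tn / (tn + fn) if (tn + fn) else float("nan")   # negative predictive value
    j = se + sp - 1                                       # Youden's index
    cui_pos = se * ppv                                    # CUI+ (case finding)
    cui_neg = sp * npv                                    # CUI- (screening)
    return se, sp, j, cui_pos, cui_neg

# hypothetical marker values and disease labels (1 = disease, 0 = no disease)
bmi = [19.0, 21.5, 23.0, 24.5, 26.0, 27.5, 29.0, 31.0, 33.5, 36.0, 38.5, 41.0]
status = [0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1]

for cutoff in (22.0, 26.0, 30.0, 34.0):
    se, sp, j, cui_p, cui_n = metrics(*confusion(bmi, status, cutoff))
    print(f"cutoff={cutoff:5.1f}  Se={se:.2f}  Sp={sp:.2f}  J={j:.2f}  "
          f"CUI+={cui_p:.2f}  CUI-={cui_n:.2f}")
```

Running the loop over a denser grid of cutoffs reproduces the familiar trade-off: low cutoffs maximize Se (no false negatives), high cutoffs maximize Sp (no false positives), and the cutoff maximizing J usually sits in between.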
The number needed to diagnose (NND) and the number needed to misdiagnose (NNM) are currently used in the identification of the cutoff value on continuous diagnostic test results , in methodological articles, or in teaching materials BIB013 ; a computational sketch of these two indexes is given after this paragraph. The NND and NNM are less frequently reported in the evaluation of the accuracy of a diagnostic test. Several examples identified in the available scientific literature are as follows: color duplex ultrasound in the diagnosis of carotid stenosis , culture-based diagnosis of tuberculosis BIB004 , prostate-specific antigen BIB024 , endoscopic ultrasound-guided fine needle biopsy with a 19-gauge flexible needle BIB014 , the number needed to screen for prostate cancer BIB002 , integrated positron emission tomography/magnetic resonance imaging (PET/MRI) for segmental detection/localization of prostate cancer BIB015 , serum malondialdehyde in the evaluation of exposure to chromium BIB009 , the performances of matrix metalloproteinase-7 (MMP-7) in the diagnosis of epithelial injury and of biliary atresia BIB016 , lactate as a diagnostic marker of pleural and abdominal exudate BIB017 , the Gram stain from a joint aspiration in the diagnosis of pediatric septic arthritis BIB025 , and the performances of a sepsis algorithm in an emergency department BIB010 . Unfortunately, the NND or NNM point estimators are not always reported with the associated 95% confidence intervals BIB004 BIB002 BIB015 BIB017 BIB025 . The reporting of the clinical utility index (CUI) is more frequently seen in the evaluation of questionnaires. The grades, not the values, of the CUIs were reported by Michell et al. BIB001 in the assessment of a semistructured diagnostic interview as a diagnostic tool for major depressive disorder. Johansson et al. BIB006 reported both the CUI+ value and its interpretation in cognitive evaluation using Cognistat. The CUI+/CUI− reported by Michell et al. on the patient health questionnaire for depression in primary care (PHQ-9 and PHQ-2) is given as a value with the associated 95% confidence interval as well as its interpretation. The CUI+ and CUI− values and associated confidence intervals were also reported by Fereshtehnejad et al. BIB007 in the evaluation of a screening questionnaire for Parkinsonism, but just for the significant items. Fereshtehnejad et al. BIB007 also used the values of CUI+ and CUI− to select the optimal screening items whenever the value of the point estimator was higher than 0.63. Bartoli et al. BIB011 represented the values of the CUI graphically as column bars (not necessarily correct, since the CUI is a single value and a column could suggest that it is a range of values) in the evaluation of a questionnaire for alcohol use disorder on different subgroups. The accurate reporting of CUIs as values with associated confidence intervals can also be seen in some articles BIB026 BIB008 , but it is not a common practice BIB027 BIB018 BIB019 BIB020 BIB021 BIB022 BIB023 . Besides the commercial statistical programs able to assist researchers in conducting an accuracy analysis for a diagnostic test, several free online (Table 12) or offline applications exist (CATmaker [208] and CIcalculator ). Smartphone applications have also been developed to assist daily clinical practice. The free DocNomo application for iPhone/iPad allows calculation of the posttest probability using the two-step Fagan nomogram. Other available applications are Bayes' posttest probability calculator, EBM Tools app, and EBM Stats Calc. Allen et al. and Power et al.
implemented two online tools for the visual examination of the effect of Se, Sp, and prevalence on the TP, FP, FN, and TN values and on the evaluation of other test performance metrics.
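As referenced above, a minimal sketch of the two count-based indexes follows, assuming the usual definitions NND = 1/(Se + Sp − 1) (the reciprocal of Youden's J) and NNM = 1/(1 − accuracy), with accuracy weighted by prevalence. The inputs reuse the rounded figures of the Gram stain example cited earlier (Se ≈ 0.40, Sp ≈ 0.97, prevalence ≈ 0.34); the function names are ours.

```python
# Number needed to diagnose (NND) and number needed to misdiagnose (NNM),
# assuming the usual definitions:
#   NND = 1 / (Se + Sp - 1)                 (reciprocal of Youden's J)
#   NNM = 1 / (1 - accuracy),  accuracy = Se*prev + Sp*(1 - prev)
# Input values are the rounded figures of the Gram stain example cited earlier.

def nnd(se, sp):
    youden = se + sp - 1
    if youden <= 0:
        raise ValueError("NND is undefined when Youden's index is not positive")
    return 1 / youden

def nnm(se, sp, prevalence):
    accuracy = se * prevalence + sp * (1 - prevalence)
    return 1 / (1 - accuracy)

if __name__ == "__main__":
    se, sp, prev = 0.40, 0.97, 0.34
    print(f"NND = {nnd(se, sp):.1f}")        # tests needed for one additional correct detection
    print(f"NNM = {nnm(se, sp, prev):.1f}")  # tests needed for one misdiagnosis
```

With these inputs the sketch returns an NNM of about 4.5, consistent with the value reported in the cited Gram stain study.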
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> This paper designs an object-oriented, continuous-time, full simulation model for addressing a wide range of clinical, procedural, administrative, and financial decisions in health care at a high level of biological, clinical, and administrative detail. The full model has two main parts, which with some simplification can be designated "physiology models" and "models of care processes." The models of care processes, although highly detailed, are mathematically straightforward. However, the mathematics that describes human biology, diseases, and the effects of interventions are more difficult. This paper describes the mathematical formulation and methods for deriving equations, for a variety of different sources of data. Although Archimedes was originally designed for health care applications, the formulation, and equations are general and can be applied to many natural systems. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> This is a review of the Health Utilities Index (HUI®) multi-attribute health-status classification systems, and single- and multi-attribute utility scoring systems. HUI refers to both HUI Mark 2 (HUI2) and HUI Mark 3 (HUI3) instruments. The classification systems provide compact but comprehensive frameworks within which to describe health status. The multi-attribute utility functions provide all the information required to calculate single-summary scores of health-related quality of life (HRQL) for each health state defined by the classification systems. The use of HUI in clinical studies for a wide variety of conditions in a large number of countries is illustrated. HUI provides comprehensive, reliable, responsive and valid measures of health status and HRQL for subjects in clinical studies. Utility scores of overall HRQL for patients are also used in cost-utility and cost-effectiveness analyses. Population norm data are available from numerous large general population surveys. The widespread use of HUI facilitates the interpretation of results and permits comparisons of disease and treatment outcomes, and comparisons of long-term sequelae at the local, national and international levels. <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> Correlation coefficients and their associated squared values are examined for the validation of estimates of the activity of biological compounds when a molecular descriptors family is used in the framework of structure-activity relationship (SAR) methods [1]. Starting with the assumption that the measured activity of a biologically active compound is a semiquantitative outcome, we examined Pearson, Spearman, and Kendall’s correlation coefficients. Toxicity descriptors of sixty-seven biologic active compounds were analyzed by applying the molecular descriptors family using SAR modeling. The correlation between the measured toxicity and that estimated by the best performing model was investigated by applying the Pearson, Spearman and Kendall's τa , τb , τc squared correlation coefficient. The results obtained were express as squared correlation coefficients, 95% confidence intervals (CI) of correlation coefficient, Student's t or Z test value, and theirs associated pvalue. 
They were as follows: Pearson: rPrs 2 = 0.90577, [0.9223, 0.9701], tPrs = 24.99 (p < 0.0001); Spearman: ρSpm 2 = 0.86064, [0.8846, 0.9550], tSpm = 20.03 (p < 0.0001); Kendall's τa: τKen,a 2 = 0.61294, [0.6683, 0.8611], ZKen,τa = 9.37 (p < 0.0001); Kendall's τb: τKen,b 2 = 0.61769, [0.6726, 0.8631], ZKen,τb = 9.37 (p < 0.0001); Kendall's τc: τKen,c 2 = 0.59478, [0.6517, 0.8533], ZKen,τc = 9.23 (p < 0.0001) We remark, that the toxicity of biologically active compounds is a semi-quantitative variable and that its determination may depend on various external factors, e.g. the type of equipment used, the researcher's skills and performance, the type and class of chemicals used. Under those circumstances, a rank correlation coefficient would provide a more reliable estimate of the association than the parametric Pearson coefficient. Our study shows that all five computational methods used to evaluate the squared correlation coefficients resulted in a statistically significant p-value (always less than 0.0001). As expected, lower values of squared correlation coefficients were obtained with Kendall’s methods, and the 95% CI associated with the correlation coefficients overlapped. Looking at the correlation coefficients and their 95% CI calculated with the Pearson and Spearman formulas and how they overlap with the Kendall's τa , τb , τc squared correlation coefficients we suggest that there are no significant differences between them. More research on other classes of biologic active compounds may reveal whether it is appropriate to analyze the activity of molecular descriptors family based on SAR methods using the Pearson correlation coefficient or whether a rank correlation coefficient must be applied <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> Importance Increased use of computed tomography (CT) in pediatrics raises concerns about cancer risk from exposure to ionizing radiation. Objectives To quantify trends in the use of CT in pediatrics and the associated radiation exposure and cancer risk. Design Retrospective observational study. Setting Seven US health care systems. Participants The use of CT was evaluated for children younger than 15 years of age from 1996 to 2010, including 4 857 736 child-years of observation. Radiation doses were calculated for 744 CT scans performed between 2001 and 2011. Main Outcomes and Measures Rates of CT use, organ and effective doses, and projected lifetime attributable risks of cancer. Results The use of CT doubled for children younger than 5 years of age and tripled for children 5 to 14 years of age between 1996 and 2005, remained stable between 2006 and 2007, and then began to decline. Effective doses varied from 0.03 to 69.2 mSv per scan. An effective dose of 20 mSv or higher was delivered by 14% to 25% of abdomen/pelvis scans, 6% to 14% of spine scans, and 3% to 8% of chest scans. Projected lifetime attributable risks of solid cancer were higher for younger patients and girls than for older patients and boys, and they were also higher for patients who underwent CT scans of the abdomen/pelvis or spine than for patients who underwent other types of CT scans. For girls, a radiation-induced solid cancer is projected to result from every 300 to 390 abdomen/pelvis scans, 330 to 480 chest scans, and 270 to 800 spine scans, depending on age. 
The risk of leukemia was highest from head scans for children younger than 5 years of age at a rate of 1.9 cases per 10 000 CT scans. Nationally, 4 million pediatric CT scans of the head, abdomen/pelvis, chest, or spine performed each year are projected to cause 4870 future cancers. Reducing the highest 25% of doses to the median might prevent 43% of these cancers. Conclusions and Relevance The increased use of CT in pediatrics, combined with the wide variability in radiation doses, has resulted in many children receiving a high-dose examination. Dose-reduction strategies targeted to the highest quartile of doses could dramatically reduce the number of radiation-induced cancers. <s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> BACKGROUND & AIMS ::: Colorectal cancer (CRC) screening guidelines recommend screening schedules for each single type of test except for concurrent sigmoidoscopy and fecal occult blood test (FOBT). We investigated the cost-effectiveness of a hybrid screening strategy that was based on a fecal immunological test (FIT) and colonoscopy. ::: ::: ::: METHODS ::: We conducted a cost-effectiveness analysis by using the Archimedes Model to evaluate the effects of different CRC screening strategies on health outcomes and costs related to CRC in a population that represents members of Kaiser Permanente Northern California. The Archimedes Model is a large-scale simulation of human physiology, diseases, interventions, and health care systems. The CRC submodel in the Archimedes Model was derived from public databases, published epidemiologic studies, and clinical trials. ::: ::: ::: RESULTS ::: A hybrid screening strategy led to substantial reductions in CRC incidence and mortality, gains in quality-adjusted life years (QALYs), and reductions in costs, comparable with those of the best single-test strategies. Screening by annual FIT of patients 50-65 years old and then a single colonoscopy when they were 66 years old (FIT/COLOx1) reduced CRC incidence by 72% and gained 110 QALYs for every 1000 people during a period of 30 years, compared with no screening. Compared with annual FIT, FIT/COLOx1 gained 1400 QALYs/100,000 persons at an incremental cost of $9700/QALY gained and required 55% fewer FITs. Compared with FIT/COLOx1, colonoscopy at 10-year intervals gained 500 QALYs/100,000 at an incremental cost of $35,100/QALY gained but required 37% more colonoscopies. Over the ranges of parameters examined, the cost-effectiveness of hybrid screening strategies was slightly more sensitive to the adherence rate with colonoscopy than the adherence rate with yearly FIT. Uncertainties associated with estimates of FIT performance within a program setting and sensitivities for flat and right-sided lesions are expected to have significant impacts on the cost-effectiveness results. ::: ::: ::: CONCLUSIONS ::: In our simulation model, a strategy of annual or biennial FIT, beginning when patients are 50 years old, with a single colonoscopy when they are 66 years old, delivers clinical and economic outcomes similar to those of CRC screening by single-modality strategies, with a favorable impact on resources demand. <s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> BACKGROUND AND AIMS ::: Suspected latent tuberculosis infection (LTBI) is a common reason for referral to TB clinics. 
Interferon-gamma release assays (IGRAs) are more specific than tuberculin skin tests (TSTs) for diagnosing LTBI. The aim of this study is to determine if IGRA changes practice in the management of cases referred to a TB clinic for possible LTBI. ::: ::: ::: DESIGN AND METHODS ::: A prospective study was performed over 29 months. All adult patients who had TST, CXR & IGRA were included. The original decision regarding TB chemoprophylaxis was made by TB team consensus, based on clinical history and TST. Cases were then analysed with the addition of IGRA to determine if this had altered management. An independent physician subsequently reviewed the cases. ::: ::: ::: RESULTS ::: Of 204 patients studied, 68 were immunocompromised. 120 patients had positive TSTs. Of these, 36 (30%) had a positive QFT and 84 (70%) had a negative QFT. Practice changed in 78 (65%) cases with positive TST, all avoiding TB chemoprophylaxis due to QFT. Of the immunocompromised patients, 17 (25%) underwent change of practice. No cases of active TB have developed. ::: ::: ::: CONCLUSION ::: This study demonstrates a significant change of clinical practice due to IGRA use. Our findings support the NICE 2011 recommendations. <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> BACKGROUND: Early diagnosis of acute myocardial infarction (AMI) can ensure quick and effective treatment but only 20% of adults with emergency admissions for chest pain have an AMI. High-sensitivity cardiac troponin (hs-cTn) assays may allow rapid rule-out of AMI and avoidance of unnecessary hospital admissions and anxiety. OBJECTIVE: To assess the clinical effectiveness and cost-effectiveness of hs-cTn assays for the early (within 4 hours of presentation) rule-out of AMI in adults with acute chest pain. METHODS: Sixteen databases, including MEDLINE and EMBASE, research registers and conference proceedings, were searched to October 2013. Study quality was assessed using QUADAS-2. The bivariate model was used to estimate summary sensitivity and specificity for meta-analyses involving four or more studies, otherwise random-effects logistic regression was used. The health-economic analysis considered the long-term costs and quality-adjusted life-years (QALYs) associated with different troponin (Tn) testing methods. The de novo model consisted of a decision tree and Markov model. A lifetime time horizon (60 years) was used. RESULTS: Eighteen studies were included in the clinical effectiveness review. The optimum strategy, based on the Roche assay, used a limit of blank (LoB) threshold in a presentation sample to rule out AMI [negative likelihood ratio (LR-) 0.10, 95% confidence interval (CI) 0.05 to 0.18]. Patients testing positive could then have a further test at 2 hours; a result above the 99th centile on either sample and a delta (Δ) of ≥ 20% has some potential for ruling in an AMI [positive likelihood ratio (LR+) 8.42, 95% CI 6.11 to 11.60], whereas a result below the 99th centile on both samples and a Δ of < 20% can be used to rule out an AMI (LR- 0.04, 95% CI 0.02 to 0.10). The optimum strategy, based on the Abbott assay, used a limit of detection (LoD) threshold in a presentation sample to rule out AMI (LR- 0.01, 95% CI 0.00 to 0.08). 
Patients testing positive could then have a further test at 3 hours; a result above the 99th centile on this sample has some potential for ruling in an AMI (LR+ 10.16, 95% CI 8.38 to 12.31), whereas a result below the 99th centile can be used to rule out an AMI (LR- 0.02, 95% CI 0.01 to 0.05). In the base-case analysis, standard Tn testing was both most effective and most costly. Strategies considered cost-effective depending upon incremental cost-effectiveness ratio thresholds were Abbott 99th centile (thresholds of < £6597), Beckman 99th centile (thresholds between £6597 and £30,042), Abbott optimal strategy (LoD threshold at presentation, followed by 99th centile threshold at 3 hours) (thresholds between £30,042 and £103,194) and the standard Tn test (thresholds over £103,194). The Roche 99th centile and the Roche optimal strategy [LoB threshold at presentation followed by 99th centile threshold and/or Δ20% (compared with presentation test) at 1-3 hours] were extendedly dominated in this analysis. CONCLUSIONS: There is some evidence to suggest that hs-CTn testing may provide an effective and cost-effective approach to early rule-out of AMI. Further research is needed to clarify optimal diagnostic thresholds and testing strategies. STUDY REGISTRATION: This study is registered as PROSPERO CRD42013005939. FUNDING: The National Institute for Health Research Health Technology Assessment programme. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> Comparing diagnostic tests on accuracy alone can be inconclusive. For example, a test may have better sensitivity than another test yet worse specificity. Comparing tests on benefit risk may be more conclusive because clinical consequences of diagnostic error are considered. For benefit-risk evaluation, we propose diagnostic yield, the expected distribution of subjects with true positive, false positive, true negative, and false negative test results in a hypothetical population. We construct a table of diagnostic yield that includes the number of false positive subjects experiencing adverse consequences from unnecessary work-up. We then develop a decision theory for evaluating tests. The theory provides additional interpretation to quantities in the diagnostic yield table. It also indicates that the expected utility of a test relative to a perfect test is a weighted accuracy measure, the average of sensitivity and specificity weighted for prevalence and relative importance of false positive and false negative testing errors, also interpretable as the cost-benefit ratio of treating non-diseased and diseased subjects. We propose plots of diagnostic yield, weighted accuracy, and relative net benefit of tests as functions of prevalence or cost-benefit ratio. Concepts are illustrated with hypothetical screening tests for colorectal cancer with test positive subjects being referred to colonoscopy. <s> BIB008 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Cost-Benefit Analysis <s> Many decisions in medicine involve trade-offs, such as between diagnosing patients with disease versus unnecessary additional testing for those who are healthy. Net benefit is an increasingly reported decision analytic measure that puts benefits and harms on the same scale. 
This is achieved by specifying an exchange rate, a clinical judgment of the relative value of benefits (such as detecting a cancer) and harms (such as unnecessary biopsy) associated with models, markers, and tests. The exchange rate can be derived by asking simple questions, such as the maximum number of patients a doctor would recommend for biopsy to find one cancer. As the answers to these sorts of questions are subjective, it is possible to plot net benefit for a range of reasonable exchange rates in a “decision curve.” For clinical prediction models, the exchange rate is related to the probability threshold to determine whether a patient is classified as being positive or negative for a disease. Net benefit is useful for determining whether basing clinical decisions on a model, marker, or test would do more good than harm. This is in contrast to traditional measures such as sensitivity, specificity, or area under the curve, which are statistical abstractions not directly informative about clinical value. Recent years have seen an increase in practical applications of net benefit analysis to research data. This is a welcome development, since decision analytic techniques are of particular value when the purpose of a model, marker, or test is to help doctors make better clinical decisions. <s> BIB009
The studies conducted in phases III and IV of the investigation of a diagnostic test could be covered under the generic name of cost-benefit analysis. Different aspects of the benefit could be investigated, such as societal impact (the impact on society), cost-effectiveness (affordability), clinical efficacy or effectiveness (effects on the outcome), cost-consequence analysis, cost-utility analysis, sensitivity analysis (probability of disease and/or recurrence, cost of tests, impact on QALY (quality-adjusted life-year), and impact of treatment), and analytical performances (precision, linearity, and cost-effectiveness ratio). Thus, the benefits of diagnostic tests could be investigated from different perspectives (e.g., societal, health-care system, and health-care provider) and considering different items (e.g., productivity, patient and family time, medication, and physician time). Furthermore, an accurate comparison of two diagnostic tests must consider both the accuracy and the benefit/harm in the assessment of clinical utility BIB008 BIB009 . Generally, the cost-benefit analysis employs multivariate and multifactorial analysis using different designs of experiment, including survival analysis, and the statistical approach is selected according to the aim of the study. The analysis of relationships is done using correlation methods (Pearson's correlation (r) when the two variables are quantitative and normally distributed and a linear relation is assumed between them; Spearman's (ρ) or Kendall's (τ) correlation coefficient otherwise; it is recommended to use Kendall's tau instead of Spearman's rho when data have ties BIB003 ) or regression analysis when the nature of the relationship is of interest and an outcome variable is involved. A study quantifying the use of computed tomography (CT) in pediatrics and the associated radiation exposure BIB004 reported the effective dose per scan as a range, while the dose exceeding 20 mSv was reported as percentages. The mean organ dose was also reported, along with the lifetime attributable risk of solid cancer or leukemia and the number of CT scans leading to one case of cancer per 10,000 scans BIB004 . The reported numbers and risks were not accompanied by 95% confidence intervals BIB004 , excepting the estimated total number of future radiation-induced cancers related to pediatric CT use (referred to by the authors as an uncertainty limit). Dinh et al. BIB005 evaluated the effectiveness of a combined screening test (fecal immunological test and colonoscopy) for colorectal cancer using the Archimedes model (a simulation of human physiology, diseases, interventions, and health-care systems BIB001 ). The reported results, besides frequently used descriptive metrics, are the health utility score BIB002 , cost per person, quality-adjusted life-years (QALYs) gained per person, and cost/QALYs gained, as numerical point estimators not accompanied by 95% confidence intervals. Westwood et al. BIB007 conducted a secondary study to evaluate the performances of high-sensitivity cardiac troponin (hs-cTn) assays in ruling out acute myocardial infarction (AMI). Clinical effectiveness metrics such as Se, Sp, NLR, and PLR (for both any threshold and the 99th percentile threshold) were reported with associated 95% confidence intervals. As cost-effectiveness metrics, the long-term costs, cost per life-year (LY) gained, quality-adjusted life-years (QALYs), and costs/QALYs were reported with associated 95% confidence intervals for different Tn testing methods.
Furthermore, the incremental cost-effectiveness ratio (ICER) was used to compare the mean costs of two Tn testing methods, along with a multivariate analysis (reported as estimates, the standard error of the estimates, and the distribution of the data). Tiernan et al. BIB006 reported the changes in clinical practice for the diagnosis of latent tuberculosis infection (LTBI) with an interferon-gamma release assay, namely QuantiFERON-TB Gold In-Tube (QFT, Cellestis, Australia). Unfortunately, the reported outcome was limited to the number of changes in practice due to QFT, given as absolute frequencies and percentages BIB006 .
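As an illustration of the decision metric mentioned above, the sketch below computes an incremental cost-effectiveness ratio, ICER = (C_new − C_ref)/(QALY_new − QALY_ref), and compares it with a willingness-to-pay threshold. The strategy names, costs, QALYs, and threshold are hypothetical and are not taken from the cited troponin evaluation.

```python
# Incremental cost-effectiveness ratio (ICER) sketch:
#   ICER = (cost_new - cost_ref) / (QALY_new - QALY_ref)
# Strategy names, costs, QALYs, and the willingness-to-pay threshold below are
# hypothetical and are not taken from the cited troponin evaluation.

def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    delta_qaly = qaly_new - qaly_ref
    if delta_qaly == 0:
        raise ValueError("ICER is undefined when the QALY difference is zero")
    return (cost_new - cost_ref) / delta_qaly

# mean cost per patient and mean QALYs per patient for two testing strategies
strategy_a = {"cost": 1200.0, "qaly": 9.150}   # comparator strategy
strategy_b = {"cost": 1450.0, "qaly": 9.165}   # new strategy

ratio = icer(strategy_b["cost"], strategy_b["qaly"],
             strategy_a["cost"], strategy_a["qaly"])
threshold = 20000.0   # assumed willingness-to-pay per QALY gained

print(f"ICER of B versus A: {ratio:,.0f} per QALY gained")
print("B is considered cost-effective at this threshold" if ratio <= threshold
      else "B is not considered cost-effective at this threshold")
```

The choice of threshold is a policy judgment rather than a statistical quantity, which is why cost-effectiveness conclusions are usually reported for a range of thresholds, as in the troponin study cited above.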
Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Diagnostic tests, like therapeutic procedures, require proper analysis prior to incorporation into clinical practice. In studying diagnostic tests, an evaluation should be made of the reproducibility, accuracy, variation among those without the disease, and variation among those with the disease. Both diseased and disease-free states should be identified using a gold standard, if available. Three main guidelines can be used in evaluating and applying the results of diagnostic tests: validity of the study, expression of the results, and assessment of the generalizability of the results. Validity requires an independent, blind comparison with a reference standard. Methodology should be fully explained. Results should include sensitivity, specificity, and a receiver operating characteristic plot. Several categories of results should be provided in the form of likelihood ratios. Management decisions can be made on the basis of the posttest probability of disease after including both the pretest probability and the likelihood ratio in the calculation. <s> BIB001 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> CONTEXT ::: The literature contains a large number of potential biases in the evaluation of diagnostic tests. Strict application of appropriate methodological criteria would invalidate the clinical application of most study results. ::: ::: ::: OBJECTIVE ::: To empirically determine the quantitative effect of study design shortcomings on estimates of diagnostic accuracy. ::: ::: ::: DESIGN AND SETTING ::: Observational study of the methodological features of 184 original studies evaluating 218 diagnostic tests. Meta-analyses on diagnostic tests were identified through a systematic search of the literature using MEDLINE, EMBASE, and DARE databases and the Cochrane Library (1996-1997). Associations between study characteristics and estimates of diagnostic accuracy were evaluated with a regression model. ::: ::: ::: MAIN OUTCOME MEASURES ::: Relative diagnostic odds ratio (RDOR), which compared the diagnostic odds ratios of studies of a given test that lacked a particular methodological feature with those without the corresponding shortcomings in design. ::: ::: ::: RESULTS ::: Fifteen (6.8%) of 218 evaluations met all 8 criteria; 64 (30%) met 6 or more. Studies evaluating tests in a diseased population and a separate control group overestimated the diagnostic performance compared with studies that used a clinical population (RDOR, 3.0; 95% confidence interval [CI], 2.0-4.5). Studies in which different reference tests were used for positive and negative results of the test under study overestimated the diagnostic performance compared with studies using a single reference test for all patients (RDOR, 2.2; 95% CI, 1.5-3.3). Diagnostic performance was also overestimated when the reference test was interpreted with knowledge of the test result (RDOR, 1.3; 95% CI, 1.0-1.9), when no criteria for the test were described (RDOR, 1.7; 95% CI, 1.1-2.5), and when no description of the population under study was provided (RDOR, 1.4; 95% CI, 1.1-1.7). 
::: ::: ::: CONCLUSION ::: These data provide empirical evidence that diagnostic studies with methodological shortcomings may overestimate the accuracy of a diagnostic test, particularly those including nonrepresentative patients or applying different reference standards. <s> BIB002 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> BACKGROUND ::: In the era of evidence based medicine, with systematic reviews as its cornerstone, adequate quality assessment tools should be available. There is currently a lack of a systematically developed and evaluated tool for the assessment of diagnostic accuracy studies. The aim of this project was to combine empirical evidence and expert opinion in a formal consensus method to develop a tool to be used in systematic reviews to assess the quality of primary studies of diagnostic accuracy. ::: ::: ::: METHODS ::: We conducted a Delphi procedure to develop the quality assessment tool by refining an initial list of items. Members of the Delphi panel were experts in the area of diagnostic research. The results of three previously conducted reviews of the diagnostic literature were used to generate a list of potential items for inclusion in the tool and to provide an evidence base upon which to develop the tool. ::: ::: ::: RESULTS ::: A total of nine experts in the field of diagnostics took part in the Delphi procedure. The Delphi procedure consisted of four rounds, after which agreement was reached on the items to be included in the tool which we have called QUADAS. The initial list of 28 items was reduced to fourteen items in the final tool. Items included covered patient spectrum, reference standard, disease progression bias, verification bias, review bias, clinical review bias, incorporation bias, test execution, study withdrawals, and indeterminate results. The QUADAS tool is presented together with guidelines for scoring each of the items included in the tool. ::: ::: ::: CONCLUSIONS ::: This project has produced an evidence based quality assessment tool to be used in systematic reviews of diagnostic accuracy studies. Further work to determine the usability and validity of the tool continues. <s> BIB003 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> OBJECTIVES ::: To produce an easily understood and accessible tool for use by researchers in diagnostic studies. Diagnostic studies should have sample size calculations performed, but in practice, they are performed infrequently. This may be due to a reluctance on the part of researchers to use mathematical formulae. ::: ::: ::: METHODS ::: Using a spreadsheet, we derived nomograms for calculating the number of patients required to determine the precision of a test's sensitivity or specificity. ::: ::: ::: RESULTS ::: The nomograms could be easily used to determine the sensitivity and specificity of a test. ::: ::: ::: CONCLUSIONS ::: In addition to being easy to use, the nomogram allows deduction of a missing parameter (number of patients, confidence intervals, prevalence, or sensitivity/specificity) if the other three are known. The nomogram can also be used retrospectively by the reader of published research as a rough estimating tool for sample size calculations. 
<s> BIB004 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Now is an exciting time to be or become a diagnostician. More diagnostic tests, including portions of the medical interview and physical examination, are being studied rigorously for their accuracy, precision, and usefulness in practice,1,2 and this research is increasingly being systematically reviewed and synthesized.3,4 Diagnosticians are gaining increasing access to this research evidence, raising hope that this knowledge will inform their diagnostic decisions and improve their patients’ clinical outcomes.5 For patients to benefit fully from this accumulating knowledge, the diagnosticians serving them must be able to reason probabilistically, to understand how test results can revise disease probability to confirm or exclude disorders, and to integrate this reasoning with other types of knowledge and diagnostic thinking.6–8 ::: ::: Yet, clinicians encounter several barriers when trying to integrate research evidence into clinical diagnosis.9 Some barriers involve difficulties in understanding and using the quantitative measures of tests’ accuracy and discriminatory power, including sensitivity, specificity, and likelihood ratios (LRs).9,10 We have noticed that LRs are particularly troubling to many learners at first, and we have wondered if this is because of the way they have been taught. Stumbling blocks can arise in several places when learning LRs: the names and formulae themselves can be intimidating; the arithmetic functions can be mystifying when attempted all at once; if two levels of test results are taught first, learners can have difficulty ‘stretching’ to multiple levels; and if disease probability is framed in odds terms (to directly multiply the odds by the likelihood ratio), learners can misunderstand why and how this conversion is done. Other stumbling blocks may occur as well. ::: ::: Other authors have described various approaches to helping clinicians understand LRs.11–16 In this article, we describe two additional approaches to help clinical learners understand how LRs describe the discriminatory power of test results. Whereas we mention other concepts such as pretest and posttest probability, full treatment of those subjects is beyond the scope of this article. These approaches were developed by experienced teachers of evidence-based medicine (EBM) and were refined over years of teaching practice. These tips have also been field-tested to double-check the clarity and practicality of these descriptions, as explained in the introductory article of this series.17 ::: ::: To help the reader envision these teaching approaches, we present sequenced advice for teachers in plain text, coupled with sample words to speak, in italics. These scripts are meant to be interactive, which means that teachers should periodically check in with the learners for their understanding and that teachers should try other ways to explain the ideas if the words we have suggested do not “click.” We present them in order from shorter to longer; however, because these 2 scripts cover the same general content, we encourage teachers to use either or both in an order that best fits their setting and learners. 
<s> BIB005 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Background ::: Clinical prediction rules (CPR) are tools that clinicians can use to predict the most likely diagnosis, prognosis, or response to treatment in a patient based on individual characteristics. CPRs attempt to standardize, simplify, and increase the accuracy of clinicians’ diagnostic and prognostic assessments. The teaching tips series is designed to give teachers advice and materials they can use to attain specific educational objectives. <s> BIB006 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> The leading function of the physician is the clinical reasoning, which involves appropriate investigation of the problems of the patient, formulation of a diagnostic suspect based on the patient's symptoms and signs, gathering of additional relevant information, to select necessary tests and administration of the most suitable therapy. The problems of the patient are expressed by symptoms or signs or abnormal test results, requested for a variety of reasons. The entire scientific, as well as diagnostic approach, is based on three steps: to stumble in a problem; to try a solution through a hypothesis; to disprove or to prove the hypothesis by a process of criticism. Clinicians use the information obtained from the history and physical examination to estimate initial (or pre-test) probability and then use the results from tests and other diagnostic procedures to modify this probability until the post-test probability is such that the suspected diagnosis is either confirmed or ruled out. When the pre-test probability of disease is high, tests characterized by high specificity will be preferred, in order to confirm the diagnostic suspect. When the pre-test probability of disease is low, a test with high sensitivity is advisable to exclude the hypothetical disease. The above mentioned process of decision making has been transferred to a problem oriented medical record that is currently employed in our Clinic. <s> BIB007 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> In the last decade, many new rapid diagnostic tests for infectious diseases have been developed. In general, these new tests are developed with the intent to optimize feasibility and population health, not accuracy alone. However, unlike drugs or vaccines, diagnostic tests are evaluated and licensed on the basis of accuracy, not health impact (eg, reduced morbidity or mortality). Thus, these tests are sometimes recommended or scaled up for purposes of improving population health without randomized evidence that they do so. We highlight the importance of randomized trials to evaluate the health impact of novel diagnostics and note that such trials raise distinctive ethical challenges of equipoise, equity, and informed consent. We discuss the distinction between equipoise for patient-important outcomes versus diagnostic accuracy, the equity implications of evaluating health impact of diagnostics under routine conditions, and the importance of offering reasonable choices for informed consent in diagnostic trials. 
<s> BIB008 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Much of clinical research is aimed at assessing causality. However, clinical research can also address the value of new medical tests, which will ultimately be used for screening for risk factors, to diagnose a disease, or to assess prognosis. In order to be able to construct research questions and designs involving these concepts, one must have a working knowledge of this field. In other words, although traditional clinical research designs can be used to assess some of these questions, most of the studies assessing the value of diagnostic testing are more akin to descriptive observational designs, but with the twist that these designs are not aimed to assess causality, but are rather aimed at determining whether a diagnostic test will be useful in clinical practice. This chapter will introduce the various ways of assessing the accuracy of diagnostic tests, which will include discussions of sensitivity, specificity, predictive value, likelihood ratio, and receiver operator characteristic curves. <s> BIB009 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Graphical abstractDisplay Omitted Sample size calculation in diagnostic studies.Tables of required sample size in different scenarios.How sample size varies with accuracy index and effect size.Help to the clinician when designing ROC diagnostic studies. ObjectivesThis review provided a conceptual framework of sample size calculations in the studies of diagnostic test accuracy in various conditions and test outcomes. MethodsThe formulae of sample size calculations for estimation of adequate sensitivity/specificity, likelihood ratio and AUC as an overall index of accuracy and also for testing in single modality and comparing two diagnostic tasks have been presented for desired confidence interval. ResultsThe required sample sizes were calculated and tabulated with different levels of accuracies and marginal errors with 95% confidence level for estimating and for various effect sizes with 80% power for purpose of testing as well. The results show how sample size is varied with accuracy index and effect size of interest. ConclusionThis would help the clinicians when designing diagnostic test studies that an adequate sample size is chosen based on statistical principles in order to guarantee the reliability of study. <s> BIB010 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> textabstractDiscrete-choice experiments (DCEs) have become a commonly used instrument in health economics and patient-preference analysis, addressing a wide range of policy questions. An important question when setting up a DCE is the size of the sample needed to answer the research question of interest. Although theory exists as to the calculation of sample size requirements for stated choice data, it does not address the issue of minimum sample size requirements in terms of the statistical power of hypothesis tests on the estimated coefficients. 
The purpose of this paper is threefold: (1) to provide insight into whether and how researchers have dealt with sample size calculations for healthcare-related DCE studies; (2) to introduce and explain the required sample size for parameter estimates in DCEs; and (3) to provide a step-by-step guide for the calculation of the minimum sample size requirements for DCEs in health care. <s> BIB011 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is to determine the sufficient sample sizes that are related with screening and diagnostic studies. Although the formula for sample size calculation is available but concerning majority of the researchers are not mathematicians or statisticians, hence, sample size calculation might not be easy for them. This review paper provides sample size tables with regards to sensitivity and specificity analysis. These tables were derived from formulation of sensitivity and specificity test using Power Analysis and Sample Size (PASS) software based on desired type I error, power and effect size. The approaches on how to use the tables were also discussed. <s> BIB012 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> The health care system needs to face new and advanced medical technologies that can improve the patients' quality of life by replacing lost or decreased functions. In stroke patients, the disabilities that follow cerebral lesions may impair the mandatory daily activities of an independent life. These activities are dependent mostly on the patient's upper limb function so that they can carry out most of the common activities associated with a normal life. Therefore, an upper limb exoskeleton device for stroke patients can contribute a real improvement of quality of their life. The ethical problems that need to be considered are linked to the correct adjustment of the upper limb skills in order to satisfy the patient's expectations, but within physiological limits. The debate regarding the medical devices dedicated to neurorehabilitation is focused on their ability to be beneficial to the patient's life, keeping away damages, injustice, and risks. <s> BIB013 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Wearable health monitoring systems have gained considerable interest in recent years owing to their tremendous promise for personal portable health watching and remote medical practices. The sensors with excellent flexibility and stretchability are crucial components that can provide health monitoring systems with the capability of continuously tracking physiological signals of human body without conspicuous uncomfortableness and invasiveness. The signals acquired by these sensors, such as body motion, heart rate, breath, skin temperature and metabolism parameter, are closely associated with personal health conditions. This review attempts to summarize the recent progress in flexible and stretchable sensors, concerning the detected health indicators, sensing mechanisms, functional materials, fabrication strategies, basic and desired features. The potential challenges and future perspectives of wearable health monitoring system are also briefly discussed. 
<s> BIB014 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> This Editorial comment refers to the article “Medical students’ attitude towards artificial intelligence: a multicenter survey,” Pinto Dos Santos D, et al Eur Radiol 2018. • Medical students are not well informed of the potential consequences of AI in radiology. ::: • The fundamental principles of AI—as well as its application in medicine—must be taught in medical schools. ::: • The radiologist specialty must actively reflect on how to validate, approve, and integrate AI algorithms into our clinical practices. <s> BIB015 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> OBJECTIVE ::: To assist clinicians to make adequate interpretation of scientific evidence from studies that evaluate diagnostic tests in order to allow their rational use in clinical practice. ::: ::: ::: METHODS ::: This is a narrative review focused on the main concepts, study designs, the adequate interpretation of the diagnostic accuracy data, and making inferences about the impact of diagnostic testing in clinical practice. ::: ::: ::: RESULTS ::: Most of the literature that evaluates the performance of diagnostic tests uses cross-sectional design. Randomized clinical trials, in which diagnostic strategies are compared, are scarce. Cross-sectional studies measure diagnostic accuracy outcomes that are considered indirect and insufficient to define the real benefit for patients. Among the accuracy outcomes, the positive and negative likelihood ratios are the most useful for clinical management. Variations in the study's cross-sectional design, which may add bias to the results, as well as other domains that contribute to decreasing the reliability of the findings, are discussed, as well as how to extrapolate such accuracy findings on impact and consequences considered important for the patient. Aspects of costs, time to obtain results, patients' preferences and values should preferably be considered in decision making. ::: ::: ::: CONCLUSION ::: Knowing the methodology of diagnostic accuracy studies is fundamental, but not sufficient, for the rational use of diagnostic tests. There is a need to balance the desirable and undesirable consequences of tests results for the patients in order to favor a rational decision-making approach about which tests should be recommended in clinical practice. <s> BIB016 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Abstract The pathological diagnostics of cancer - based on the histological features - is today increasingly completed by molecular profiling at variable depth in an almost evident fashion. Predictive information should cover potential therapeutic targets and/or major resistance mechanisms the nature of which is subject of alteration during the course of the treatment. Mutational profiling recently became technically available by the analysis of circulating free DNA obtained following non-invasive peripheral blood or body fluid sampling. This „liquid biopsy” approach reflects the general status considering the actual tumor burden, irrespective of the intratumoral distribution and anatomical site. However, the dynamics of the liquid compartment relies on tissue-related processes reflected by histological variables. 
The amount and composition of free DNA seems to be influenced by many factors, including the stage and anatomical localization of the cancer, the relative mass of neoplastic subclones, the growth rate, the stromal and inflammatory component, the extent of tumor cell death and necrosis. The histopathological context should be considered also when analysis of cfDNA is about to replace repeated tumor sampling for molecular follow-up. <s> BIB017 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Noble metal nanoparticle-based colorimetric sensors have become powerful tools for the detection of different targets with convenient readout. Among the many types of nanomaterials, noble metal nanoparticles exhibit extraordinary optical responses mainly due to their excellent localized surface plasmon resonance (LSPR) properties. The absorption spectrum of the noble metal nanoparticles was mostly in the visible range. This property enables the visual detection of various analytes with the naked eye. Among numerous color change modes, the way that different concentrations of targets represent vivid color changes has been brought to the forefront because the color distinction capability of normal human eyes is usually better than the intensity change capability. We review the state of the art in noble metal nanoparticle-based multicolor colorimetric strategies adopted for visual quantification by the naked eye. These multicolor strategies based on different means of morphology transformation are classified... <s> BIB018 </s> Medical Diagnostic Tests: A Review of Test Anatomy, Phases, and Statistical Treatment of Data <s> Limitations and Perspectives <s> Abstract Upconversion nanoparticle-based lateral flow assays (UCNP-LFAs) have attracted significant attention in point-of-care testing (POCT) applications, due to the long-term photostability and enhanced signal-to-background noise ratio. The existing UCNP-LFAs generally require peripheral equipment for exciting fluorescent signals and reading out fluorescence results, which are generally bulky and expensive. Herein, we developed a miniaturized and portable UCNP-LFA platform, which is composed of a LFA detection system, an UCNP-LFA reader and a smartphone-assisted UCNP-LFA analyzer. The LFA detection system is based on three types of UCNPs for multiplexed detection. The reader has a dimension of 24.0 cm × 9.4 cm × 5.4 cm (L × W × H) and weight of 0.9 kg. The analyzer based on the custom-designed software of a smartphone (termed as UCNP-LFA analyzer) can get the quantitative analysis results in a real-time manner. We demonstrated the universality of this platform by highly sensitive and quantitative detections of several kinds of targets, including small molecule (ochratoxin A, OTA), heavy metal ion (Hg2+), bacteria (salmonella, SE), nucleic acid (hepatitis B virus, HBV) and protein (growth stimulation expressed gene 2, ST-2). Our developed UCNP-LFA platform holds great promise for applications in disease diagnostics, environmental pollution monitoring and food safety at the point of care. <s> BIB019
The current paper presents neither the details of research methodology for diagnostic studies nor the critical appraisal of a paper reporting the performance of a diagnostic test, because these are beyond its aim. Extensive scientific literature exists regarding both the design of experiments for diagnostic studies [4, BIB003 BIB001 BIB009 and the critical evaluation of a diagnostic paper BIB002 [232] BIB006 BIB005 . Consequently, neither the effect of sample size on the accuracy parameters, nor the a priori computation of the sample size needed to reach the level of significance for a specific research question, nor the a posteriori calculation of the power of the diagnostic test is discussed. Approaches to sample size calculation for diagnostic studies are available in the literature BIB004 BIB011 BIB012 BIB010 , but they must be used with caution: the calculations are sensitive, and input data from one population are not a reliable solution for another population, so the input data for sample size calculation should preferably come from a pilot study. This paper also does not treat how to select a diagnostic test in clinical practice, a topic addressed by evidence-based medicine and clinical decision-making BIB007 BIB016 . Health-care practice is a dynamic field that undergoes rapid change driven by the evolution of known diseases, the appearance of new pathologies, changes in the life expectancy of the population, progress in information theory, communication and computer sciences, and the development of new materials and approaches as solutions for medical problems. The concept of personalized medicine changes the way health care is delivered: the patient becomes the core of the decisional process, and the applied diagnostic methods and/or treatments closely fit the needs and particularities of the patient . Different diagnostic or monitoring devices, such as wearable health monitoring systems BIB014 , liquid biopsy and associated approaches BIB017 BIB018 , wireless ultrasound transducers , and other point-of-care testing (POCT) methods BIB019 , are being introduced and need proper analysis and validation. Furthermore, the availability of big data opens a new pathway for analyzing medical data, and artificial intelligence approaches will probably change the practice of imaging diagnosis and monitoring BIB015 . The ethical aspects must be considered BIB008 BIB013 , along with the valid and reliable methods that are required for the assessment of both old and new diagnostic approaches. Space for methodological improvement exists, from the design of experiments to the analysis of experimental data, for both observational and interventional approaches.
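As an illustration of the a priori sample size computation referred to above, the following minimal sketch applies the widely used normal-approximation formula (often attributed to Buderer) for estimating sensitivity and specificity within a chosen precision. The sensitivity, specificity, prevalence, and precision values in the example are hypothetical placeholders; in practice, as noted above, such inputs should come from a pilot study in the target population.

```python
from math import ceil
from scipy.stats import norm

def diagnostic_sample_size(sens, spec, prevalence, precision=0.05, alpha=0.05):
    """Minimum number of subjects needed to estimate sensitivity and
    specificity within +/- `precision`, using the normal-approximation
    (Buderer-type) formula. All inputs are proportions in (0, 1)."""
    z = norm.ppf(1 - alpha / 2)                                 # e.g. 1.96 for alpha = 0.05
    n_cases = (z ** 2) * sens * (1 - sens) / precision ** 2     # diseased subjects needed
    n_controls = (z ** 2) * spec * (1 - spec) / precision ** 2  # non-diseased subjects needed
    # Scale by the expected prevalence to obtain total recruitment targets.
    n_total_sens = n_cases / prevalence
    n_total_spec = n_controls / (1 - prevalence)
    return ceil(max(n_total_sens, n_total_spec))

# Hypothetical pilot-study inputs: 85% sensitivity, 90% specificity,
# 20% prevalence, +/-5% precision, 95% confidence.
print(diagnostic_sample_size(0.85, 0.90, 0.20))
```

For these hypothetical inputs, the sensitivity requirement dominates and a total sample on the order of a thousand subjects would be needed, which illustrates how strongly prevalence and the desired precision drive recruitment targets.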
A review of speech-based bimodal recognition <s> A. Motivation for Bimodal Recognition <s> Oral speech intelligibility tests were conducted with, and without, supplementary visual observation of the speaker's facial and lip movements. The difference between these two conditions was examined as a function of the speech‐to‐noise ratio and of the size of the vocabulary under test. The visual contribution to oral speech intelligibility (relative to its possible contribution) is, to a first approximation, independent of the speech‐to‐noise ratio under test. However, since there is a much greater opportunity for the visual contribution at low speech‐to‐noise ratios, its absolute contribution can be exploited most profitably under these conditions. <s> BIB001 </s> A review of speech-based bimodal recognition <s> A. Motivation for Bimodal Recognition <s> This paper reviews progress in understanding the psychology of lipreading and audio-visual speech perception. It considers four questions. What distinguishes better from poorer lipreaders? What are the effects of introducing a delay between the acoustical and optical speech signals? What have attempts to produce computer animations of talking faces contributed to our understanding of the visual cues that distinguish consonants and vowels? Finally, how should the process of audio-visual integration in speech perception be described; that is, how are the sights and sounds of talking faces represented at their conflux? <s> BIB002
Speech recognition can be used wherever speech-based man-machine communication is appropriate. Speaker recognition has potential application wherever the identity of a person needs to be determined (identification task) or an identity claim needs to be validated (identity verification task). Possible applications of bimodal recognition are: speech transcription; adaptive human-computer interfaces in multimedia computer environments; voice control of office or entertainment equipment; and access control for buildings, computer resources, or information sources. Bimodal recognition tries to emulate the multimodality of human perception. It is known that all sighted people rely, to a varying extent, on lipreading to enhance speech perception or to compensate for the deficiencies of audition BIB002 . Lipreading is particularly beneficial when the listener suffers from impaired hearing or when the acoustic signal is degraded BIB001 , . Sensitivity to speech variability, inadequate recognition accuracy for many potential applications, and susceptibility to impersonation are among the main technical hurdles preventing a widespread adoption of speech-based recognition systems. The rationale for bimodal recognition is to improve recognition performance in terms of accuracy and robustness against speech variability and impersonation. Compared to speech or speaker recognition that uses only one primary source, recognition based on information extracted from two primary sources can be made more robust to impersonation and to speech variability, which has a different effect on each modality. Automatic person recognition based on still two-dimensional (2-D) facial images is vulnerable to impersonation attempts using photographs or by professional mimics wearing appropriate disguise. In contrast to static personal characteristics, dynamic characteristics such as visual speech are difficult to mimic or reproduce artificially. Hence, dynamic characteristics offer a higher potential for protection against impersonation than static characteristics. Given the potential gains promised by the combination of modalities, multimodal systems have been identified, by many experts in spoken language systems , as a key area which requires basic research in order to catalyze a widespread deployment of spoken language systems in the "real world."
A review of speech-based bimodal recognition <s> B. Outline of the Review <s> Acoustic automatic speech recognition (ASR) systems tend to perform poorly with noisy speech. Unfortunately, most application environments contain noise from machines, vehicles, others talking, typing, television, sound systems, etc. In addition, system performance is highly dependent on the particular microphone type and its placement, but most people find head-mounted microphones uncomfortable for extended use and they are impractical in many situations. Fortunately, the use of visual speech (lipreading or, more properly, speechreading) information has been shown to improve the performance of acoustic ASR systems especially in noise. This paper outlines the history of automatic lipreading research and describes the authors current efforts. <s> BIB001 </s> A review of speech-based bimodal recognition <s> B. Outline of the Review <s> We give an overview of speechreading systems from the perspective of the face and gesture recognition community, paying particular attention to approaches to key design decisions and the benefits and drawbacks. We discuss the central issue of sensory integration how much processing of the acoustic and the visual information should go on before integration how should it be integrated. We describe several possible practical applications, and conclude with a list of important outstanding problems that seem amenable to attack using techniques developed in the face and gesture recognition community. <s> BIB002 </s> A review of speech-based bimodal recognition <s> B. Outline of the Review <s> This paper reviews key attributes of neural processing essential to intelligent multimedia processing (IMP). The objective is to show why neural networks (NNs) are a core technology for the following multimedia functionalities: (1) efficient representations for audio/visual information, (2) detection and classification techniques, (3) fusion of multimodal signals, and (4) multimodal conversion and synchronization. It also demonstrates how the adaptive NN technology presents a unified solution to a broad spectrum of multimedia applications. As substantiating evidence, representative examples where NNs are successfully applied to IMP applications are highlighted. The examples cover a broad range, including image visualization, tracking of moving objects, image/video segmentation, texture classification, face-object detection/recognition, audio classification, multimodal recognition, and multimodal lip reading. <s> BIB003 </s> A review of speech-based bimodal recognition <s> B. Outline of the Review <s> We review recent research that examines audio-visual integration in multimodal communication. The topics include bimodality in human speech, human and automated lip reading, facial animation, lip synchronization, joint audio-video coding, and bimodal speaker verification. We also study the enabling technologies for these research topics, including automatic facial-feature tracking and audio-to-visual mapping. Recent progress in audio-visual research shows that joint processing of audio and video provides advantages that are not available when the audio and video are processed independently. <s> BIB004
This review complements earlier surveys on related themes BIB004 , BIB003 , BIB001 , BIB002 . The history of automatic lipreading research is outlined in BIB001 , which does not cover audio processing, sensor fusion, or bimodal speaker recognition. The overview given in BIB002 covers speechreading (it pays particular attention to visual speech processing), but it does not cover audio processing and bimodal speaker recognition. Reference BIB003 centers on the main attributes of neural networks as a core technology for multimedia applications which require automatic extraction, recognition, interpretation, and interactions of multimedia signals. Reference BIB004 covers the wider topic of audio-visual integration in multimodal communication, encompassing recognition, synthesis, and compression. This paper focuses on bimodal speech and speaker recognition. Given the multidisciplinary nature of bimodal recognition, the review is broad-based. It is intended to act as a shop-window for techniques that can be used in bimodal recognition. However, paper length restrictions preclude an exhaustive coverage of the field. Fig. 1 shows a simplified architecture commonly used for bimodal recognition. [Fig. 1 caption: In practice, similar speech processing techniques are used for speech recognition and speaker recognition. Front-end processing converts raw speech into a high-level representation, which ideally retains only essential information for pattern categorization. The latter is performed by a classifier, which often consists of models of pattern distribution, coupled to a decision procedure. The block generically labeled "constraints" typically represents domain knowledge, such as syntactic or semantic knowledge, which may be applied during the recognition. Sequential or tree configurations of modality-specific classifiers are possible alternatives to the decision fusion of parallel classifiers shown in (b). Audio-visual fusion can also occur at a level between feature and decision levels.] The structure of the review is a direct mapping from the building blocks of this architecture. The paper is organized as follows. First, the processing techniques are discussed. Thereafter, bimodal recognition performance is reviewed. Sample applications and avenues for further work are then suggested. Finally, concluding remarks are given.
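To make the decision-fusion block of this architecture concrete, the following minimal sketch combines the outputs of an audio classifier and a visual classifier by a weighted sum of per-class log-likelihood scores, one common late-fusion rule among the alternatives mentioned in the Fig. 1 caption. The vocabulary, weights, and score values are hypothetical.

```python
def late_fusion(audio_scores, visual_scores, audio_weight=0.7):
    """Decision-level fusion of two modality-specific classifiers.

    Each argument maps a class label (a word, or a speaker identity) to the
    log-likelihood score assigned independently by that modality's classifier.
    Scores are combined by a weighted sum of log-likelihoods, and the
    best-scoring class is returned together with the fused scores.
    """
    w_a, w_v = audio_weight, 1.0 - audio_weight
    fused = {c: w_a * audio_scores[c] + w_v * visual_scores[c]
             for c in audio_scores}
    return max(fused, key=fused.get), fused

# Hypothetical scores for a three-word vocabulary.
audio = {"yes": -12.3, "no": -10.8, "stop": -15.1}
video = {"yes": -8.9, "no": -11.5, "stop": -9.7}
decision, scores = late_fusion(audio, video)
print(decision)   # the word favoured by the combined evidence
```

The audio weight would normally be tuned to the acoustic conditions; lowering it gives the visual modality more influence, which is the intuition behind adaptive fusion in noisy environments.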
A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> The results of a study aimed at finding the importance of pitch for automatic speaker recognition are presented. Pitch contours were obtained for 60 utterances, each approximately 2‐sec in duration, of 10 female speakers. A data‐reduction procedure based on the Karhunen‐Loeve representation was found effective in representing the pitch information in each contour in a 20‐dimensional space. The data were divided into two portions; one part was used to design the speaker recognition system, while the other part was used to test the effectiveness of the design. The 20‐dimensional vectors representing the pitch contours of the design set were linearly transformed so that the ratio of interspeaker to intraspeaker variance in the transformed space was maximum. A reference utterance was formed for each speaker by averaging the transformed vectors of that speaker. The test utterance was assigned to the speaker corresponding to the reference utterance with the smallest Euclidean distance in the transformed space. ... <s> BIB001 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> An important problem in speech processing is to detect the presence of speech in a background of noise. This problem is often referred to as the endpoint location problem. By accurately detecting the beginning and end of an utterance, the amount of processing of speech data can be kept to a minimum. The algorithm proposed for locating the endpoints of an utterance is based on two measures of the signal, zero crossing rate and energy. The algorithm is inherently capable of performing correctly in any reasonable acoustic environment in which the signal-to-noise ratio is on the order of 30 dB or better. The algorithm has been tested over a variety of recording conditions and for a large number of speakers and has been found to perform well across all tested conditions. <s> BIB002 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> This paper presents several digital signal processing methods for representing speech. Included among the representations are simple waveform coding methods; time domain techniques; frequency domain representations; nonlinear or homomorphic methods; and finaIly linear predictive coding techniques. The advantages and disadvantages of each of these representations for various speech processing applications are discussed. <s> BIB003 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> Several parametric representations of the acoustic signal were compared with regard to word recognition performance in a syllable-oriented continuous speech recognition system. The vocabulary included many phonetically similar monosyllabic words, therefore the emphasis was on the ability to retain phonetically significant acoustic information in the face of syntactic and duration variations. For each parameter set (based on a mel-frequency cepstrum, a linear frequency cepstrum, a linear prediction cepstrum, a linear prediction spectrum, or a set of reflection coefficients), word templates were generated using an efficient dynamic warping method, and test data were time registered with the templates. A set of ten mel-frequency cepstrum coefficients computed every 6.4 ms resulted in the best performance, namely 96.5 percent and 95.0 percent recognition with each of two speakers. 
The superior performance of the mel-frequency cepstrum coefficients may be attributed to the fact that they better represent the perceptually relevant aspects of the short-term speech spectrum. <s> BIB004 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> Accurate location of the endpoints of spoken words and phrases is important for reliable and robust speech recognition. The endpoint detection problem is fairly straightforward for high-level speech signals in low-level stationary noise environments (e.g., signal-to-noise ratios greater than 30-dB rms). However, this problem becomes considerably more difficult when either the speech signals are too low in level (relative to the background noise), or when the background noise becomes highly nonstationary. Such conditions are often encountered in the switched telephone network when the limitation on using local dialed-up lines is removed. In such cases the background noise is often highly variable in both level and spectral content because of transmission line characteristics, transients and tones from the line and/or from signal generators, etc. Conventional speech endpoint detectors have been shown to perform very poorly (on the order of 50-percent word detection) under these conditions. In this paper we present an improved word-detection algorithm, which can incorporate both vocabulary (syntactic) and task (semantic) information, leading to word-detection accuracies close to 100 percent for isolated digit detection over a wide range of telephone transmission conditions. <s> BIB005 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> A novel speech analysis method which uses several established psychoacoustic concepts, the perceptually based linear predictive analysis (PLP), models the auditory spectrum by the spectrum of the low-order all-pole model. The auditory spectrum is derived from the speech waveform by critical-band filtering, equal-loudness curve pre-emphasis, and intensity-loudness root compression. We demonstrate through analysis of both synthetic and natural speech that psychoacoustic concepts of spectral auditory integration in vowel perception, namely the F1, F2' concept of Carlson and Fant and the 3.5 Bark auditory integration concept of Chistovich, are well modeled by the PLP method. A complete speech analysis-synthesis system based on the PLP method is also described in the paper. <s> BIB006 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> The use of instantaneous and transitional spectral representations of spoken utterances for speaker recognition is investigated. Linear-predictive-coding (LPC)-derived cepstral coefficients are used to represent instantaneous spectral information, and best linear fits of each cepstral coefficient over a specified time window are used to represent transitional information. An evaluation has been carried out using a database of isolated digit utterances over dialed-up telephone lines by 10 talkers. Two vector quantization (VQ) codebooks, instantaneous and transitional, were constructed from each speaker's training utterances. The experimental results show that the instantaneous and transitional representations are relatively uncorrelated, thus providing complementary information for speaker recognition. A rectangular window of approximately 100 ms duration provides an effective estimate of the transitional spectral features for speaker recognition. 
Also, simple transmission channel variations are shown to affect both the instantaneous spectral representations and the corresponding recognition performance significantly, while the transitional representations and performance are relatively resistant. > <s> BIB007 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> Two acoustic representations, integrated Mel-scale representation with LDA (IMELDA) and perceptual linear prediction-root power sums (PLP-RPS), both of which have given good results in speech recognition tests, are explored. IMELDA is examined in the context of some related representations. Results of speaker-dependent and independent tests with digits and the alphabet suggest that the optimum PLP order is high and that the effectiveness of PLP-RPS stems not from its modeling of perceptual properties but from its approximation to a desirable statistical property attained exactly by IMELDA. A combined PLP-IMELDA representation is found to be generally more effective than PLP-RPS, but an IMELDA representation derived directly from a filter-bank provides similar results to PLP-IMELDA at a lower computational cost. > <s> BIB008 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> The paper describes a voice activity detector (VAD) that can operate reliably in SNRs down to 0 dB and detect most speech at −5 dB. The detector applies a least-squares periodicity estimator to the input signal, and triggers when a significant amount of periodicity is found. It does not aim to find the exact talkspurt boundaries and, consequently, is most suited to speech-logging applications where it is easy to include a small margin to allow for any missed speech. The paper discusses the problem of false triggering on nonspeech periodic signals and shows how robustness to these signals can be achieved with suitable preprocessing and postprocessing. <s> BIB009 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> Two models, the temporal decomposition and the multivariate linear prediction, of the spectral evolution of speech signals capable of processing some aspects of the speech variability are presented. A series of acoustic-phonetic decoding experiments, characterized by the use of spectral targets of the temporal decomposition techniques and a speaker-dependent mode, gives good results compared to a reference system (i.e., 70% vs. 60% for the first choice). Using the original method developed by Laforia, a series of text-independent speaker recognition experiments, characterized by a long-term multivariate auto-regressive modelization, gives first-rate results (i.e., 98.4% recognition rate for 420 speakers) without using more than one sentence. Taking into account the interpretation of the models, these results show how interesting the cinematic models are for obtaining a reduced variability of the speech signal representation. > <s> BIB010 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> 1. Fundamentals of Speech Recognition. 2. The Speech Signal: Production, Perception, and Acoustic-Phonetic Characterization. 3. Signal Processing and Analysis Methods for Speech Recognition. 4. Pattern Comparison Techniques. 5. Speech Recognition System Design and Implementation Issues. 6. Theory and Implementation of Hidden Markov Models. 7. Speech Recognition Based on Connected Word Models. 8. Large Vocabulary Continuous Speech Recognition. 9. 
Task-Oriented Applications of Automatic Speech Recognition. <s> BIB011 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> This paper describes the results of experiments to investigate the integration of MLP (multilayer perceptron) and HMM (hidden Markov modeling) techniques in the task of fixed-text speaker verification. A large speech database collected over the telephone network was used to evaluate the algorithm. Speech data for each speaker was automatically segmented using a supervised HMM-Viterbi decoding scheme and an MLP was trained with this segmented data. The output scores of the MLP, after appropriate scaling were used as observation probabilities in a Viterbi realignment and scoring step. Intra-speaker and inter-speaker scores were generated by training the HMM-MLP system for each speaker and testing against speech data for the same speaker and against all other speakers, who shared utterances of identical text. Our results show that MLP classifiers combined with HMMs improve speaker discrimination by 20% over conventional HMM algorithms for speaker verification. > <s> BIB012 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> We describe current approaches to text-independent speaker identification based on probabilistic modeling techniques. The probabilistic approaches have largely supplanted methods based on comparisons of long-term feature averages. The probabilistic approaches have an important and basic dichotomy into nonparametric and parametric probability models. Nonparametric models have the advantage of being potentially more accurate models (though possibly more fragile) while parametric models that offer computational efficiencies and the ability to characterize the effects of the environment by the effects on the parameters. A robust speaker-identification system is presented that was able to deal with various forms of anomalies that are localized in time, such as spurious noise events and crosstalk. It is based on a segmental approach in which normalized segment scores formed the basic input for a variety of robust 43% procedures. Experimental results are presented, illustrating 59% the advantages and disadvantages of the different procedures. 64%. We show the role that cross-validation can play in determining how to weight the different sources of information when combining them into a single score. Finally we explore a Bayesian approach to measuring confidence in the decisions made, which enabled us to reject the consideration of certain tests in order to achieve an improved, predicted performance level on the tests that were retained. > <s> BIB013 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> Various linear predictive (LP) analysis methods are studied and compared from the points of view of robustness to noise and of application to speaker identification. The key to the success of the LP techniques is in separating the vocal tract information from the pitch information present in a speech signal even under noisy conditions. In addition to considering the conventional, one-shot weighted least-squares methods, the authors propose three other approaches with the above point as a motivation. The first is an iterative approach that leads to the weighted least absolute value solution. The second is an extension of the one-shot least-squares approach and achieves an iterative update of the weights. 
The update is a function of the residual and is based on minimizing a Mahalanobis distance. Third, the weighted total least-squares formulation is considered. A study of the deviations in the LP parameters is done when noise (white Gaussian and impulsive) is added to the speech. It is revealed that the most robust method depends on the type of noise. Closed-set speaker identification experiments with 20 speakers are conducted using a vector quantizer classifier trained on clean speech. The relative performance of the various LP approaches depends on the type of speech material used for testing. > <s> BIB014 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> This chapter overviews recent advances in speaker recognition technology. The first part of the chapter discusses general topics and issues. Speaker recognition can be divided in two ways: (a) speaker identification and verification, and (b) text-dependent and text-independent methods. The second part of the paper is devoted to discussion of more specific topics of recent interest which have led to interesting new approaches and techniques. They include parameter/distance normalization techniques, model adaptation techniques, VQ-/ergodic-HMM-based text-independent recognition methods, and a text-prompted recognition method. The chapter concludes with a short discussion assessing the current status and possibilities for the future. <s> BIB015 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> A tutorial on the design and development of automatic speaker-recognition systems is presented. Automatic speaker recognition is the use of a machine to recognize a person from a spoken phrase. These systems can operate in two modes: to identify a particular person or to verify a person's claimed identity. Speech processing and the basic components of automatic speaker-recognition systems are shown and design tradeoffs are discussed. Then, a new automatic speaker-recognition system is given. This recognizer performs with 98.9% correct decalcification. Last, the performances of various systems are compared. <s> BIB016 </s> A review of speech-based bimodal recognition <s> II. FRONT-END PROCESSING <s> The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network techniques and methods imported from statistical learning theory have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. 
The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field. <s> BIB017
The common front-end processes for speech-based recognition are signal conditioning, segmentation, and feature extraction. Signal conditioning typically takes the form of noise removal. Segmentation is concerned with the demarcation of signal portions conveying relevant acoustic or visual speech. Feature extraction generally acts as a dimensionality reduction procedure which, ideally, retains information possessing high discrimination power, high stability, and also for speaker recognition, good resistance to mimicry. Dimensionality reduction may mitigate the "curse of dimensionality." The latter relates to the relation between the dimension of the input pattern space and the number of classifier parameters, which influences the amount of data required for classifier training BIB017 . To obtain reliable estimates of classifier parameters, training data volume should increase with the dimension of the input space. The reliability of parameter estimates may affect classification accuracy. Segmentation and feature extraction can have an adverse effect on recognition. They may retain unwanted information or inadvertently discard important information for recognition. Also, the extracted features may fail to match the assumptions incorporated in the classifier. For example, some classifiers minimize their parameter-estimation requirements by assuming that features are uncorrelated. Accurate segmentation and optimal feature extraction are challenging. A. Acoustic Speech Processing 1) Segmentation: Separation of speech from nonspeech material often employs energy thresholding BIB005 , zero-crossing rate, and periodicity measures BIB002 , BIB009 . Often, several information sources are used jointly , BIB002 . In addition to heuristic decision procedures, conventional pattern recognition techniques have also been used for speech segmentation. This is typified by classification of speech events, based on vector-quantization (VQ) or hidden Markov models (HMMs) BIB012 . 2) Feature Extraction: Many speech feature extraction techniques aim at obtaining a parametric representation of speech, based on models which often embed knowledge about speech production or perception by humans BIB011 . a) Speech production model: The human vocal apparatus is often modeled as a time-varying filter excited by a wide-band signal; this model is known as a source-filter or excitation-modulation model BIB003 . The time-varying filter represents the acoustic transmission characteristics of the vocal tract and nasal cavity, together with the spectral characteristics of glottal pulse shape and lip radiation. Most acoustic speech models assume that the excitation emanates from the lower end of the vocal tract. Such models may be unsuitable for speech sounds, such as fricatives, which result from excitation that occurs somewhere else in the vocal tract BIB016 . b) Basic features: Often, acoustic speech features are short-term spectral representations. For recognition tasks, parameterizations of the vocal tract transfer function are invariably preferred to excitation characteristics, such as pitch and intensity. However, these discarded excitation parameters may contain valuable information. Cepstral features are very widely used. The cepstrum is the discrete cosine transform (DCT) of the logarithm of the short-term spectrum. The DCT yields virtually uncorrelated features, and this may allow a reduction of the parameter count for the classifier. For example, diagonal covariance matrices may be used instead of full matrices. 
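To make the energy and zero-crossing-rate cues concrete, the following minimal sketch labels frames as speech or non-speech by thresholding short-term energy against a noise floor estimated from the first frames, and by using a high zero-crossing rate to admit weak fricative-like frames. The frame length, thresholds, and the assumption that the recording starts with silence are illustrative simplifications rather than the calibrated procedures of the endpoint detectors cited above.

```python
import numpy as np

def detect_speech_frames(signal, sr=16000, frame_ms=10.0,
                         energy_factor=4.0, zcr_threshold=0.25):
    """Crude frame-level speech/non-speech labelling from short-term
    energy and zero-crossing rate (ZCR)."""
    flen = int(sr * frame_ms / 1000.0)
    n_frames = len(signal) // flen
    frames = np.asarray(signal, dtype=float)[:n_frames * flen].reshape(n_frames, flen)

    energy = (frames ** 2).mean(axis=1)
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)

    # Noise floor estimated from the first frames (assumed to be silence).
    noise_floor = energy[:10].mean() + 1e-12

    loud = energy > energy_factor * noise_floor                    # voiced or strong frames
    weak_fricative = (energy > 1.5 * noise_floor) & (zcr > zcr_threshold)
    return loud | weak_fricative

# The utterance endpoints can then be taken as the first and last True frames.
```

In practice, classifiers such as the VQ- or HMM-based segmenters mentioned above replace these hand-set thresholds with learned models, which is why they cope better with variable noise conditions.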
The DCT also packs most of the acoustic information into the low-order features, hence allowing the reduction of the input space dimension. The cepstrum can be obtained through linear predictive coding (LPC) analysis , BIB014 or Fourier transformation BIB013 . Variants of the standard cepstrum include the popular mel-warped cepstrum or mel frequency cepstral coefficients (MFCCs) BIB004 (see Fig. 2 ) and the perceptual linear predictive (PLP) cepstrum BIB006 . In short-term spectral estimation, the speech signal is first divided into blocks of samples called frames. A windowing function, such as the Hamming window, is usually applied to the speech frame before the short-term log-power spectrum is computed. In the case of MFCCs, the spectrum is typically smoothed by a bank of triangular filters, the passbands of which are laid out on a frequency scale known as the mel scale. The latter is approximately linear below 1 kHz and logarithmic above 1 kHz; the mel scale effectively reduces the contribution of higher frequencies to the recognition. Finally, a DCT yields the MFCCs. By removing the cepstral mean, MFCCs can be made fairly insensitive to time-invariant distortion introduced by the communication channel. In addition, given the low cross correlation of MFCCs, their covariance can be modeled with a diagonal matrix. MFCCs are notable for their good performance in both speech and speaker recognition. c) Derived features: High-level features may be obtained from a temporal sequence of basic feature vectors or from the statistical distribution of the pattern space spanned by basic features. These derived features are typified by first-order or higher-order dynamic (also known as transitional) features, such as delta or delta-delta cepstral coefficients , BIB007 and statistical dynamic features represented by the multivariate autoregressive features proposed in BIB010 . The delta cepstrum is usually computed by applying a linear regression over the neighborhood of the current cepstral vector; the regression typically spans approximately 100 ms. The use of delta features mitigates the unsuitability of the assumption of temporal statistical independence often made in classifiers. Other high-level features are the long-term spectral mean, variance or standard deviation, and the covariance of basic features , BIB015 . Some high-level features aim at reducing dimensionality through a transformation that produces statistically orthogonal features and packs most of the variance into few features. Common transforms are based on principal component analysis (PCA) , statistical discriminant analysis optimizing the F-ratio, such as linear discriminant analysis (LDA) BIB001 , and integrated mel-scale representation with LDA (IMELDA) BIB008 . The latter is LDA applied to static spectral information, possibly combined with dynamic spectral information, output by a mel-scale filter bank. Composite features are sometimes generated by a simple concatenation of different types of features .
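The MFCC pipeline just described can be summarised in code. The sketch below computes MFCCs from scratch with NumPy/SciPy (framing, Hamming windowing, power spectrum, triangular mel filter bank, logarithm, DCT), applies cepstral mean subtraction, and derives delta coefficients by linear regression over roughly 100 ms of context. The frame sizes, filter count, and number of coefficients are common textbook defaults rather than values prescribed by the systems reviewed here.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=0.025, frame_step=0.010,
         n_fft=512, n_filters=26, n_ceps=13):
    """Mel-frequency cepstral coefficients with cepstral mean subtraction."""
    flen, fstep = int(frame_len * sr), int(frame_step * sr)
    signal = np.pad(np.asarray(signal, dtype=float), (0, max(0, flen - len(signal))))

    # 1. Framing and Hamming windowing.
    n_frames = 1 + (len(signal) - flen) // fstep
    idx = np.arange(flen)[None, :] + fstep * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(flen)

    # 2. Short-term power spectrum.
    pspec = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # 3. Triangular filters equally spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        left, centre, right = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, left:centre] = (np.arange(left, centre) - left) / max(centre - left, 1)
        fbank[i, centre:right] = (right - np.arange(centre, right)) / max(right - centre, 1)

    # 4. Log filter-bank energies, then DCT to decorrelate and compact.
    log_energy = np.log(pspec @ fbank.T + 1e-10)
    ceps = dct(log_energy, type=2, axis=1, norm='ortho')[:, :n_ceps]

    # 5. Cepstral mean subtraction (reduces time-invariant channel distortion).
    return ceps - ceps.mean(axis=0)

def delta(feat, N=4):
    """First-order dynamic features: linear regression over +/-N frames
    (about 100 ms of context for a 10 ms frame step with N = 4)."""
    padded = np.pad(feat, ((N, N), (0, 0)), mode='edge')
    denom = 2.0 * sum(n * n for n in range(1, N + 1))
    return np.array([sum(n * (padded[t + N + n] - padded[t + N - n])
                         for n in range(1, N + 1)) / denom
                     for t in range(len(feat))])
```

For an utterance `y` sampled at 16 kHz, `c = mfcc(y)` yields a frames-by-13 array, and `np.hstack([c, delta(c)])` gives a static-plus-dynamic representation of the kind described above.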
A review of speech-based bimodal recognition <s> 1) Segmentation: <s> Though technology in speech recognition has progressed recently, Automatic Speech Recognition (ASR) is vulnerable to noise. Lip-information is thought to be useful for speech recognition in noisy situations, such as in a factory or in a car. This paper describes speech recognition enhancement by lip-information. Two types of usage are dealt with. One is the detection of start and stop of speech from lip-information. This is the simplest usage of lip-information. The other is lip-pattern recognition, and it is used for speech recognition together with sound information. The algorithms for both usages are proposed, and the experimental system shows they work well. The algorithms proposed here are composed of simple image-processing. Future progress in image-processing will make it possible to realize them in real-time. <s> BIB001 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> Abstract The feasibility of using isodensity lines for human face identification system is presented through experimental investigation. Instead of using feature points extracted from the faces, as is done in the conventional face matching systems, the technique presented uses gray level isodensity line maps of the faces. Only simple template matching is then required to match the individual isodensity lines. The preprocessing required, the properties of isodensity lines and some considerations for practical implementation are also discussed. The results show a 100% accuracy in matching same persons and a 100% accuracy in discriminating different persons (including persons with spectacles). <s> BIB002 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> Abstract This paper describes a neural approach intended to improve the performance of an automatic speech recognition system for unrestricted speakers by using not only voice sound features but also image features of the mouth shape. In particular, we used the natural sample voice signals and mouth shape images that were acquired in the general environment, neither in the sound isolation room nor under specific lighting conditions. The FFT power spectrum of acoustic speech was used as the voice feature. In addition, the gray level image, binary image and geometrical shape features of the mouth were used as the compensatory information, and compared which kinds of image features are effective. For unrestricted speakers, a vowel recognition rate of about 80% was obtained using only voice features, but this increased to some 92% when voice features plus binary images were used. This method can be applied not only to the improvement of voice recognition, but also to aid the communication of hearing-impaired people. <s> BIB003 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> Locating facial features is crucial for various face recognition schemes. The authors suggest a robust facial feature detector based on a generalized symmetry interest operator. No special tuning is required if the face occupies 15-60% of the image. The operator was tested on a large face data base with a success rate of over 95%. 
> <s> BIB004 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> Two new algorithms for computer recognition of human faces, one based on the computation of a set of geometrical features, such as nose width and length, mouth position, and chin shape, and the second based on almost-gray-level template matching, are presented. The results obtained for the testing sets show about 90% correct recognition using geometrical features and perfect recognition using template matching. > <s> BIB005 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> We have developed visual preprocessing algorithms for extracting phonologically relevant features from the grayscale video image of a speaker, to provide speaker-independent inputs for an automatic lipreading ("speechreading") system. Visual features such as mouth open/closed, tongue visible/not-visible, teeth visible/notvisible, and several shape descriptors of the mouth and its motion are all rapidly computable in a manner quite insensitive to lighting conditions. We formed a hybrid speechreading system consisting of two time delay neural networks (video and acoustic) and integrated their responses by means of independent opinion pooling - the Bayesian optimal method given conditional independence, which seems to hold for our data. This hybrid system had an error rate 25% lower than that of the acoustic subsystem alone on a five-utterance speaker-independent task, indicating that video can be used to improve speech recognition. <s> BIB006 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> This paper describes a method of real-time facial-feature extraction which is based on matching techniques. The method is composed of facial-area extraction and mouth-area extraction using colour histogram matching, and eye-area extraction using template matching. By the combination of these methods, we can realize real-time processing, user-independent recognition and tolerance to changes of the environment. Also, this paper touches on neural networks which can extract characteristics for recognizing the shape of facial parts. The methods were implemented in an experimental image processing system, and we discuss the cases that the system is applied to man-machine interface using facial gesture and to sign language translation. <s> BIB007 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> Abstract The human face is a complex pattern. Finding human faces automatically in a scene is a difficult yet significant problem. It is the first important step in a fully automatic human face recognition system. In this paper a new method to locate human faces in a complex background is proposed. This system utilizes a hierarchical knowledge-based method and consists of three levels. The higher two levels are based on mosaic images at different resolutions. In the lower level, an improved edge detection method is proposed. In this research the problem of scale is dealt with, so that the system can locate unknown human faces spanning a wide range of sizes in a complex black-and-white picture. Some experimental results are given. <s> BIB008 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> We present the development of a modular system for flexible human-computer interaction via speech. The speech recognition component integrates acoustic and visual information (automatic lip-reading) improving overall recognition, especially in noisy environments. 
The image of the lips, constituting the visual input, is automatically extracted from the camera picture of the speaker's face by the lip locator module. Finally, the speaker's face is automatically acquired and followed by the face tracker sub-system. Integration of the three functions results in the first bi-modal speech recognizer allowing the speaker reasonable freedom of movement within a possibly noisy room while continuing to communicate with the computer via voice. Compared to audio-alone recognition, the combined system achieves a 20 to 50 percent error rate reduction for various signal/noise conditions. <s> BIB009 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> We propose a new speech communication system to convert oral motion images into speech. We call this system "the image input microphone." It provides high security and is not affected by acoustic noise because it is not necessary to input the actual utterance. This system is especially promising as a speaking-aid system for people whose vocal cords are injured. Since this is a basic investigation of media conversion from image to speech, we focus on vowels, and conduct experiments on media conversion of vowels. The vocal-tract transfer function and the source signal for driving this filter are estimated from features of the lips. These features are extracted from oral images in B learning data set, then speech is synthesized by this filter inputted with an appropriate driving signal. The performance of this system is evaluated by hearing tests of synthesized speech. The mean recognition rate for the test data set was 76.8%. We also investigate the effects of practice by iterative listening. The mean recognition rate rises from 69.4% to over 90% after four tests over four days. Consequently, we conclude the proposed system has potential as a method of nonacoustic communication. > <s> BIB010 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> The robust acquisition of facial features needed for visual speech processing is fraught with difficulties which greatly increase the complexity of the machine vision system. This system must extract the inner lip contour from facial images with variations in pose, lighting, and facial hair. This paper describes a face feature acquisition system with robust performance in the presence of extreme lighting variations and moderate variations in pose. Furthermore, system performance is not degraded by facial hair or glasses. To find the position of a face reliably we search the whole image for facial features. These features are then combined and tests are applied, to determine whether any such combination actually belongs to a face. In order to find where the lips are, other features of the face, such as the eyes, must be located as well. Without this information it is difficult to reliably find the mouth in a complex image. Just the mouth by itself is easily missed or other elements in the image can be mistaken for a mouth. If camera position can be constrained to allow the nostrils to be viewed, then nostril tracking is used to both reduce computation and provide additional robustness. Once the nostrils are tracked from frame to frame using a tracking window the mouth area can be isolated and normalized for scale and rotation. A mouth detail analysis procedure is then used to estimate the inner lip contour and teeth and tongue regions. 
The inner lip contour and head movements are then mapped to synthetic face parameters to generate a graphical talking head synchronized with the original human voice. This information can also be used as the basis for visual speech features in an automatic speechreading system. Similar features were used in our previous automatic speechreading systems. <s> BIB011 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> Developments in dynamic contour tracking permit sparse representation of the outlines of moving contours. Given the increasing computing power of general-purpose workstations it is now possible to track human faces and parts of faces in real-time without special hardware. This paper describes a real-time lip tracker that uses a Kalman filter based dynamic contour to track the outline of the lips. Two alternative lip trackers, one that tracks lips from a profile view and the other from a frontal view, were developed to extract visual speech recognition features from the lip contour. In both cases, visual features have been incorporated into an acoustic automatic speech recogniser. Tests on small isolated-word vocabularies using a dynamic time warping based audio-visual recogniser demonstrate that real-time, contour-based lip tracking can be used to supplement acoustic-only speech recognisers enabling robust recognition of speech in the presence of acoustic noise. <s> BIB012 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> There has recently been increasing interest in the idea of enhancing speech recognition by the use of visual information derived from the face of the talker. This paper demonstrates the use of nonlinear image decomposition, in the form of a "sieve", applied to the task of visual speech recognition. Information derived from the mouth region is used in visual and audio-visual speech recognition of a database of the letters A-Z for four talkers. A scale histogram is generated directly from the gray-scale pixels of a window containing the talker's mouth on a per-frame basis. Results are presented for visual-only, audio-only and a simple audio-visual case. <s> BIB013 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> Since the Fifties, several experiments have been run to evaluate the “benefit of lip-reading” on speech intelligibility, all presenting a natural face speaking at different levels of background noise: Sumby and Pollack, 1954; Neely, 1956; Erber, 1969; Binnie et al., 1974; Erber, 1975. We here present a similar experiment run with French stimuli. <s> BIB014 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> We have designed and implemented a lipreading system that recognizes isolated words using only color video of human lips (without acoustic data). The system performs video recognition using "snakes" to extract visual features of geometric space, Karhunen-Loeve transform (KLT) to extract principal components in the color eigenspace, and hidden Markov models (HMM's) to recognize the combined visual features sequences. With the visual information alone, we were able to achieve 94% accuracy for ten isolated words. <s> BIB015 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> This paper describes an active-camera real-time system for tracking, shape description, and classification of the human face and mouth using only an SGI Indy computer. 
The system is based on use of 2-D blob features, which are spatially-compact clusters of pixels that are similar in terms of low-level image properties. Patterns of behavior (e.g., facial expressions and head movements) can be classified in real-time using Hidden Markov Model (HMM) methods. The system has been tested on hundreds of users and has demonstrated extremely reliable and accurate performance. Typical classification accuracies are near 100%. <s> BIB016 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> Humans use visual as well as auditory speech signals to recognize spoken words. A variety of systems have been investigated for performing this task. The main purpose of this research was to systematically compare the performance of a range of dynamic visual features on a speechreading task. We have found that normalization of images to eliminate variation due to translation, scale, and planar rotation yielded substantial improvements in generalization performance regardless of the visual representation used. In addition, the dynamic information in the difference between successive frames yielded better performance than optical-flow based approaches, and compression by local low-pass filtering worked surprisingly better than global principal components analysis (PCA). These results are examined and possible explanations are explored. <s> BIB017 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> This paper presents a novel technique for the tracking and extraction of features from lips for the purpose of speaker identification. In noisy or other adverse conditions, identification performance via the speech signal can significantly reduce, hence additional information which can complement the speech signal is of particular interest. In our system, syntactic information is derived from chromatic information in the lip region. A model of the lip contour is formed directly from the syntactic information, with no minimization procedure required to refine estimates. Colour features are then extracted from the lips via profiles taken around the lip contour. Further improvement in lip features is obtained via linear discriminant analysis (LDA). Speaker models are built from the lip features based on the Gaussian mixture model (GMM). Identification experiments are performed on the M2VTS database, with encouraging results. <s> BIB018 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> Active contours or snakes are widely used in object segmentation for their ability to integrate feature extraction and pixel candidate linking in a single energy minimizing process. But the sensitivity to parameters values and initialization is also a widely known problem. The performance of snakes can be enhanced by better initialization close to the desired solution. We present a fine mouth region of interest (ROI) extraction using gray level image and corresponding gradient information. We link this technique with an original snake method. The automatic snakes use spatially varying coefficients to remain along its evolution in a mouth-like shape. Our experimentations on a large image database prove its robustness regarding speakers change of the ROI mouth extraction and automatic snakes algorithms. The main application of our algorithms is video-conferencing. 
<s> BIB019 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> This paper evaluates lip features for person recognition, and compares the performance with that of the acoustic signal. Recognition accuracy is found to be equivalent in the two domains, agreeing with the findings of Chibelushi (1997). The optimum dynamic window length for both acoustic and visual modalities is found to be about 100 ms. Recognition performance of the upper lip is considerably better than the lower lip, achieving 15% and 35% identification error rates respectively, using a single digit test and training token. <s> BIB020 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> We present a robust technique for tracking a set of pre-determined points on a human face. To achieve robustness, the Kanade-Lucas-Tomasi point tracker is extended and specialised to work on facial features by embedding knowledge about the configuration and visual characteristics of the face. The resulting tracker is designed to recover from the loss of points caused by tracking drift or temporary occlusion. Performance assessment experiments have been carried out on a set of 30 video sequences of several facial expressions. It is shown that using the original Kanade-Lucas-Tomasi tracker, some of the points are lost, whereas using the new method described in this paper, all lost points are recovered with no or little displacement error. <s> BIB021 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> We propose a three-stage pixel based visual front end for automatic speechreading (lipreading) that results in improved recognition performance of spoken words or phonemes. The proposed algorithm is a cascade of three transforms applied to a three-dimensional video region of interest that contains the speaker's mouth area. The first stage is a typical image compression transform that achieves a high "energy", reduced-dimensionality representation of the video data. The second stage is a linear discriminant analysis based data projection, which is applied to a concatenation of a small number of consecutive image transformed video data. The third stage is a data rotation by means of a maximum likelihood linear transform. Such a transform optimizes the likelihood of the observed data under the assumption of their class conditional Gaussian distribution with diagonal covariance. We apply the algorithm to visual-only 52-class phonetic and 27-class visemic classification on a 162-subject, 7-hour long, large vocabulary, continuous speech audio-visual dataset. We demonstrate significant classification accuracy gains by each added stage of the proposed algorithm, which, when combined, can reach up to 27% improvement. Overall, we achieve a 49% (38%) visual-only frame level phonetic classification accuracy with (without) use of test set phone boundaries. In addition, we report improved audio-visual phonetic classification over the use of a single-stage image transform visual front end. <s> BIB022 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> This paper describes a speech recognition system that uses both acoustic and visual speech information to improve recognition performance in noisy environments. The system consists of three components: a visual module; an acoustic module; and a sensor fusion module. The visual module locates and tracks the lip movements of a given speaker and extracts relevant speech features. 
This task is performed with an appearance-based lip model that is learned from example images. Visual speech features are represented by contour information of the lips and grey-level information of the mouth area. The acoustic module extracts noise-robust features from the audio signal. Finally the sensor fusion module is responsible for the joint temporal modeling of the acoustic and visual feature streams and is realized using multistream hidden Markov models (HMMs). The multistream method allows the definition of different temporal topologies and levels of stream integration and hence enables the modeling of temporal dependencies more accurately than traditional approaches. We present two different methods to learn the asynchrony between the two modalities and how to incorporate them in the multistream models. The superior performance for the proposed system is demonstrated on a large multispeaker database of continuously spoken digits. On a recognition task at 15 dB acoustic signal-to-noise ratio (SNR), acoustic perceptual linear prediction (PLP) features lead to 56% error rate, noise robust RASTA-PLP (relative spectra) acoustic features to 7.2% error rate and combined noise robust acoustic features and visual features to 2.5% error rate. <s> BIB023 </s> A review of speech-based bimodal recognition <s> 1) Segmentation: <s> A method for detecting and describing the features of faces using deformable templates is described. The feature of interest, an eye for example, is described by a parameterized template. An energy function is defined which links edges, peaks, and valleys in the image intensity to corresponding properties of the template. The template then interacts dynamically with the image, by altering its parameter values to minimize the energy function, thereby deforming itself to find the best fit. The final parameter values can be used as descriptors for the features. This method is demonstrated by showing deformable templates detecting eyes and mouths in real images. > <s> BIB024
Visual speech requires both spatial and temporal segmentation. Temporal endpoints may be derived from the acoustic signal endpoints, or computed after spatial segmentation in the visual domain. Many spatial segmentation techniques impose restrictive assumptions or rely on segmentation parameters tuned for a specific data set. As a result, robust location of the face or its constituents in unconstrained scenes is beyond the capability of most current techniques. At times, the spatial segmentation task is eased artificially through the use of lipstick or special reflective markers, or by discarding most facial information and capturing images of the mouth only. Face segmentation relies on image attributes related to facial surface properties, such as brightness, texture, and color BIB015 , BIB011 , BIB018 , possibly accompanied by their dynamic characteristics. Face segmentation techniques can be grouped into the broad categories of intra-image analysis, inter-image analysis, or a combination of the two. Intra-image approaches may be subdivided into conventional, connectionist, symbolic, and hybrid methods. Conventional methods include template-based techniques BIB024 , signature-based techniques BIB007 , edge or contour following BIB002 , and symmetry detection BIB004 . Connectionist methods are built around artificial neural networks such as radial basis functions, self-organizing neural networks, and (most frequently) multilayer perceptrons (MLPs). Symbolic methods are often based on a knowledge-based system BIB008 . Hybrid methods combining the above techniques are also available BIB009 . Conventional and symbolic methods tend to perform poorly in the presence of facial image variation. In comparison, when sufficient representative training data is available, connectionist methods may display superior robustness to changes in illumination and to geometric transformations. This is due to the ability of neural networks to learn without relying on explicit assumptions about underlying data models. Face segmentation often exploits heuristics about facial shape, configuration, and photometric characteristics; implicit or explicit models of the head or facial components are generally used BIB005 , BIB016 . The mouth is commonly modeled by deformable templates, dynamic contour models BIB015 , BIB019 , BIB012 , or statistical models of shape and brightness. The segmentation then takes the form of optimization of the fit between the model and the image, typically using numerical optimization techniques such as steepest descent, simulated annealing, or genetic algorithms. A downside of approaches based on iterative search is that speedy and accurate segmentation requires initialization of model position and shape relatively close to the target mouth. In addition, segmentation based on such models is usually sensitive to facial hair, facial pose, illumination, and visibility of the tongue or teeth. Some approaches for enhancing the robustness of lip tracking are proposed in BIB016 and BIB021 . The approach described in BIB016 incorporates adaptive modeling of image characteristics, which is anchored on Gaussian mixture models (GMMs) of the color and geometry of the mouth and face. BIB021 enhances the robustness of the Kanade-Lucas-Tomasi tracker by embedding heuristics about facial characteristics.
2) Feature Extraction: Although raw pixel data may be used directly by a classifier BIB003 , feature extraction is often applied.
Despite assertions that much of lipreading information is conveyed dynamically, features relating to the static configuration of visible articulators are fairly popular. Depending on the adjacency and area coverage of pixels used during feature extraction, approaches for visual speech feature extraction may be grouped into mouth-window methods and landmark methods. These two approaches are sometimes used conjunctively BIB015 .
a) Mouth-window methods: These methods extract features from all pixels in a window covering the mouth region. Examples of such methods include binarization of image pixel intensities, aggregation of pixel intensity differences between image frames BIB001 , computation of the mean pixel luminance of the oral area BIB010 , 2-D Fourier analysis, DCT BIB022 , discrete wavelet transform (DWT) BIB022 , PCA ("eigenlips"), LDA, and nonlinear image decomposition based on the "sieve" algorithm BIB013 .
b) Landmark methods: In landmark methods, a group of key points is identified in the oral area. Features extracted from these key points may be grouped into three main subgroups: 1) kinematic features; 2) photometric features; and 3) geometric features. Examples of kinematic features are the velocities of key points BIB006 . Photometric features may be in the form of intensity and temporal intensity gradient of key points BIB006 . Typical geometric features are the width, height, area, and perimeter of the oral cavity; and distances or angles between key points located on the lip margins, mouth corners, or jaw BIB014 , BIB006 . Spectral encoding of lip shape is used in BIB020 , where it is shown to yield compact feature vectors, which are fairly insensitive to the reduction of video frame rate. Geometric and photometric features are sometimes used together BIB023 . This is because, although shape features are less sensitive to lighting conditions than photometric features, they discard most of the information conveyed by the visibility of the tongue and teeth.
c) Evaluation of feature types: Investigations reported in the literature show no significant difference in speech classification accuracy obtained from raw pixel intensities, PCA, and LDA features. A study of mouth-window dynamic features for visual speech recognition found that optical-flow features are outperformed by the difference between successive image frames BIB017 . It was also noted that local low-pass filtering of images yields better accuracy than PCA BIB017 . In BIB022 , no significant difference in recognition accuracy was observed between DCT, PCA, and DWT features. Compared to landmark features, mouth-window features may display higher sensitivity to changes in lighting and to the spatial or optical settings of the camera. Moreover, pixel-based features may be afflicted by the curse of dimensionality. Additionally, although pixel intensities capture more information than contour-based features, pixel data may contain many irrelevant details.
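To make the mouth-window approach concrete, the following sketch computes a low-dimensional 2-D DCT feature vector from a grayscale mouth window; the window size, intensity normalization, and number of retained coefficients are illustrative assumptions rather than settings taken from the systems reviewed above.

```python
# A minimal sketch of mouth-window feature extraction via a 2-D DCT,
# assuming a grayscale mouth ROI has already been segmented (spatial
# segmentation is not shown). Window size and the number of retained
# coefficients are illustrative choices.
import numpy as np
from scipy.fftpack import dct

def dct_mouth_features(roi, keep=6):
    """Return a low-dimensional DCT feature vector for a mouth ROI.

    roi  : 2-D numpy array of pixel intensities (the mouth window).
    keep : side length of the retained low-frequency coefficient block.
    """
    roi = roi.astype(np.float64)
    # Normalize intensity to reduce sensitivity to global illumination.
    roi = (roi - roi.mean()) / (roi.std() + 1e-8)
    # Separable 2-D DCT: transform rows, then columns.
    coeffs = dct(dct(roi, axis=0, norm='ortho'), axis=1, norm='ortho')
    # Keep only the low-frequency block, where most of the energy lies.
    return coeffs[:keep, :keep].flatten()

# Example: a 32x32 mouth window yields a 36-dimensional feature vector.
features = dct_mouth_features(np.random.rand(32, 32), keep=6)
print(features.shape)   # (36,)
```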
A review of speech-based bimodal recognition <s> A. Parametric Models <s> This tutorial provides an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and gives practical details on methods of implementation of the theory along with a description of selected applications of the theory to distinct problems in speech recognition. Results from a number of original sources are combined to provide a single source of acquiring the background required to pursue further this area of research. The author first reviews the theory of discrete Markov chains and shows how the concept of hidden states, where the observation is a probabilistic function of the state, can be used effectively. The theory is illustrated with two simple examples, namely coin-tossing, and the classic balls-in-urns system. Three fundamental problems of HMMs are noted and several practical techniques for solving these problems are given. The various types of HMMs that have been studied, including ergodic as well as left-right models, are described. > <s> BIB001 </s> A review of speech-based bimodal recognition <s> A. Parametric Models <s> 1. Fundamentals of Speech Recognition. 2. The Speech Signal: Production, Perception, and Acoustic-Phonetic Characterization. 3. Signal Processing and Analysis Methods for Speech Recognition. 4. Pattern Comparison Techniques. 5. Speech Recognition System Design and Implementation Issues. 6. Theory and Implementation of Hidden Markov Models. 7. Speech Recognition Based on Connected Word Models. 8. Large Vocabulary Continuous Speech Recognition. 9. Task-Oriented Applications of Automatic Speech Recognition. <s> BIB002 </s> A review of speech-based bimodal recognition <s> A. Parametric Models <s> We describe current approaches to text-independent speaker identification based on probabilistic modeling techniques. The probabilistic approaches have largely supplanted methods based on comparisons of long-term feature averages. The probabilistic approaches have an important and basic dichotomy into nonparametric and parametric probability models. Nonparametric models have the advantage of being potentially more accurate models (though possibly more fragile) while parametric models that offer computational efficiencies and the ability to characterize the effects of the environment by the effects on the parameters. A robust speaker-identification system is presented that was able to deal with various forms of anomalies that are localized in time, such as spurious noise events and crosstalk. It is based on a segmental approach in which normalized segment scores formed the basic input for a variety of robust 43% procedures. Experimental results are presented, illustrating 59% the advantages and disadvantages of the different procedures. 64%. We show the role that cross-validation can play in determining how to weight the different sources of information when combining them into a single score. Finally we explore a Bayesian approach to measuring confidence in the decisions made, which enabled us to reject the consideration of certain tests in order to achieve an improved, predicted performance level on the tests that were retained. > <s> BIB003 </s> A review of speech-based bimodal recognition <s> A. Parametric Models <s> This paper introduces and motivates the use of Gaussian mixture models (GMM) for robust text-independent speaker identification. 
The individual Gaussian components of a GMM are shown to represent some general speaker-dependent spectral shapes that are effective for modeling speaker identity. The focus of this work is on applications which require high identification rates using short utterance from unconstrained conversational speech and robustness to degradations produced by transmission over a telephone channel. A complete experimental evaluation of the Gaussian mixture speaker model is conducted on a 49 speaker, conversational telephone speech database. The experiments examine algorithmic issues (initialization, variance limiting, model order selection), spectral variability robustness techniques, large population performance, and comparisons to other speaker modeling techniques (uni-modal Gaussian, VQ codebook, tied Gaussian mixture, and radial basis functions). The Gaussian mixture speaker model attains 96.8% identification accuracy using 5 second clean speech utterances and 80.8% accuracy using 15 second telephone speech utterances with a 49 speaker population and is shown to outperform the other speaker modeling techniques on an identical 16 speaker telephone speech task. > <s> BIB004 </s> A review of speech-based bimodal recognition <s> A. Parametric Models <s> This paper describes a speech recognition system that uses both acoustic and visual speech information to improve recognition performance in noisy environments. The system consists of three components: a visual module; an acoustic module; and a sensor fusion module. The visual module locates and tracks the lip movements of a given speaker and extracts relevant speech features. This task is performed with an appearance-based lip model that is learned from example images. Visual speech features are represented by contour information of the lips and grey-level information of the mouth area. The acoustic module extracts noise-robust features from the audio signal. Finally the sensor fusion module is responsible for the joint temporal modeling of the acoustic and visual feature streams and is realized using multistream hidden Markov models (HMMs). The multistream method allows the definition of different temporal topologies and levels of stream integration and hence enables the modeling of temporal dependencies more accurately than traditional approaches. We present two different methods to learn the asynchrony between the two modalities and how to incorporate them in the multistream models. The superior performance for the proposed system is demonstrated on a large multispeaker database of continuously spoken digits. On a recognition task at 15 dB acoustic signal-to-noise ratio (SNR), acoustic perceptual linear prediction (PLP) features lead to 56% error rate, noise robust RASTA-PLP (relative spectra) acoustic features to 7.2% error rate and combined noise robust acoustic features and visual features to 2.5% error rate. <s> BIB005
The static characteristics of voice are sometimes modeled by single-mode Gaussian probability density functions (pdfs) BIB003 . However, the most popular static models are multimode mixtures of multivariate Gaussians BIB004 , commonly known as Gaussian mixture models (GMMs). HMMs are widely used as models of both static and dynamic characteristics of voice BIB005 , BIB001 , BIB002 .
A review of speech-based bimodal recognition <s> 1) Gaussian Mixture Models (GMMs): <s> This tutorial provides an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and gives practical details on methods of implementation of the theory along with a description of selected applications of the theory to distinct problems in speech recognition. Results from a number of original sources are combined to provide a single source of acquiring the background required to pursue further this area of research. The author first reviews the theory of discrete Markov chains and shows how the concept of hidden states, where the observation is a probabilistic function of the state, can be used effectively. The theory is illustrated with two simple examples, namely coin-tossing, and the classic balls-in-urns system. Three fundamental problems of HMMs are noted and several practical techniques for solving these problems are given. The various types of HMMs that have been studied, including ergodic as well as left-right models, are described. > <s> BIB001 </s> A review of speech-based bimodal recognition <s> 1) Gaussian Mixture Models (GMMs): <s> We study the use of discriminative training to construct speaker models for speaker verification and speaker identification. As opposed to conventional training which estimates a speaker's model based only on the training utterances from the same speaker, we use a discriminative training approach which takes into account the models of other competing speakers and formulates the optimization criterion such that speaker recognition error rate on the training data is directly minimized. We also propose a normalized score function which makes the verification formulation consistent with the minimum error training objective. We show that the speaker recognition performance is significantly improved when discriminative training is incorporated. > <s> BIB002 </s> A review of speech-based bimodal recognition <s> 1) Gaussian Mixture Models (GMMs): <s> This paper introduces and motivates the use of Gaussian mixture models (GMM) for robust text-independent speaker identification. The individual Gaussian components of a GMM are shown to represent some general speaker-dependent spectral shapes that are effective for modeling speaker identity. The focus of this work is on applications which require high identification rates using short utterance from unconstrained conversational speech and robustness to degradations produced by transmission over a telephone channel. A complete experimental evaluation of the Gaussian mixture speaker model is conducted on a 49 speaker, conversational telephone speech database. The experiments examine algorithmic issues (initialization, variance limiting, model order selection), spectral variability robustness techniques, large population performance, and comparisons to other speaker modeling techniques (uni-modal Gaussian, VQ codebook, tied Gaussian mixture, and radial basis functions). The Gaussian mixture speaker model attains 96.8% identification accuracy using 5 second clean speech utterances and 80.8% accuracy using 15 second telephone speech utterances with a 49 speaker population and is shown to outperform the other speaker modeling techniques on an identical 16 speaker telephone speech task. 
> <s> BIB003 </s> A review of speech-based bimodal recognition <s> 1) Gaussian Mixture Models (GMMs): <s> In recent years there has been a significant body of work, both theoretical and experimental, that has established the viability of artificial neural networks (ANN's) as a useful technology for speech recognition. It has been shown that neural networks can be used to augment speech recognizers whose underlying structure is essentially that of hidden Markov models (HMM's). In particular, we have demonstrated that fairly simple layered structures, which we lately have termed big dumb neural networks (BDNN's), can be discriminatively trained to estimate emission probabilities for an HMM. Recently simple speech recognition systems (using context-independent phone models) based on this approach have been proved on controlled tests, to be both effective in terms of accuracy (i.e., comparable or better than equivalent state-of-the-art systems) and efficient in terms of CPU and memory run-time requirements. Research is continuing on extending these results to somewhat more complex systems. In this paper, we first give a brief overview of automatic speech recognition (ASR) and statistical pattern recognition in general. We also include a very brief review of HMM's, and then describe the use of ANN's as statistical estimators. We then review the basic principles of our hybrid HMM/ANN approach and describe some experiments. We discuss some current research topics, including new theoretical developments in training ANN's to maximize the posterior probabilities of the correct models for speech utterances. We also discuss some issues of system resources required for training and recognition. Finally, we conclude with some perspectives about fundamental limitations in the current technology and some speculations about where we can go from here. > <s> BIB004 </s> A review of speech-based bimodal recognition <s> 1) Gaussian Mixture Models (GMMs): <s> We propose a minimum verification error (MVE) training scenario to design and adapt an HMM-based speaker verification system. By using the discriminative training paradigm, we show that customer and background models can be jointly estimated so that the expected number of verification errors (false accept and false reject) on the training corpus are minimized. An experimental evaluation of a fixed password speaker verification task over the telephone network was carried out. The evaluation shows that MVE training/adaptation performs as well as MLE training and MAP adaptation when the performance is measured by the average individual equal error rate (based on a posteriori threshold assignment). After model adaptation, both approaches lead to an individual equal error-rate close to 0.6%. However, experiments performed with a priori dynamic threshold assignment show that MVE adapted models exhibit false rejection and false acceptance rates 45% lower than the MAP adapted models, and therefore lead to the design of a more robust system for practical applications. <s> BIB005 </s> A review of speech-based bimodal recognition <s> 1) Gaussian Mixture Models (GMMs): <s> This paper describes a speech recognition system that uses both acoustic and visual speech information to improve recognition performance in noisy environments. The system consists of three components: a visual module; an acoustic module; and a sensor fusion module. The visual module locates and tracks the lip movements of a given speaker and extracts relevant speech features. 
This task is performed with an appearance-based lip model that is learned from example images. Visual speech features are represented by contour information of the lips and grey-level information of the mouth area. The acoustic module extracts noise-robust features from the audio signal. Finally the sensor fusion module is responsible for the joint temporal modeling of the acoustic and visual feature streams and is realized using multistream hidden Markov models (HMMs). The multistream method allows the definition of different temporal topologies and levels of stream integration and hence enables the modeling of temporal dependencies more accurately than traditional approaches. We present two different methods to learn the asynchrony between the two modalities and how to incorporate them in the multistream models. The superior performance for the proposed system is demonstrated on a large multispeaker database of continuously spoken digits. On a recognition task at 15 dB acoustic signal-to-noise ratio (SNR), acoustic perceptual linear prediction (PLP) features lead to 56% error rate, noise robust RASTA-PLP (relative spectra) acoustic features to 7.2% error rate and combined noise robust acoustic features and visual features to 2.5% error rate. <s> BIB006
A GMM represents a probability distribution as a weighted aggregation of Gaussians

p(x) = \sum_{m=1}^{M} w_m \, \mathcal{N}(x; \mu_m, \Sigma_m)

where $x$ is the "observation" (usually corresponding to a feature vector) and the GMM parameters are the mixture weights ($w_m$), the number of mixture components ($M$), the mean ($\mu_m$), and the covariance matrix ($\Sigma_m$) of each component. Diagonal covariance matrices are often used for features, such as MFCC, which are characterized by low cross correlation. GMM parameters are often estimated using the Expectation-Maximization (EM) algorithm BIB003 . Being iterative, this algorithm is sensitive to initial conditions. It may also fail to converge if the norm of a covariance matrix approaches zero.
2) Hidden Markov Models (HMMs): HMMs are generative data models, which are well suited for the statistical modeling and recognition of sequential data, such as speech. An HMM embeds two stochastic components (see Fig. 3). One component is a Markov chain of hidden states, which models the sequential evolution of observations. The hidden states are not directly observable. The other component is a set of probability distributions of observations. Each state has one distribution, which can be represented by a discrete or a continuous function. This divides HMMs into discrete-density HMMs (DHMMs) and continuous-density HMMs (CHMMs), respectively. In early recognition systems, continuous-valued speech features were vector quantized and each resulting VQ codebook index was then input to a DHMM. A key limitation of this approach is the quantization noise introduced by the vector quantizer and the coarseness of the similarity measures. Most modern systems use CHMMs, where each state is modeled as a GMM. In other words, a GMM is equivalent to a single-state HMM. Although, in theory, Gaussian mixtures can represent complex pdfs, this may not be so in practice. Hence, HMMs sometimes incorporate MLPs, which estimate state observation probabilities BIB006 , BIB004 . The most common HMM learning rule is the Baum-Welch algorithm, which is an iterative maximum likelihood estimation of the state and state-transition parameters BIB001 . Due to the iterative nature of the learning, the estimated parameters depend on their initial settings. HMMs are often trained as generative models of within-class data. Such HMMs do not capture discriminating information explicitly and hence may give suboptimal recognition accuracy. This has spurred research into discriminative training of HMMs and other generative models BIB002 , BIB005 . Viterbi decoding BIB001 is typically used for efficient exploration of possible state sequences during recognition; it calculates the likelihood that the observed sequence was generated by the HMM. The Viterbi algorithm is essentially a dynamic programming method, which identifies the state sequence that maximizes the probability of occurrence of an observation sequence. In practice, to minimize the number of parameters, HMMs are confined to relatively small or constrained state spaces. Practical HMMs are typically first-order Markov models. Such models may be ill-suited for higher order dynamics; in the case of speech, the required sequential dependence may extend across several states. HMMs for speech or speaker recognition are typically configured as left-right models (see Fig. 3). The use of state-specific GMMs increases the number of classifier parameters to estimate. To reduce the parameter count, sharing (commonly referred to as "tying") of parameters is often used.
Typically, state parameters are tied across HMMs which possess some states deemed similar.
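To illustrate the GMM formulation given above, the sketch below evaluates the log-likelihood of a sequence of feature vectors under a diagonal-covariance GMM, the form commonly used with MFCC-like features; the parameter values are random placeholders, and in practice they would be estimated with the EM algorithm.

```python
# Minimal sketch: log-likelihood of feature vectors under a
# diagonal-covariance GMM, p(x) = sum_m w_m N(x; mu_m, Sigma_m).
# Parameters here are random placeholders; a real system would
# estimate them with the EM algorithm.
import numpy as np

def gmm_log_likelihood(X, weights, means, variances):
    """X: (T, D) feature vectors; weights: (M,); means, variances: (M, D)."""
    T, D = X.shape
    # Per-component Gaussian log-densities, shape (T, M).
    diff = X[:, None, :] - means[None, :, :]                            # (T, M, D)
    log_norm = -0.5 * (D * np.log(2 * np.pi)
                       + np.sum(np.log(variances), axis=1))             # (M,)
    log_exp = -0.5 * np.sum(diff ** 2 / variances[None, :, :], axis=2)  # (T, M)
    log_comp = log_norm[None, :] + log_exp + np.log(weights)[None, :]
    # Log-sum-exp over components, then sum over frames.
    frame_ll = np.logaddexp.reduce(log_comp, axis=1)
    return frame_ll.sum()

# Toy usage: 100 frames of 13-dimensional features, 4 mixture components.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 13))
w = np.full(4, 0.25)
mu = rng.normal(size=(4, 13))
var = np.ones((4, 13))
print(gmm_log_likelihood(X, w, mu, var))
```

Working in the log domain and summing over components with a log-sum-exp avoids the numerical underflow that a direct evaluation of the mixture density would cause for long utterances.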
A review of speech-based bimodal recognition <s> B. Nonparametric Models 1) Reference-Pattern Models: <s> The use of instantaneous and transitional spectral representations of spoken utterances for speaker recognition is investigated. Linear-predictive-coding (LPC)-derived cepstral coefficients are used to represent instantaneous spectral information, and best linear fits of each cepstral coefficient over a specified time window are used to represent transitional information. An evaluation has been carried out using a database of isolated digit utterances over dialed-up telephone lines by 10 talkers. Two vector quantization (VQ) codebooks, instantaneous and transitional, were constructed from each speaker's training utterances. The experimental results show that the instantaneous and transitional representations are relatively uncorrelated, thus providing complementary information for speaker recognition. A rectangular window of approximately 100 ms duration provides an effective estimate of the transitional spectral features for speaker recognition. Also, simple transmission channel variations are shown to affect both the instantaneous spectral representations and the corresponding recognition performance significantly, while the transitional representations and performance are relatively resistant. > <s> BIB001 </s> A review of speech-based bimodal recognition <s> B. Nonparametric Models 1) Reference-Pattern Models: <s> The authors present the results of speaker-verification technology development for use over long-distance telephone lines. A description is given of two large speech databases that were collected to support the development of new speaker verification algorithms. Also discussed are the results of discriminant analysis techniques which improve the discrimination between true speakers and imposters. A comparison is made of the performance of two speaker-verification algorithms, one using template-based dynamic time warping, and the other, hidden Markov modeling. > <s> BIB002 </s> A review of speech-based bimodal recognition <s> B. Nonparametric Models 1) Reference-Pattern Models: <s> The use of nonmemoryless source coders in speaker recognition problems is studied, and the effects of source variations, including speaking inconsistency and channel mismatch, in source coder designs for the intended application are discussed. It is found that incorporation of memory in source coders in general enhances the speaker recognition accuracy but that more remarkable improvements can be accomplished by properly including potential source variations in the coder design/training. An experiment with a 100-speaker database shows a 99.5% recognition accuracy. > <s> BIB003 </s> A review of speech-based bimodal recognition <s> B. Nonparametric Models 1) Reference-Pattern Models: <s> The authors address the problem of speaker recognition using very short utterances, both for training and for recognition. The authors propose to exploit speaker-specific correlations between two suitably defined parameter vector sequences. A nonlinear vectorial interpolation technique is used to capture speaker-specific information, through least-square-error minimization. The experiments show the feasibility of recognizing a speaker among a population of about 100 persons using only an utterance of one word both for training and for recognition. > <s> BIB004 </s> A review of speech-based bimodal recognition <s> B. 
Nonparametric Models 1) Reference-Pattern Models: <s> A text-independent speaker recognition method using predictive neural networks is described. The speech production process is regarded as a nonlinear process, so the speaker individuality in the speech signal also includes nonlinearity. Therefore, the predictive neural network, which is a nonlinear prediction model based on multilayer perceptrons, is expected to be a more suitable model for representing speaker individuality. For text-independent speaker recognition, an ergodic model which allows transitions to any other state is adopted as the speaker model and one predictive neural network is assigned to each state. The proposed method was compared to distortion-based methods, hidden Markov model (HMM)-based methods, and a discriminative neural-network-based method through text-independent speaker recognition experiments on 24 female speakers. The proposed method gave the highest recognition accuracy of 100.0% and the effectiveness of the predictive neural networks for representing speaker individuality was clarified. > <s> BIB005 </s> A review of speech-based bimodal recognition <s> B. Nonparametric Models 1) Reference-Pattern Models: <s> The author presents and evaluates a modular connectionist system for speaker identification. Modularity has emerged as a powerful technique for reducing the complexity of connectionist systems and allowing prior knowledge to be incorporated into their design. Thus, for systems where the amount of training data is limited, modular systems incorporating prior knowledge are likely to generalize significantly better than a monolithic connectionist system. An architecture is developed which achieves speaker identification based on the cooperation of several connectionist expert modules. When tested on a population of 102 speakers extracted from the DARPA-TIMIT database, perfect identification was observed. In a specific comparison with a system based on multivariate autoregressive models, the modular connectionist approach was found to be significantly better in terms of both identification accuracy and speed. > <s> BIB006 </s> A review of speech-based bimodal recognition <s> B. Nonparametric Models 1) Reference-Pattern Models: <s> This paper describes recent improvements to an algorithm for identifying an unknown voice from a set of known voices using unconstrained speech material. These algorithms compare the underlying probability distributions of speech utterances using a method that is free of assumptions regarding the form of the distributions (e.g., Gaussian, etc.). In comparing two utterances, the algorithms accumulate minimum inter-frame distances between frames of the utterances. In recognition tests on the Switchboard database, using a closed population of speakers, we show that the new algorithm performs substantially better than the baseline algorithm. The modifications are segment-based scoring, limiting likelihood ratio estimates for robustness and estimating biases associated with reference files. > <s> BIB007 </s> A review of speech-based bimodal recognition <s> B. Nonparametric Models 1) Reference-Pattern Models: <s> An evaluation of various classifiers for text-independent speaker recognition is presented. In addition, a new classifier is examined for this application. The new classifier is called the modified neural tree network (MNTN). The MNTN is a hierarchical classifier that combines the properties of decision trees and feedforward neural networks. 
The MNTN differs from the standard NTN in both the new learning rule used and the pruning criteria. The MNTN is evaluated for several speaker recognition experiments. These include closed- and open-set speaker identification and speaker verification. The database used is a subset of the TIMIT database consisting of 38 speakers from the same dialect region. The MNTN is compared with nearest neighbor classifiers, full-search, and tree-structured vector quantization (VQ) classifiers, multilayer perceptrons (MLPs), and decision trees. For closed-set speaker identification experiments, the full-search VQ classifier and MNTN demonstrate comparable performance. Both methods perform significantly better than the other classifiers for this task. The MNTN and full-search VQ classifiers are also compared for several speaker verification and open-set speaker-identification experiments. The MNTN is found to perform better than full-search VQ classifiers for both of these applications. In addition to matching or exceeding the performance of the VQ classifier for these applications, the MNTN also provides a logarithmic saving for retrieval. > <s> BIB008 </s> A review of speech-based bimodal recognition <s> B. Nonparametric Models 1) Reference-Pattern Models: <s> The authors evaluate continuous density hidden Markov models (CDHMM), dynamic time warping (DTW) and distortion-based vector quantisation (VQ) for speaker recognition, emphasising the performance of each model structure across incremental amounts of training data. Text-independent (TI) experiments are performed with VQ and CDHMMs, and text-dependent (TD) experiments are performed with DTW, VQ and CDHMMs. For TI speaker recognition, VQ performs better than an equivalent CDHMM with one training version, but is outperformed by CDHMM when trained with ten training versions. For TD experiments, DTW outperforms VQ and CDHMMs for sparse amounts of training data, but with more data the performance of each model is indistinguishable. The performance of the TD procedures is consistently superior to TI, which is attributed to subdividing the speaker recognition problem into smaller speaker-word problems. It is also shown that there is a large variation in performance across the different digits, and it is concluded that digit zero is the best digit for speaker discrimination. <s> BIB009 </s> A review of speech-based bimodal recognition <s> B. Nonparametric Models 1) Reference-Pattern Models: <s> The non-supervised self organizing map of Kohonen (SOM), the supervised learning vector quantization algorithm (LVQ3), and a method based on second-order statistical measures (SOSM) were adapted, evaluated and compared for speaker verification on 57 speakers of a POLYPHONE-like data base. The SOM and LVQ3 were trained by codebooks with 32 and 256 codes and two statistical measures; one without weighting (SOSM1) and another with weighting (SOSM2) were implemented. As the decision criterion, the equal error rate (EER) and best match decision rule (BMDR) were employed and evaluated. The weighted linear predictive cepstrum coefficients (LPCC) and the /spl Delta/LPCC were used jointly as two kinds of spectral speech representations in a single vector as distinctive features. The LVQ3 demonstrates a performance advantage over SOM. This is due to the fact that the LVQ3 allows the long-term fine-tuning of an interested target codebook using speech data from a client and other speakers, whereas the SOM only uses data from the client. 
The SOSM performs better than the SOM and the LVQ3 for long test utterances, while for short test utterances the LVQ is the best method among the methods studied. <s> BIB010
These models take the form of a store of reference patterns representing the voice-pattern space. To counter misalignments arising, for example, from changes in speaking rate, temporal alignment using dynamic time warping (DTW) is often applied during pattern matching. The reference patterns may be taken directly from the original pattern space; this approach is used in k-nearest-neighbor (kNN) classifiers BIB007 . Alternatively, the reference patterns may represent a compressed pattern space, typically obtained through vector averaging. Compressed-pattern-space approaches aim to reduce the storage and computational costs associated with an uncompressed space. They include VQ models BIB001 , BIB009 and the template models used in minimum distance classifiers BIB002 . A conventional VQ model consists of a collection (codebook) of feature-vector centroids. In effect, VQ uses multiple static templates and hence discards potentially useful temporal information. The extension of such memoryless VQ models into models that possess inherent memory has been proposed in the form of matrix quantization and trellis VQ models BIB003 .
2) Connectionist Models: These consist of one or several neural networks. The most popular models are of the memoryless type, such as MLPs, radial basis functions, neural tree networks BIB008 , Kohonen's self-organizing maps BIB010 , and learning vector quantization. The main connectionist models capable of capturing temporal information are time-delay neural networks BIB006 and recurrent neural networks. However, compared to HMMs, artificial neural networks are generally worse at modeling sequential data. Most neural network models are trained as discriminative models. Predictive models, within a single medium or across acoustic and visual media, are rare BIB004 , BIB005 . A key strength of neural networks is that their training is generally implemented as a nonparametric, nonlinear estimation, which does not make assumptions about underlying data models or probability distributions.
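Since reference-pattern matching, as described above, commonly relies on DTW for temporal alignment, the following sketch computes a basic DTW distance between two feature-vector sequences and uses it in a minimum-distance classifier; the Euclidean local cost and the absence of path constraints are simplifying assumptions.

```python
# Minimal sketch of dynamic time warping (DTW) between two sequences of
# feature vectors (e.g., per-frame MFCCs). Local cost is Euclidean; real
# systems usually add slope/band constraints and path normalization.
import numpy as np

def dtw_distance(A, B):
    """A: (Ta, D), B: (Tb, D) feature-vector sequences."""
    Ta, Tb = len(A), len(B)
    # Pairwise local distances between frames.
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            # Allow match, insertion, and deletion steps.
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j - 1],
                                               D[i - 1, j],
                                               D[i, j - 1])
    return D[Ta, Tb]

# Toy usage: compare a test utterance against two reference templates
# and pick the closer one (a minimum-distance classifier).
rng = np.random.default_rng(1)
test = rng.normal(size=(40, 13))
refs = {"speaker_A": rng.normal(size=(35, 13)),
        "speaker_B": rng.normal(size=(45, 13))}
print(min(refs, key=lambda k: dtw_distance(test, refs[k])))
```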
A review of speech-based bimodal recognition <s> C. Decision Procedure <s> The results of a study aimed at finding the importance of pitch for automatic speaker recognition are presented. Pitch contours were obtained for 60 utterances, each approximately 2‐sec in duration, of 10 female speakers. A data‐reduction procedure based on the Karhunen‐Loeve representation was found effective in representing the pitch information in each contour in a 20‐dimensional space. The data were divided into two portions; one part was used to design the speaker recognition system, while the other part was used to test the effectiveness of the design. The 20‐dimensional vectors representing the pitch contours of the design set were linearly transformed so that the ratio of interspeaker to intraspeaker variance in the transformed space was maximum. A reference utterance was formed for each speaker by averaging the transformed vectors of that speaker. The test utterance was assigned to the speaker corresponding to the reference utterance with the smallest Euclidean distance in the transformed space. ... <s> BIB001 </s> A review of speech-based bimodal recognition <s> C. Decision Procedure <s> The authors present the results of speaker-verification technology development for use over long-distance telephone lines. A description is given of two large speech databases that were collected to support the development of new speaker verification algorithms. Also discussed are the results of discriminant analysis techniques which improve the discrimination between true speakers and imposters. A comparison is made of the performance of two speaker-verification algorithms, one using template-based dynamic time warping, and the other, hidden Markov modeling. > <s> BIB002 </s> A review of speech-based bimodal recognition <s> C. Decision Procedure <s> A way to identify people by voice is discussed. A speaker's spectral information represented by line spectrum pair (LSP) frequencies is used to describe characteristics of the speaker's utterance and the VQ (vector quantization) method is used to model the spectral distribution of each speaker. Some easily computed distances (Euclidean distance, weighted distance by F-ratio, and hard-limited distance) are used to measure the discrimination among speakers. > <s> BIB003 </s> A review of speech-based bimodal recognition <s> C. Decision Procedure <s> Text-independent speaker verification systems typically depend upon averaging over a long utterance to obtain a feature set for classification. However, not all speech is equally suited to the task of speaker verification. An approach to text-independent speaker verification that uses a two-stage classifier is presented. The first stage consists of a speaker-independent phoneme detector trained to recognize a phoneme that is distinctive from speaker to speaker. The second stage is trained to recognize the frames of speech from the target speaker that are admitted by the phoneme detector. A common feature vector based on the linear predictive coding (LPC) cepstrum is projected in different directions for each of these pattern recognition tasks. Results of tests using the described speaker verification system are shown. > <s> BIB004 </s> A review of speech-based bimodal recognition <s> C. Decision Procedure <s> An evaluation of various classifiers for text-independent speaker recognition is presented. In addition, a new classifier is examined for this application. The new classifier is called the modified neural tree network (MNTN). 
The MNTN is a hierarchical classifier that combines the properties of decision trees and feedforward neural networks. The MNTN differs from the standard NTN in both the new learning rule used and the pruning criteria. The MNTN is evaluated for several speaker recognition experiments. These include closed- and open-set speaker identification and speaker verification. The database used is a subset of the TIMIT database consisting of 38 speakers from the same dialect region. The MNTN is compared with nearest neighbor classifiers, full-search, and tree-structured vector quantization (VQ) classifiers, multilayer perceptrons (MLPs), and decision trees. For closed-set speaker identification experiments, the full-search VQ classifier and MNTN demonstrate comparable performance. Both methods perform significantly better than the other classifiers for this task. The MNTN and full-search VQ classifiers are also compared for several speaker verification and open-set speaker-identification experiments. The MNTN is found to perform better than full-search VQ classifiers for both of these applications. In addition to matching or exceeding the performance of the VQ classifier for these applications, the MNTN also provides a logarithmic saving for retrieval. > <s> BIB005 </s> A review of speech-based bimodal recognition <s> C. Decision Procedure <s> A new algorithm, the hierarchical speaker verification algorithm, is introduced. This algorithm employs a set of unique mapping functions determined from an enrolment utterance that characterize the target voice as a multidimensional martingale random walk process. For sufficiently long verification utterances, the central limit theorem insures that the accumulated scores for the target speaker will be distributed normally about the origin. Impostor speakers, which violate the martingale property, are distributed arbitrarily and widely scattered in the verification space. Excerpts of verification performance experiments are given and extensions to the algorithm for handling noisy channels and speaker template aging are discussed. > <s> BIB006 </s> A review of speech-based bimodal recognition <s> C. Decision Procedure <s> We study the use of discriminative training to construct speaker models for speaker verification and speaker identification. As opposed to conventional training which estimates a speaker's model based only on the training utterances from the same speaker, we use a discriminative training approach which takes into account the models of other competing speakers and formulates the optimization criterion such that speaker recognition error rate on the training data is directly minimized. We also propose a normalized score function which makes the verification formulation consistent with the minimum error training objective. We show that the speaker recognition performance is significantly improved when discriminative training is incorporated. > <s> BIB007
The classifier decision procedure sometimes involves a sequence of consecutive recognition trials BIB002 . At times, it is implemented as a decision tree BIB005 , BIB006 . Some similarity measures are tightly coupled to particular feature types. For speaker verification or open-set identification, a normalization of similarity scores may be necessitated by speech variability BIB007 . Common similarity measures include the Euclidean distance (often inverse-variance weighted, or reduced to a city-block distance) BIB001 , BIB003 , the Mahalanobis distance, the likelihood ratio BIB004 , and the arithmetic-harmonic sphericity measure. The Mahalanobis distance measure takes account of feature covariance and de-emphasizes features with high variance; however, reliable estimation of the covariance matrix may require a large amount of training data.
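As an illustration of the covariance weighting mentioned above, the sketch below computes the Mahalanobis distance between a test vector and a speaker model estimated from enrollment features; the diagonal regularization term is an assumption of this sketch, added to keep the covariance estimate invertible when training data are scarce.

```python
# Minimal sketch: Mahalanobis distance of a test vector from a speaker
# model (mean + covariance estimated from enrollment features). A small
# diagonal regularizer guards against ill-conditioned covariance
# estimates when enrollment data are limited.
import numpy as np

def mahalanobis_distance(x, enrollment, reg=1e-3):
    """x: (D,) test vector; enrollment: (N, D) enrollment feature vectors."""
    mu = enrollment.mean(axis=0)
    cov = np.cov(enrollment, rowvar=False) + reg * np.eye(len(mu))
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

# Toy usage: score a test frame against two enrolled speakers and pick
# the nearest model (closed-set identification by minimum distance).
rng = np.random.default_rng(2)
models = {"speaker_A": rng.normal(0.0, 1.0, size=(200, 13)),
          "speaker_B": rng.normal(0.5, 1.0, size=(200, 13))}
test = rng.normal(0.5, 1.0, size=13)
print(min(models, key=lambda k: mahalanobis_distance(test, models[k])))
```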
A review of speech-based bimodal recognition <s> B. General Fusion Hierarchy <s> From the Publisher: ::: This invaluable reference offers the most comprehensive introduction available to the concepts of multisensor data fusion. It introduces key algorithms, provides advice on their utilization, and raises issues associated with their implementation. With a diverse set of mathematical and heuristic techniques for combining data from multiple sources, the book shows how to implement a data fusion system, describes the process for algorithm selection, functional architectures and requirements for ancillary software, and illustrates man-machine interface requirements an database issues. <s> BIB001 </s> A review of speech-based bimodal recognition <s> B. General Fusion Hierarchy <s> Sensor fusion models have been characterized in the literature in a number of distinctly different ways: in terms of information levels at which the fusion is accomplished; the objectives of the fusion process, the application domain; the types of sensors employed, the sensor suite configuration and so on. The characterization most commonly encountered in the rapidly growing sensor fusion literature based on level of detail in the information is that of the now well known triplet: data level, feature level, and decision level. We consider here a generalized input-output (I/O) descriptor pair based characterization of the sensor fusion process that can be looked upon as a natural out growth of the trilevel characterization. The fusion system design philosophy expounded here is that an exhaustive exploitation of the sensor fusion potential should explore fusion under all of the different I/O-based fusion modes conceivable under such a characterization. Fusion system architectures designed to permit such exploitation offer the requisite flexibility for developing the most effective fusion system designs for a given application. A second facet of this exploitation is aimed at exploring the new concept of self-improving multisensor fusion system architectures wherein the central (fusion system) and focal (individual sensor subsystems) decision makers mutually enhance the other's performance by providing reinforced learning. A third facet is that of investigating fusion system architectures for environments wherein the different local decision makers may only be capable of narrower decisions that span only a subset of decision choices. The paper discusses these flexible fusion system architectures along with related issues and illustrates them with examples of their application to real-world scenarios. <s> BIB002 </s> A review of speech-based bimodal recognition <s> B. General Fusion Hierarchy <s> Multisensor data fusion is an emerging technology applied to Department of Defense (DoD) areas such as automated target recognition, battlefield surveillance, and guidance and control of autonomous vehicles, and to non-DoD applications such as monitoring of complex machinery, medical diagnosis, and smart buildings. Techniques for multisensor data fusion are drawn from a wide range of areas including artificial intelligence, pattern recognition, statistical estimation and other areas. This paper provides a tutorial on data fusion, introducing data fusion applications, process models, and identification of applicable techniques. Comments are made on the state-of-the-art in data fusion. <s> BIB003 </s> A review of speech-based bimodal recognition <s> B. 
General Fusion Hierarchy <s> We develop a common theoretical framework for combining classifiers which use distinct pattern representations and show that many existing schemes can be considered as special cases of compound classification where all the pattern representations are used jointly to make a decision. An experimental comparison of various classifier combination schemes demonstrates that the combination rule developed under the most restrictive assumptions-the sum rule-outperforms other classifier combinations schemes. A sensitivity analysis of the various schemes to estimation errors is carried out to show that this finding can be justified theoretically. <s> BIB004
Sensor fusion deals with the combination of information produced by several sources BIB002 , BIB003 . It has borrowed mathematical and heuristic techniques from a wide array of fields, such as statistics, artificial intelligence, decision theory, and digital signal processing. Theoretical frameworks for sensor fusion have also been proposed BIB004 . In pattern recognition, sensor fusion can be performed at the data level, feature level, or decision level (see Fig. 1); hybrid fusion methods are also available BIB001 . Low-level fusion can occur at the data level or feature level. Intermediate-level and high-level fusion typically involve the combination of recognition scores or labels produced as intermediate or final output of classifiers. Hall BIB001 argues that, owing to the information loss that occurs as raw data are transformed into features and eventually into classifier outputs, classification accuracy is expected to be lowest for decision fusion. However, it is also known that corruption of information due to noise is potentially highest, and requirements for data registration most stringent, at the lower levels of the fusion hierarchy BIB002 . In addition, low-level fusion is less robust to sensor failure than high-level fusion BIB002 . Moreover, low-level fusion generally requires more training data because it usually involves more free parameters than high-level fusion. It is also easier to upgrade a single-sensor system into a multisensor system based on decision fusion; sensors can be added without having to retrain any legacy single-sensor classifiers. Additionally, the frequently used simplifying assumption of independence between sensor-specific data holds better at the decision level, particularly if the classifiers are not of the same type. However, decision fusion might consequently fail to exploit the potentially beneficial correlation present at the lower levels.
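As a simple illustration of decision-level fusion, the sketch below combines per-class scores from an acoustic and a visual classifier with a weighted sum; the fixed reliability weight (which might in practice be tied to an estimated SNR) is an illustrative assumption rather than a method taken from the cited work.

```python
# Minimal sketch of decision-level (score) fusion: per-class scores from
# an audio classifier and a video classifier are combined by a weighted
# sum. The reliability weight is a placeholder; in practice it might be
# derived from an estimated acoustic SNR.

def fuse_scores(audio_scores, visual_scores, audio_weight=0.7):
    """Both inputs: dict mapping class label -> posterior-like score."""
    labels = audio_scores.keys() & visual_scores.keys()
    fused = {c: audio_weight * audio_scores[c]
                + (1.0 - audio_weight) * visual_scores[c] for c in labels}
    return max(fused, key=fused.get), fused

# Toy usage: the audio stream favors "speaker_B", the visual stream
# favors "speaker_A"; a low audio weight (noisy audio) lets vision win.
audio = {"speaker_A": 0.30, "speaker_B": 0.70}
video = {"speaker_A": 0.80, "speaker_B": 0.20}
print(fuse_scores(audio, video, audio_weight=0.3))
```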
A review of speech-based bimodal recognition <s> C. Low-Level Audio-Visual Fusion <s> A bimodal automatic speech recognition system, using simultaneously auditory model and articulatory parameters, is described. Results given for various speaker dependent phonetic recognition experiments, regarding the Italian plosive class, show the usefulness of this approach especially in noisy conditions. > <s> BIB001 </s> A review of speech-based bimodal recognition <s> C. Low-Level Audio-Visual Fusion <s> There has recently been increasing interest in the idea of enhancing speech recognition by the use of visual information derived from the face of the talker. This paper demonstrates the use of nonlinear image decomposition, in the form of a "sieve", applied to the task of visual speech recognition. Information derived from the mouth region is used in visual and audio-visual speech recognition of a database of the letters A-Z for four talkers. A scale histogram is generated directly from the gray-scale pixels of a window containing the talker's mouth on a per-frame basis. Results are presented for visual-only, audio-only and a simple audio-visual case. <s> BIB002 </s> A review of speech-based bimodal recognition <s> C. Low-Level Audio-Visual Fusion <s> We present work on improving the performance of automated speech recognizers by using additional visual information: (lip-/speechreading); achieving error reduction of up to 50%. This paper focuses on different methods of combining the visual and acoustic data to improve the recognition performance. We show this on an extension of an existing state-of-the-art speech recognition system, a modular MS-TDNN. We have developed adaptive combination methods at several levels of the recognition network. Additional information such as estimated signal-to-noise ratio (SNR) is used in some cases. The results of the different combination methods are shown for clean speech and data with artificial noise (white, music, motor). The new combination methods adapt automatically to varying noise conditions making hand-tuned parameters unnecessary. <s> BIB003 </s> A review of speech-based bimodal recognition <s> C. Low-Level Audio-Visual Fusion <s> Consistently high person recognition accuracy is difficult to attain using a single recognition modality. This paper assesses the fusion of voice and outer lip-margin features for person identification. Feature fusion is investigated in the form of audio-visual feature vector concatenation, principal component analysis, and linear discriminant analysis. The paper shows that, under mismatched test and training conditions, audio-visual feature fusion is equivalent to an effective increase in the signal-to-noise ratio of the audio signal. Audio-visual feature vector concatenation is shown to be an effective method for feature combination, and linear discriminant analysis is shown to possess the capability of packing discriminating audio-visual information into fewer coefficients than principal component analysis. The paper reveals a high sensitivity of bimodal person identification to a mismatch between LDA or PCA feature-fusion module and speaker model training noise-conditions. Such a mismatch leads to worse identification accuracy than unimodal identification. <s> BIB004 </s> A review of speech-based bimodal recognition <s> C. 
Low-Level Audio-Visual Fusion <s> We propose the use of discriminative training by means of the generalized probabilistic descent (GPB) algorithm to estimate hidden Markov model (HMM) stream exponents for audio-visual speech recognition. Synchronized audio and visual features are used to respectively train audio-only and visual-only single-stream HMMs of identical topology by maximum likelihood. A two-stream HMM is then obtained by combining the two single-stream HMMs and introducing exponents that weigh the log-likelihood of each stream. We present the GPD algorithm for stream exponent estimation, consider a possible initialization, and apply it to the single speaker connected letters task of the AT&T bimodal database. We demonstrate the superior performance of the resulting multi-stream HMM to the audio-only, visual-only, and audio-visual single-stream HMMs. <s> BIB005
To the best of the authors' knowledge, data-level fusion of acoustic and visual speech has not been attempted, possibly because of the difficulty of registering the two raw data streams. Low-level fusion is usually based on transforming the input space into a space with less cross correlation, in which most of the information is captured in fewer dimensions than in the original space. Feature fusion is commonly implemented as a concatenation of acoustic and visual speech feature vectors , BIB004 , BIB002 , BIB005 . This typically gives an input space of higher dimensionality than each unimodal pattern space and hence raises the specter of the curse of dimensionality. Consequently, linear or nonlinear transformations, coupled with dimensionality reduction, are often applied to feature vector pairs. Nonlinear transformation is often implemented by a neural network layer BIB001 , , BIB003 , , connected to the primary features or to the outputs of subnets upstream of the integration layer. PCA and LDA are frequently used for linear transformation of vector pairs BIB004 . Although the Kalman filter can be used for feature fusion , it has not found much use in bimodal recognition. Transformations such as PCA and LDA allow dimensionality reduction. However, PCA and LDA may require high volumes of training data for a reliable estimation of the covariance matrices on which they are anchored. LDA often outperforms PCA in recognition tasks because, unlike LDA, PCA does not use discriminative information during parameter estimation. However, the class information embedded in LDA is a set of class means; hence, LDA is ill suited for classes with multiple distribution modes or with confusable means.
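A minimal sketch of this feature-level fusion pipeline is given below, assuming synthetic, frame-synchronous acoustic and visual feature matrices and the scikit-learn implementations of PCA and LDA; the feature dimensions, class count, and component counts are arbitrary placeholders rather than values taken from any reviewed system.

```python
# Illustrative sketch (not from the reviewed systems): feature-level fusion by
# concatenating synchronized acoustic and visual feature vectors, followed by
# PCA or LDA for dimensionality reduction.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_frames, n_audio, n_video, n_classes = 500, 39, 20, 10

audio_feats = rng.normal(size=(n_frames, n_audio))   # e.g., MFCC-like vectors
video_feats = rng.normal(size=(n_frames, n_video))   # e.g., lip-shape parameters
labels = rng.integers(0, n_classes, size=n_frames)   # frame-level class labels

# Low-level fusion: frame-synchronous concatenation of the two feature streams.
fused = np.hstack([audio_feats, video_feats])        # shape (n_frames, 59)

# Unsupervised reduction: keep the directions of largest variance.
pca_feats = PCA(n_components=15).fit_transform(fused)

# Supervised reduction: LDA projects onto at most (n_classes - 1) dimensions
# chosen to separate the class means.
lda_feats = LinearDiscriminantAnalysis(n_components=n_classes - 1).fit_transform(fused, labels)

print(pca_feats.shape, lda_feats.shape)              # (500, 15) (500, 9)
```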
A review of speech-based bimodal recognition <s> 1) Information Combination: <s> We present a new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases. The new procedure can be viewed either as a modular version of a multilayer supervised network, or as an associative version of competitive learning. It therefore provides a new link between these two apparently different approaches. We demonstrate that the learning procedure divides up a vowel discrimination task into appropriate subtasks, each of which can be solved by a very simple expert network. <s> BIB001 </s> A review of speech-based bimodal recognition <s> 1) Information Combination: <s> We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIMs). Learning is treated as a maximum likelihood problem; in particular, we present an expectation-maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an online learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. <s> BIB002 </s> A review of speech-based bimodal recognition <s> 1) Information Combination: <s> Methods of integrating audio and visual information in an audiovisual HMM-based ASR system are investigated. Experiments involve discrimination of a set of 22 consonants, with various integration strategies. The role of the visual subsystem is varied; for example, in one run, the subsystem attempts to classify all 22 consonants, while in other runs it attempts only broader classifications. In a second experiment, a new HMM formulation is employed, which incorporates the integration into the HMM at a pre-categorical stage. A single variable parameter allows the relative contribution of audio and visual information to be controlled. This form of integration can be very easily incorporated into existing audio-based continuous speech recognizers. > <s> BIB003 </s> A review of speech-based bimodal recognition <s> 1) Information Combination: <s> Audio-visual person recognition promises higher recognition accuracy than recognition in either domain in isolation. To reach this goal, special attention should be given to the strategies for combining the acoustic and visual sensory modalities. The paper presents a comparative assessment of three decision level data fusion techniques for person identification. Under mismatched training and test noise conditions, Bayesian inference and Dempster-Shafer theory are shown to outperform possibility theory. For these mismatched noise conditions, all three techniques result in compromising integration. Under matched training and test noise conditions, the three techniques yield similar error rates approaching the more accurate of the two sensory modalities, and show signs of leading to enhancing integration at low acoustic noise levels. The paper also shows that automatic identification of identical twins is possible, and that lip margins convey a high level of speaker identity information. <s> BIB004 </s> A review of speech-based bimodal recognition <s> 1) Information Combination: <s> The use of clustering algorithms for decision-level data fusion is proposed. 
Person authentication results coming from several modalities (e.g., still image, speech), are combined by using fuzzy k-means (FKM) and fuzzy vector quantization (FVQ) algorithms, and a median radial basis function (MRBF) network. The quality measure of the modalities data is used for fuzzification. Two modifications of the FKM and FVQ algorithms, based on a fuzzy vector distance definition, are proposed to handle the fuzzy data and utilize the quality measure. Simulations show that fuzzy clustering algorithms have better performance compared to the classical clustering algorithms and other known fusion algorithms. MRBF has better performance especially when two modalities are combined. Moreover, the use of the quality via the proposed modified algorithms increases the performance of the fusion system. <s> BIB005 </s> A review of speech-based bimodal recognition <s> 1) Information Combination: <s> Biometric person identity authentication is gaining more and more attention. The authentication task performed by an expert is a binary classification problem: reject or accept identity claim. Combining experts, each based on a different modality (speech, face, fingerprint, etc.), increases the performance and robustness of identity authentication systems. In this context, a key issue is the fusion of the different experts for taking a final decision (i.e., accept or reject identity claim). We propose to evaluate different binary classification schemes (support vector machine, multilayer perceptron, C4.5 decision tree, Fisher's linear discriminant, Bayesian classifier) to carry on the fusion. The experimental results show that support vector machines and Bayesian classifier achieve almost the same performances, and both outperform the other evaluated classifiers. <s> BIB006
A common technique for post-categorical fusion is the linear combination of the scores output by the single-modality classifiers , BIB003 . Geometric averaging is also applied at times; a popular approach for combining HMMs is decision fusion implemented as a product of the likelihoods of a pair of uncoupled audio and visual HMMs. DTW is sometimes used, along the sequence of feature vectors, to optimize the path through class hypotheses . The combination of classifiers possessing localized expertise can give a better estimate of the decision boundary between classes. This has motivated the development of the mixture of experts (MOE) approach BIB001 . An MOE consists of a parallel configuration of experts, whose outputs are dynamically integrated by the outputs of a trainable gating network. The integration is a weighted sum, whose weights are learned estimates of the correctness of each expert, given the current input. MOEs can be incorporated into a tree-like architecture known as hierarchical mixture of experts (HME) BIB002 . One difficulty with using HMEs is that the selection of appropriate model parameters (number of levels, branching factor of the tree, architecture of experts) may require a good insight into the data or problem space under consideration. Neural networks, Bayesian inference, Dempster-Shafer theory, and possibility theory have also provided frameworks for decision fusion BIB004 , . Integration by neural network is typically implemented by neurons whose weighted inputs are connected to the outputs of single-modality classifiers. Bayesian inference uses Bayes' rule to calculate a posteriori bimodal class probabilities from the a priori class probabilities and the class conditional probabilities of the observed unimodal classifier outputs. Dempster-Shafer theory of evidence is a generalization of Bayesian probability theory. The bimodal belief in each possible class is computed by applying Dempster's rule of combination to the basic probability assignment in support of each class. Possibility theory is based on fuzzy sets. The bimodal possibility for each class is computed by combining the possibility distributions of classifier outputs. Although possibility theory and Dempster-Shafer theory of evidence are meant to provide more robust frameworks than Bayesian inference for combining uncertain or imprecise information, comparative assessment on bimodal recognition has shown that this may not be the case BIB004 . 2) Classification: Decision fusion can be formulated as a classification of the pattern of unimodal classifier outputs. The latter are grouped into a vector that is input to another classifier, which yields a classification decision representing the consensus among the unimodal classifiers. A variety of classifiers, acting as a decision fusion mechanism, have been evaluated. It is suggested in that, compared to kNN and decision trees, logistic regression offers the best accuracy and the lowest computational cost during recognition. It is shown in BIB005 that a median radial basis function network outperforms clustering based on fuzzy k-means or fuzzy VQ; the superiority of fuzzy clustering over conventional clustering is also shown. Comparison of the support vector machine (SVM), minimum cost Bayesian classifier, Fisher's linear discriminant, decision trees, and MLP showed that the MLP gives the worst accuracy BIB006 . The comparison also showed that the SVM and Bayesian classifiers have similar performance and that they outperform the other classifiers.
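The two score-combination ideas most relevant here, a fixed-weight linear combination and an input-dependent (MOE-style) gated combination, can be sketched as follows; the per-class scores, the fixed weight, and the gate logits are hypothetical values for illustration, and a real MOE would obtain the gate logits from a trained gating network.

```python
# Illustrative sketch (not any cited system): post-categorical fusion of
# per-class scores from an audio-only and a video-only classifier.
import numpy as np

def linear_score_fusion(audio_scores, video_scores, w_audio=0.7):
    """Weighted sum of per-class scores; w_audio is a tunable stream weight."""
    return w_audio * audio_scores + (1.0 - w_audio) * video_scores

def gated_fusion(audio_scores, video_scores, gate_logits):
    """MOE-style combination: gate_logits (one per expert) would normally be
    produced by a trained gating network from the current input."""
    gate = np.exp(gate_logits - np.max(gate_logits))
    gate = gate / gate.sum()                       # softmax -> mixing weights
    return gate[0] * audio_scores + gate[1] * video_scores

# Hypothetical per-class posterior scores for a 4-class task.
audio = np.array([0.10, 0.60, 0.20, 0.10])
video = np.array([0.30, 0.30, 0.35, 0.05])

print(np.argmax(linear_score_fusion(audio, video)))               # fused decision
print(np.argmax(gated_fusion(audio, video, np.array([1.2, 0.3]))))
```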
An SVM is a binary classifier grounded in statistical learning theory. By maximizing the margin between classes, SVMs aim to retain good generalization capability even in pattern spaces of high dimensionality. A downside of SVMs is that an inappropriate kernel function can result in poor recognition accuracy; hence the need for careful kernel selection.
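As a concrete, hypothetical illustration of decision fusion cast as classification, the sketch below trains a second-stage SVM on two-dimensional vectors of unimodal verification scores using the scikit-learn SVC interface; the score distributions, kernel choice, and hyperparameters are assumptions made for illustration only.

```python
# Illustrative sketch (hypothetical data): decision fusion as classification,
# where the vector of unimodal classifier outputs (audio score, video score)
# is fed to a second-stage SVM that accepts or rejects an identity claim.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 400
# Columns: [audio-expert score, video-expert score]; rows: verification trials.
genuine  = rng.normal(loc=[0.8, 0.7], scale=0.15, size=(n, 2))
impostor = rng.normal(loc=[0.3, 0.4], scale=0.15, size=(n, 2))
X = np.vstack([genuine, impostor])
y = np.hstack([np.ones(n), np.zeros(n)])           # 1 = accept, 0 = reject

# RBF kernel chosen arbitrarily; as noted above, the kernel and its
# hyperparameters must be selected carefully, e.g., by cross-validation.
fusion_svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print(fusion_svm.predict([[0.75, 0.65], [0.35, 0.45]]))   # should print [1. 0.]
```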
A review of speech-based bimodal recognition <s> 3) Stochastic Modeling of Coupled Time Series: <s> Methods of integrating audio and visual information in an audiovisual HMM-based ASR system are investigated. Experiments involve discrimination of a set of 22 consonants, with various integration strategies. The role of the visual subsystem is varied; for example, in one run, the subsystem attempts to classify all 22 consonants, while in other runs it attempts only broader classifications. In a second experiment, a new HMM formulation is employed, which incorporates the integration into the HMM at a pre-categorical stage. A single variable parameter allows the relative contribution of audio and visual information to be controlled. This form of integration can be very easily incorporated into existing audio-based continuous speech recognizers. > <s> BIB001 </s> A review of speech-based bimodal recognition <s> 3) Stochastic Modeling of Coupled Time Series: <s> We present algorithms for coupling and training hidden Markov models (HMMs) to model interacting processes, and demonstrate their superiority to conventional HMMs in a vision task classifying two-handed actions. HMMs are perhaps the most successful framework in perceptual computing for modeling and classifying dynamic behaviors, popular because they offer dynamic time warping, a training algorithm and a clear Bayesian semantics. However the Markovian framework makes strong restrictive assumptions about the system generating the signal-that it is a single process having a small number of states and an extremely limited state memory. The single-process model is often inappropriate for vision (and speech) applications, resulting in low ceilings on model performance. Coupled HMMs provide an efficient way to resolve many of these problems, and offer superior training speeds, model likelihoods, and robustness to initial conditions. <s> BIB002 </s> A review of speech-based bimodal recognition <s> 3) Stochastic Modeling of Coupled Time Series: <s> This paper describes a speech recognition system that uses both acoustic and visual speech information to improve recognition performance in noisy environments. The system consists of three components: a visual module; an acoustic module; and a sensor fusion module. The visual module locates and tracks the lip movements of a given speaker and extracts relevant speech features. This task is performed with an appearance-based lip model that is learned from example images. Visual speech features are represented by contour information of the lips and grey-level information of the mouth area. The acoustic module extracts noise-robust features from the audio signal. Finally the sensor fusion module is responsible for the joint temporal modeling of the acoustic and visual feature streams and is realized using multistream hidden Markov models (HMMs). The multistream method allows the definition of different temporal topologies and levels of stream integration and hence enables the modeling of temporal dependencies more accurately than traditional approaches. We present two different methods to learn the asynchrony between the two modalities and how to incorporate them in the multistream models. The superior performance for the proposed system is demonstrated on a large multispeaker database of continuously spoken digits. 
On a recognition task at 15 dB acoustic signal-to-noise ratio (SNR), acoustic perceptual linear prediction (PLP) features lead to 56% error rate, noise robust RASTA-PLP (relative spectra) acoustic features to 7.2% error rate and combined noise robust acoustic features and visual features to 2.5% error rate. <s> BIB003 </s> A review of speech-based bimodal recognition <s> 3) Stochastic Modeling of Coupled Time Series: <s> Hidden Markov models (HMMs) have proven to be one of the most widely used tools for learning probabilistic models of time series data. In an HMM, information about the past is conveyed through a single discrete variable—the hidden state. We discuss a generalization of HMMs in which this state is factored into multiple state variables and is therefore represented in a distributed manner. We describe an exact algorithm for inferring the posterior probabilities of the hidden state variables given the observations, and relate it to the forward–backward algorithm for HMMs and to algorithms for more general graphical models. Due to the combinatorial nature of the hidden state representation, this exact algorithm is intractable. As in other intractable systems, approximate inference can be carried out using Gibbs sampling or variational methods. Within the variational framework, we present a structured approximation in which the the state variables are decoupled, yielding a tractable algorithm for learning the parameters of the model. Empirical comparisons suggest that these approximations are efficient and provide accurate alternatives to the exact methods. Finally, we use the structured approximation to model Bach‘s chorales and show that factorial HMMs can capture statistical structure in this data set which an unconstrained HMM cannot. <s> BIB004 </s> A review of speech-based bimodal recognition <s> 3) Stochastic Modeling of Coupled Time Series: <s> We study Markov models whose state spaces arise from the Cartesian product of two or more discrete random variables. We show how to parameterize the transition matrices of these models as a convex combination—or mixture—of simpler dynamical models. The parameters in these models admit a simple probabilistic interpretation and can be fitted iteratively by an Expectation-Maximization (EM) procedure. We derive a set of generalized Baum-Welch updates for factorial hidden Markov models that make use of this parameterization. We also describe a simple iterative procedure for approximately computing the statistics of the hidden states. Throughout, we give examples where mixed memory models provide a useful representation of complex stochastic processes. <s> BIB005
The fusion of acoustic and visual speech can be cast as a probabilistic modeling of coupled time series. Such modeling may capture the potentially useful coupling or conditional dependence between the two modalities. The level of synchronization between acoustic and visual speech varies along an utterance; hence, a flexible framework for modeling the asynchrony is required. Factorial HMMs BIB004 , BIB005 , Boltzmann chains and their variants (multistream HMMs BIB003 , and coupled HMMs BIB002 ) are possible stochastic models for the combination of time-coupled modalities (see Fig. 4 ). Factorial HMMs explicitly model intra-process state structure and inter-process coupling; this makes them suitable for bimodal recognition, where each process could correspond to a modality. The state space of a factorial HMM is a Cartesian product of the states of its component HMMs. The modeling of inter-process coupling has the potential to reduce the sensitivity to unwanted intra-process variation during a recognition trial and hence may enhance recognition robustness. Variants of factorial HMMs have been shown to be superior to conventional HMMs for modeling interacting processes, such as two-handed gestures BIB002 or acoustic and visual speech BIB003 , . A simpler pre-categorical fusion approach for HMM-based classifiers is described in BIB001 . In this approach, the weighted product of the emission probabilities of acoustic and visual speech feature vectors is used during Viterbi decoding for a bimodal discrete HMM.
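The stream-exponent idea can be illustrated with a minimal two-stream Viterbi decoder in which the joint emission log-likelihood of each state is a weighted sum of the audio and visual emission log-likelihoods (equivalently, a weighted product of the likelihoods); all model parameters below are toy values, and the weight of 0.7 is an arbitrary assumption rather than a value from the cited systems.

```python
# Minimal sketch of stream-weighted Viterbi decoding for a two-stream discrete HMM.
import numpy as np

def viterbi_two_stream(logA, logB_audio, logB_video, log_pi,
                       obs_audio, obs_video, w_audio=0.7):
    """logA: (S,S) log transition matrix; logB_*: (S,K) discrete emission
    log-probabilities per stream; obs_*: observation symbol sequences."""
    S, T = logA.shape[0], len(obs_audio)
    w_video = 1.0 - w_audio
    delta = np.zeros((T, S))
    psi = np.zeros((T, S), dtype=int)

    def emit(t):  # stream-weighted emission log-likelihood for all states
        return w_audio * logB_audio[:, obs_audio[t]] + w_video * logB_video[:, obs_video[t]]

    delta[0] = log_pi + emit(0)
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA        # (from_state, to_state)
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = np.max(scores, axis=0) + emit(t)

    path = [int(np.argmax(delta[-1]))]               # backtrack best state path
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy 2-state model with 3 audio symbols and 2 visual symbols.
logA = np.log([[0.8, 0.2], [0.3, 0.7]])
logB_a = np.log([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])
logB_v = np.log([[0.7, 0.3], [0.2, 0.8]])
log_pi = np.log([0.6, 0.4])
print(viterbi_two_stream(logA, logB_a, logB_v, log_pi, [0, 1, 2, 2], [0, 0, 1, 1]))
```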
A review of speech-based bimodal recognition <s> E. Adaptive Fusion <s> We present work on improving the performance of automated speech recognizers by using additional visual information: (lip-/speechreading); achieving error reduction of up to 50%. This paper focuses on different methods of combining the visual and acoustic data to improve the recognition performance. We show this on an extension of an existing state-of-the-art speech recognition system, a modular MS-TDNN. We have developed adaptive combination methods at several levels of the recognition network. Additional information such as estimated signal-to-noise ratio (SNR) is used in some cases. The results of the different combination methods are shown for clean speech and data with artificial noise (white, music, motor). The new combination methods adapt automatically to varying noise conditions making hand-tuned parameters unnecessary. <s> BIB001 </s> A review of speech-based bimodal recognition <s> E. Adaptive Fusion <s> Multisensor data fusion is an emerging technology applied to Department of Defense (DoD) areas such as automated target recognition, battlefield surveillance, and guidance and control of autonomous vehicles, and to non-DoD applications such as monitoring of complex machinery, medical diagnosis, and smart buildings. Techniques for multisensor data fusion are drawn from a wide range of areas including artificial intelligence, pattern recognition, statistical estimation and other areas. This paper provides a tutorial on data fusion, introducing data fusion applications, process models, and identification of applicable techniques. Comments are made on the state-of-the-art in data fusion. <s> BIB002 </s> A review of speech-based bimodal recognition <s> E. Adaptive Fusion <s> The use of clustering algorithms for decision-level data fusion is proposed. Person authentication results coming from several modalities (e.g., still image, speech), are combined by using fuzzy k-means (FKM) and fuzzy vector quantization (FVQ) algorithms, and a median radial basis function (MRBF) network. The quality measure of the modalities data is used for fuzzification. Two modifications of the FKM and FVQ algorithms, based on a fuzzy vector distance definition, are proposed to handle the fuzzy data and utilize the quality measure. Simulations show that fuzzy clustering algorithms have better performance compared to the classical clustering algorithms and other known fusion algorithms. MRBF has better performance especially when two modalities are combined. Moreover, the use of the quality via the proposed modified algorithms increases the performance of the fusion system. <s> BIB003 </s> A review of speech-based bimodal recognition <s> E. Adaptive Fusion <s> The integration of multiple classifiers promises higher classification accuracy and robustness than can be obtained with a single classifier. This paper proposes a new adaptive technique for classifier integration based on a linear combination model. The proposed technique is shown to exhibit robustness to a mismatch between test and training conditions. It often outperforms the most accurate of the fused information sources. A comparison between adaptive linear combination and non-adaptive Bayesian fusion shows that, under mismatched test and training conditions, the former is superior to the latter in terms of identification accuracy and insensitivity to information source distortion. <s> BIB004 </s> A review of speech-based bimodal recognition <s> E. 
Adaptive Fusion <s> Audiovisual speech recognition involves fusion of the audio and video sensors for phonetic identification. There are three basic ways to fuse data streams for taking a decision such as phoneme identification: data-to-decision, decision-to-decision, and data-to-data. This leads to four possible models for audiovisual speech recognition, that is direct identification in the first case, separate identification in the second one, and two variants of the third early integration case, namely dominant recoding or motor recoding. However, no systematic comparison of these models is available in the literature. We propose an implementation of these four models, and submit them to a benchmark test. For this aim, we use a noisy-vowel corpus tested on two recognition paradigms in which the systems are tested at noise levels higher than those used for learning. In one of these paradigms, the signal-to-noise ratio (SNR) value is provided to the recognition systems, in the other it is not. We also introduce a new criterion for evaluating performances, based on transmitted information on individual phonetic features. In light of the compared performances of the four models with the two recognition paradigms, we discuss the advantages and drawbacks of these models, leading to proposals for data representation, fusion architecture, and control of the fusion process through sensor reliability. <s> BIB005 </s> A review of speech-based bimodal recognition <s> E. Adaptive Fusion <s> This paper describes a speech recognition system that uses both acoustic and visual speech information to improve recognition performance in noisy environments. The system consists of three components: a visual module; an acoustic module; and a sensor fusion module. The visual module locates and tracks the lip movements of a given speaker and extracts relevant speech features. This task is performed with an appearance-based lip model that is learned from example images. Visual speech features are represented by contour information of the lips and grey-level information of the mouth area. The acoustic module extracts noise-robust features from the audio signal. Finally the sensor fusion module is responsible for the joint temporal modeling of the acoustic and visual feature streams and is realized using multistream hidden Markov models (HMMs). The multistream method allows the definition of different temporal topologies and levels of stream integration and hence enables the modeling of temporal dependencies more accurately than traditional approaches. We present two different methods to learn the asynchrony between the two modalities and how to incorporate them in the multistream models. The superior performance for the proposed system is demonstrated on a large multispeaker database of continuously spoken digits. On a recognition task at 15 dB acoustic signal-to-noise ratio (SNR), acoustic perceptual linear prediction (PLP) features lead to 56% error rate, noise robust RASTA-PLP (relative spectra) acoustic features to 7.2% error rate and combined noise robust acoustic features and visual features to 2.5% error rate. <s> BIB006
In most fusion approaches to pattern recognition, fusion parameters are determined at training time and remain frozen for all subsequent recognition trials. However, optimal fusion requires a good match between the fusion parameters and the factors that affect the input patterns. Nonadaptive data fusion does not guarantee such a match and hence, pattern variation may lead to suboptimal fusion, which may even result in worse accuracy than unimodal recognition BIB002 . Fusion parameters should preferably adapt to changes in recognition conditions. Such dynamic parameters can be based on estimates of signal-to-noise ratio BIB006 , BIB001 , entropy measures , BIB001 , degree of voicing in the acoustic speech , or measures relating to the perceived quality of the unimodal classifier output scores BIB003 , BIB004 , , BIB005 , .
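A toy sketch of such adaptation is given below: the audio stream weight is recomputed at recognition time from an estimated acoustic SNR, so that the visual stream dominates as the audio degrades. The sigmoid mapping and its constants are assumptions chosen for illustration and do not reproduce the specific schemes of the cited works.

```python
# Toy illustration (not any cited scheme) of adaptive fusion driven by an
# estimate of the acoustic SNR.
import numpy as np

def audio_weight_from_snr(snr_db, midpoint_db=10.0, slope=0.3,
                          w_min=0.1, w_max=0.9):
    """Map estimated SNR (dB) to an audio stream weight in [w_min, w_max]."""
    s = 1.0 / (1.0 + np.exp(-slope * (snr_db - midpoint_db)))
    return w_min + (w_max - w_min) * s

def adaptive_fusion(audio_scores, video_scores, snr_db):
    w = audio_weight_from_snr(snr_db)
    return w * audio_scores + (1.0 - w) * video_scores

audio = np.array([0.2, 0.7, 0.1])    # hypothetical per-class scores
video = np.array([0.5, 0.2, 0.3])
for snr in (0, 10, 25):              # noisy -> clean acoustic conditions
    fused = adaptive_fusion(audio, video, snr)
    print(snr, np.round(fused, 3), "decision:", int(np.argmax(fused)))
```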
A review of speech-based bimodal recognition <s> B. Recognition Accuracy <s> Methods of integrating audio and visual information in an audiovisual HMM-based ASR system are investigated. Experiments involve discrimination of a set of 22 consonants, with various integration strategies. The role of the visual subsystem is varied; for example, in one run, the subsystem attempts to classify all 22 consonants, while in other runs it attempts only broader classifications. In a second experiment, a new HMM formulation is employed, which incorporates the integration into the HMM at a pre-categorical stage. A single variable parameter allows the relative contribution of audio and visual information to be controlled. This form of integration can be very easily incorporated into existing audio-based continuous speech recognizers. > <s> BIB001 </s> A review of speech-based bimodal recognition <s> B. Recognition Accuracy <s> We present the development of a modular system for flexible human-computer interaction via speech. The speech recognition component integrates acoustic and visual information (automatic lip-reading) improving overall recognition, especially in noisy environments. The image of the lips, constituting the visual input, is automatically extracted from the camera picture of the speaker's face by the lip locator module. Finally, the speaker's face is automatically acquired and followed by the face tracker sub-system. Integration of the three functions results in the first bi-modal speech recognizer allowing the speaker reasonable freedom of movement within a possibly noisy room while continuing to communicate with the computer via voice. Compared to audio-alone recognition, the combined system achieves a 20 to 50 percent error rate reduction for various signal/noise conditions. <s> BIB002 </s> A review of speech-based bimodal recognition <s> B. Recognition Accuracy <s> We present work on improving the performance of automated speech recognizers by using additional visual information: (lip-/speechreading); achieving error reduction of up to 50%. This paper focuses on different methods of combining the visual and acoustic data to improve the recognition performance. We show this on an extension of an existing state-of-the-art speech recognition system, a modular MS-TDNN. We have developed adaptive combination methods at several levels of the recognition network. Additional information such as estimated signal-to-noise ratio (SNR) is used in some cases. The results of the different combination methods are shown for clean speech and data with artificial noise (white, music, motor). The new combination methods adapt automatically to varying noise conditions making hand-tuned parameters unnecessary. <s> BIB003 </s> A review of speech-based bimodal recognition <s> B. Recognition Accuracy <s> Audiovisual speech recognition involves fusion of the audio and video sensors for phonetic identification. There are three basic ways to fuse data streams for taking a decision such as phoneme identification: data-to-decision, decision-to-decision, and data-to-data. This leads to four possible models for audiovisual speech recognition, that is direct identification in the first case, separate identification in the second one, and two variants of the third early integration case, namely dominant recoding or motor recoding. However, no systematic comparison of these models is available in the literature. We propose an implementation of these four models, and submit them to a benchmark test. 
For this aim, we use a noisy-vowel corpus tested on two recognition paradigms in which the systems are tested at noise levels higher than those used for learning. In one of these paradigms, the signal-to-noise ratio (SNR) value is provided to the recognition systems, in the other it is not. We also introduce a new criterion for evaluating performances, based on transmitted information on individual phonetic features. In light of the compared performances of the four models with the two recognition paradigms, we discuss the advantages and drawbacks of these models, leading to proposals for data representation, fusion architecture, and control of the fusion process through sensor reliability. <s> BIB004 </s> A review of speech-based bimodal recognition <s> B. Recognition Accuracy <s> This paper describes a speech recognition system that uses both acoustic and visual speech information to improve recognition performance in noisy environments. The system consists of three components: a visual module; an acoustic module; and a sensor fusion module. The visual module locates and tracks the lip movements of a given speaker and extracts relevant speech features. This task is performed with an appearance-based lip model that is learned from example images. Visual speech features are represented by contour information of the lips and grey-level information of the mouth area. The acoustic module extracts noise-robust features from the audio signal. Finally the sensor fusion module is responsible for the joint temporal modeling of the acoustic and visual feature streams and is realized using multistream hidden Markov models (HMMs). The multistream method allows the definition of different temporal topologies and levels of stream integration and hence enables the modeling of temporal dependencies more accurately than traditional approaches. We present two different methods to learn the asynchrony between the two modalities and how to incorporate them in the multistream models. The superior performance for the proposed system is demonstrated on a large multispeaker database of continuously spoken digits. On a recognition task at 15 dB acoustic signal-to-noise ratio (SNR), acoustic perceptual linear prediction (PLP) features lead to 56% error rate, noise robust RASTA-PLP (relative spectra) acoustic features to 7.2% error rate and combined noise robust acoustic features and visual features to 2.5% error rate. <s> BIB005
Bimodal sensor fusion can yield (see Tables I-III): 1) better classification accuracy than either modality ("enhancing fusion," which is the ultimate target of sensor fusion); 2) classification accuracy lying between the accuracies of the two modalities ("compromising fusion"); 3) lower classification accuracy than the least accurate modality ("attenuating fusion"). "Enhancing," "compromising," and "attenuating" fusion is terminology adapted from . When audio-visual fusion results in improved accuracy, it is often observed that intermediate unimodal accuracy gives a higher relative improvement in accuracy than low or high unimodal accuracy , BIB003 . In particular, the "law of diminishing returns" seems to apply when unimodal accuracy changes from intermediate to high. Most findings show that audio-visual fusion can counteract a degradation of acoustic speech BIB002 , BIB005 . Audio-visual fusion is therefore a viable alternative, or complement, to signal processing techniques that try to minimize the effect of acoustic noise degradation on recognition accuracy BIB005 . Although it is sometimes contended that the level at which fusion is performed determines recognition accuracy (see Section IV-B), published results reveal that none of the levels is consistently superior to the others. It is very likely that recognition accuracy is not determined solely by the level at which the fusion is applied, but also by the particular fusion technique and training or test regime used . For example, BIB001 shows nearly equal improvement in speech recognition accuracy accruing from either pre-categorical or post-categorical audio-visual fusion. It is also observed in BIB004 that feature fusion and decision fusion yield the same speech recognition accuracy. However, shows that post-categorical (high-level) audio-visual fusion yields better speech recognition accuracy than pre-categorical (low-level) fusion; the worst accuracy is obtained with intermediate-level fusion.
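The terminology can be made concrete with a small helper that labels a fusion outcome from the two unimodal accuracies and the bimodal accuracy; the numbers in the example are hypothetical.

```python
# Tiny helper reflecting the terminology defined above.
def fusion_outcome(acc_audio, acc_video, acc_bimodal):
    lo, hi = sorted((acc_audio, acc_video))
    if acc_bimodal > hi:
        return "enhancing"      # better than either modality
    if acc_bimodal < lo:
        return "attenuating"    # worse than the least accurate modality
    return "compromising"       # bounded by the unimodal accuracies

print(fusion_outcome(0.70, 0.55, 0.81))   # enhancing
print(fusion_outcome(0.70, 0.55, 0.62))   # compromising
print(fusion_outcome(0.70, 0.55, 0.50))   # attenuating
```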
A review of speech-based bimodal recognition <s> C. Performance Assessment Issues <s> The measured performance of any audio-visual processing or analysis technique is inevitably influenced by the database material used in the measurement. Careful consideration should therefore be given to those factors affecting the database content. This paper presents the design issues for the DAVID audio-visual database. First, a number of audio-visual databases are summarised, and the database design issues are discussed. Finally, the content and quality assessment results for DAVID are given. <s> BIB001 </s> A review of speech-based bimodal recognition <s> C. Performance Assessment Issues <s> The primary goal of the M2VTS project is to address the issue of secured access to buildings or multi-media services by the use of automatic person verification based on multimodal strategies (secured access based on speech, face images and other information). This paper presents an overview of the multimodal face database recorded at UCL premises for the purpose of research applications inside the M2VTS project. This database offers synchronized video and speech data as well as image sequences allowing to access multiple views of a face. This material should permit the design and the testing of identification strategies based on speech andro labial analysis, frontal and/or profile face analysis as well as 3-D analysis thanks to the multiple views. The M2VTS Database is available to any non-commercial user on request to the European Language Resource Agency. <s> BIB002 </s> A review of speech-based bimodal recognition <s> C. Performance Assessment Issues <s> Keywords: vision Reference EPFL-CONF-82502 URL: ftp://ftp.idiap.ch/pub/papers/vision/avbpa99.pdf Record created on 2006-03-10, modified on 2017-05-10 <s> BIB003
It is difficult to generalize some findings reported in the bimodal recognition literature and to establish a meaningful comparison of recognition techniques with respect to published recognition accuracy figures. Notably, not all systems are fully automatic, and there are no universally accepted test databases or performance assessment methodologies. In addition, the majority of reported bimodal recognition figures are for relatively small tasks in terms of vocabulary, grammar, or distribution of speakers. Another problem with most published findings is the lack of rigor in performance assessment methodology. Most results quote empirically determined error rates as point estimates, and findings are often based on inferences made without reference to the confidence intervals of the estimates or to the statistical significance of any observed differences. To permit the drawing of objective conclusions from empirical investigations, statistical decision theory should guide the interpretation of results. Most of the reported results are based on data captured in controlled laboratory environments. Most techniques have not been tested in real-world environments. Performance degradation is expected in such environments, particularly if the modeling and fusion techniques are not adaptive. Real-world environments are characterized by a comparatively limited level of control over operational factors such as acoustic and electromagnetic noise, illumination and overall image quality, and ruggedness of data capture equipment, as well as head pose, facial appearance, and the physiological or emotional state of the speaker. These sources of variability could lead to a mismatch between test and training conditions and hence potentially result in degraded recognition accuracy. Although the need for widely accepted benchmark databases has been asserted, there is a paucity of databases covering the breadth and depth of research areas in bimodal recognition. Typical limitations of most existing bimodal recognition databases are: small population; narrow phonetic coverage; isolated words; lack of synchronization between audio and video streams; and absence of certain visual cues. There is a pressing need for the development of readily available good benchmark databases. The "DAVID" BIB001 , "M2VTS" BIB002 , "XM2VTSDB" BIB003 , and "ViaVoice Audio-Visual" databases represent positive efforts toward the fulfilment of this need.
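As an illustration of the kind of statistical reporting argued for above, the sketch below computes a Wilson score confidence interval for an empirically measured error rate; the error counts and test-set sizes are made-up examples.

```python
# Supporting sketch: an empirically measured error rate is only a point
# estimate, and its uncertainty depends on the test-set size. The Wilson
# score interval is one standard confidence interval for a binomial proportion.
import math

def wilson_interval(errors, trials, z=1.96):
    """95% (z=1.96) Wilson score interval for an error-rate estimate."""
    p = errors / trials
    denom = 1.0 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return centre - half, centre + half

# Two hypothetical systems evaluated on test sets of different sizes:
for errors, trials in [(12, 200), (120, 2000)]:
    lo, hi = wilson_interval(errors, trials)
    print(f"error rate {errors/trials:.1%}: 95% CI [{lo:.1%}, {hi:.1%}]")
```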
A review of speech-based bimodal recognition <s> B. Research Avenues <s> We present an approach to combine the optical motion analysis of the lips and acoustic voice analysis of defined single words for identifying the people speaking. Due to the independence of the different data sources, a higher reliability of the results in comparison with simple optical lip reading is observed. The classification of the preprocessed data is done by synergetic computers, which have attracted increasing attention as robust algorithms for solving industrial classification tasks. Special potential of synergetic computers lies in their close mathematical similarity to self-organized phenomena in nature. Therefore they present a clear perspective for hardware realizations. We propose that the combination of motion and voice analysis offers a possibility for realizing robust access control systems. > <s> BIB001 </s> A review of speech-based bimodal recognition <s> B. Research Avenues <s> This paper describes a new approach for speaker identification based on lipreading. Visual features are extracted from image sequences of the talking face and consist of shape parameters which describe the lip boundary and intensity parameters which describe the grey-level distribution of the mouth area. Intensity information is based on principal component analysis using eigenspaces which deform with the shape model. The extracted parameters account for both, speech dependent and speaker dependent information. We built spatio-temporal speaker models based on these features, using HMMs with mixtures of Gaussians. Promising results were obtained for text dependent and text independent speaker identification tests performed on a small video database. <s> BIB002 </s> A review of speech-based bimodal recognition <s> B. Research Avenues <s> The objective of this work is a computationally efficient method for inferring vocal tract shape trajectories from acoustic speech signals. We use an multilayer perceptron (MLP) to model the vocal tract shape-to-acoustics mapping, then in an analysis-by-synthesis approach, optimise an objective function that includes both the accuracy of the spectrum approximation and the credibility of the vocal tract dynamics. This optimisation carries out gradient descent using backpropagation of derivatives through the MLP. Employing a series of MLPs of increasing order avoids getting trapped in local optima caused by the many-to-one mapping between vocal tract shapes and acoustics. We obtain two orders of magnitude speed increase compared with our previous methods using codebooks and direct optimisation of a synthesiser. <s> BIB003 </s> A review of speech-based bimodal recognition <s> B. Research Avenues <s> This paper deals with a noisy speech enhancement technique based on the fusion of auditory and visual information. We first present the global structure of the system, and then we focus on the tool we used to melt both sources of information. The whole noise reduction system is implemented in the context of vowel transitions corrupted with white noise. A complete evaluation of the system in this context is presented, including distance measures, Gaussian classification scores, and a perceptive test. The results are very promising. <s> BIB004
There is a need for research into bimodal recognition capable of adapting its pattern modeling and fusion knowledge to the prevailing recognition conditions. Further research into the nesting of fusion modules (an approach called meta-fusion in ) also promises improved recognition accuracy and easier handling of complexity. The modeling of the asynchrony between the two channels is also an important research issue. Furthermore, the challenges of speaker adaptation and of recognizing spontaneous, continuous speech require further investigation within the framework of bimodal recognition. Also, to combat data variability, the symbiotic combination of sensor fusion with mature techniques developed for robust unimodal recognition is a worthwhile research avenue. In addition, further advances are required in the synergetic combination of speech with other channels, such as hand gestures and facial expressions, to reduce possible semantic conflicts in spoken communication. Despite the potential gains in accuracy and robustness afforded by bimodal recognition, the latter invariably results in higher storage and computational costs than unimodal recognition. To make the implementation of real-world applications tractable, the development of optimized and robust visual-speech segmentation, feature extraction, and modeling techniques is a worthwhile research avenue. Comparative studies of techniques should accompany such developments. Research efforts should also be directed at the joint use of visual and acoustic speech for estimating vocal tract shape, a difficult problem often known as the inversion task BIB003 . This could also be coupled with studies of the joint modeling of the two modalities. The close relationship between the articulatory and phonetic domains suggests that an articulatory representation of speech might be better suited for speech recognition, synthesis, and coding than the conventional spectral acoustic features BIB003 . Previous approaches to the inversion task have relied on acoustic speech alone. The multimodal nature of speech perception suggests that visual speech offers additional information for the acquisition of a physical model of the vocal tract. An investigation relevant to the inversion task within a bimodal framework is presented in BIB004 . The effect of visual speech variability on bimodal recognition accuracy has not been investigated as much as its acoustic counterpart . As a result, it is difficult to vouch strongly for the benefit of using visual speech for bimodal recognition in unconstrained visual environments. Studies of the effect of the following factors are called for, particularly in large-scale recognition tasks closely resembling typical real-world tasks: segmentation accuracy; video compression; image noise; occlusion; illumination; speaker pose; and facial expression or paraphernalia (such as facial hair, hats, and makeup). A study into the effects of some of these factors is given in . Although the multimodal character of spoken language has long been formally recognized and exploited, multimodal speaker recognition has not received the same attention. The surprisingly high speaker recognition accuracy obtained with visual speech , BIB002 , BIB001 warrants extensive research on visual speech, either alone or combined with acoustic speech, for speaker recognition. Research is also needed on the potential of bimodal recognition for alleviating the problem of speaker impersonation.
The study of how humans integrate audio-visual information could also be beneficial as a basis for developing robust and computationally efficient mechanisms or strategies for bimodal recognition by machine, particularly with regard to feature extraction, classification, and fusion.
Multilateration -- source localization from range difference measurements: Literature survey <s> INTRODUCTION <s> We consider a digital signal processing sensor array system, based on randomly distributed sensor nodes, for surveillance and source localization applications. In most array processing the sensor array geometry is fixed and known and the steering array vector/manifold information is used in beamformation. In this system, array calibration may be impractical due to unknown placement and orientation of the sensors with unknown frequency/spatial responses. This paper proposes a blind beamforming technique, using only the measured sensor data, to form either a sample data or a sample correlation matrix. The maximum power collection criterion is used to obtain array weights from the dominant eigenvector associated with the largest eigenvalue of a matrix eigenvalue problem. Theoretical justification of this approach uses a generalization of Szego's (1958) theory of the asymptotic distribution of eigenvalues of the Toeplitz form. An efficient blind beamforming time delay estimate of the dominant source is proposed. Source localization based on a least squares (LS) method for time delay estimation is also given. Results based on analysis, simulation, and measured acoustical sensor data show the effectiveness of this beamforming technique for signal enhancement and space-time filtering. <s> BIB001 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> INTRODUCTION <s> In this paper we present an efficient method to perform acoustic source localization and tracking using a distributed network of microphones. In this scenario, there is a trade-off between the localization performance and the expense of resources: in fact, a minimization of the localization error would require to use as many sensors as possible; at the same time, as the number of microphones increases, the cost of the network inevitably tends to grow, while in practical applications only a limited amount of resources is available. Therefore, at each time instant only a subset of the sensors should be enabled in order to meet the cost constraints. We propose a heuristic method for the optimal selection of this subset of microphones, using as distortion metrics the Cramer-Rao lower bound (CRLB) and as cost function the total distance between the selected sensors. The heuristic approach has been compared to an optimal algorithm, which searches the best sensor configuration among the full set of microphones, while satisfying the cost constraint. The proposed heuristic algorithm yields similar performance w.r.t. the full-search procedure, but at a much less computational cost. We show that this method can be used effectively in an acoustic source tracking application. <s> BIB002 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> INTRODUCTION <s> Microphone arrays sample the sound field in both space and time with the major objective being the extraction of the signal propagating from a desired direction-of-arrival (DOA). In order to reconstruct a spatial sinusoid from a set of discrete samples, the spatial sampling must occur at a rate greater than a half of the wavelength of the sinusoid. This principle has long been adapted to the microphone array context: in order to form an unambiguous beampattern, the spacing between elements in a microphone array needs to conform to this spatial Nyquist criterion. 
The implicit assumption behind the narrowband beampattern is that one may use linearity and Fourier analysis to describe the response of the array to an arbitrary wideband plane wave. In this paper, this assumption is analyzed. A formula for the broadband beampattern is derived. It is shown that in order to quantify the spatial filtering abilities of a broadband array, the incoming signal's bifrequency spectrum must be taken into account, particularly for nonstationary signals such as speech. Multi-dimensional Fourier analysis is then employed to derive the broadband spatial transform, which is shown to be the limiting case of the broadband beampattern as the number of sensors tends to infinity. The conditions for aliasing in broadband arrays are then determined by analyzing the effect of computing the broadband spatial transform with a discrete spatial aperture. It is revealed that the spatial Nyquist criterion has little importance for microphone arrays. Finally, simulation results show that the well-known steered response power (SRP) method is formulated with respect to stationary signals, and that modifications are necessary to properly form steered beams in nonstationary signal environments. <s> BIB003
As technologies relying on distributed (individual) sensor arrays, such as the Internet of Things, gain momentum, the question of how to exploit the acquired data efficiently becomes more and more important. One valuable piece of information that such arrays can provide is the location of the signal source, e.g. an RF emitter or a sound source. In this document, the focus is on the latter use case, localizing a sound source, but the reader is reminded that the discussed methods are essentially agnostic to the signal type, as long as range difference (RD) measurements are available. Specifically, we assume a large-aperture array of distributed mono microphones with potentially different gains, as opposed to distributed (compact) microphone arrays. The array geometry is assumed to be known in advance, and the microphones are already synchronized/syntonized. We further assume that all captured audio streams are readily available (i.e. a centralized processing architecture). Lastly, we assume the presence of a direct path (line-of-sight) and the overdetermined setting, i.e. the number of speech sources S is smaller than the number M of available microphones in the distributed array. This scenario imposes several technical constraints: 1. The large aperture implies significant spatial aliasing, which, along with the relatively small number of microphones, seriously degrades the performance of beamforming-based techniques, at least in the narrowband setting BIB003 . Approaches based on distributed beamforming, e.g. BIB002 BIB001 , could still be appealing if they operate in the wideband regime; unfortunately, the literature on beamforming with distributed mono microphones is scarce. 2. The absence of compact arrays prevents traditional Direction-of-Arrival (DOA) estimation.
Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> For the purpose of localizing a distant noisy target, or, conversely, calibrating a receiving array, the time delays defined by the propagation across the array of the target-generated signal wavefronts are estimated in the presence of sensor-to-sensor-independent array self-noise. The Cramer-Rao matrix bound for the vector delay estimate is derived, and used to show that either properly filtered beamformers or properly filtered systems of multiplier-correlators can be used to provide efficient estimates. The effect of suboptimally filtering the array outputs is discussed. <s> BIB001 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> The problem of position estimation from time difference of arrival (TDOA) measurements occurs in a range of applications from wireless communication networks to electronic warfare positioning. Correlation analysis of the transmitted signal to two receivers gives rise to one hyperbolic function. With more than two receivers, we can compute more hyperbolic functions, which ideally intersect in one unique point. With TDOA measurement uncertainty, we face a non-linear estimation problem. We suggest and compare a Monte Carlo based method for positioning and a gradient search algorithm using a nonlinear least squares framework. The former has the feature of being easily extended to a dynamic framework where a motion model of the transmitter is included. A small simulation study is presented. <s> BIB002 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> Time delay estimation has been a research topic of significant practical importance in many fields (radar, sonar, seismology, geophysics, ultrasonics, hands-free communications, etc.). It is a first stage that feeds into subsequent processing blocks for identifying, localizing, and tracking radiating sources. This area has made remarkable advances in the past few decades, and is continuing to progress, with an aim to create processors that are tolerant to both noise and reverberation. This paper presents a systematic overview of the state-of-the-art of time-delay-estimation algorithms ranging from the simple cross-correlation method to the advanced blind channel identification based techniques. We discuss the pros and cons of each individual algorithm, and outline their inherent relationships. We also provide experimental results to illustrate their performance differences in room acoustic environments where reverberation and noise are commonly encountered. <s> BIB003 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> In source localization from time difference of arrival, the impact of the sensor array geometry to the localization accuracy is not well understood yet. A first rigorous analysis can be found in B. Yang and J. Scheuing (2005). It derived sufficient and necessary conditions for optimum array geometry in terms of minimum Cramer-Rao bound. 
This paper continues the above work and studies theoretically the localization accuracy of two-dimensional sensor arrays. It addresses different issues: a) optimum vs. uniform angular array b) near-field vs. far-field array c) using all sensor pairs vs. those with a common reference sensor as required from spherical position estimators. The paper ends up with some new insights into the sensor placement problem <s> BIB004 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> The accuracy of a source location estimate is very sensitive to the accurate knowledge of receiver locations. This paper performs analysis and develops a solution for locating a moving source using time-difference-of-arrival (TDOA) and frequency-difference-of-arrival (FDOA) measurements in the presence of random errors in receiver locations. The analysis starts with the Crameacuter-Rao lower bound (CRLB) for the problem, and derives the increase in mean-square error (MSE) in source location estimate if the receiver locations are assumed correct but in fact have error. A solution is then proposed that takes the receiver error into account to reduce the estimation error, and it is shown analytically, under some mild approximations, to achieve the CRLB accuracy for far-field sources. The proposed solution is closed form, computationally efficient, and does not have divergence problem as in iterative techniques. Simulations corroborate the theoretical results and the good performance of the proposed method <s> BIB005 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> Microphone arrays sample the sound field in both space and time with the major objective being the extraction of the signal propagating from a desired direction-of-arrival (DOA). In order to reconstruct a spatial sinusoid from a set of discrete samples, the spatial sampling must occur at a rate greater than a half of the wavelength of the sinusoid. This principle has long been adapted to the microphone array context: in order to form an unambiguous beampattern, the spacing between elements in a microphone array needs to conform to this spatial Nyquist criterion. The implicit assumption behind the narrowband beampattern is that one may use linearity and Fourier analysis to describe the response of the array to an arbitrary wideband plane wave. In this paper, this assumption is analyzed. A formula for the broadband beampattern is derived. It is shown that in order to quantify the spatial filtering abilities of a broadband array, the incoming signal's bifrequency spectrum must be taken into account, particularly for nonstationary signals such as speech. Multi-dimensional Fourier analysis is then employed to derive the broadband spatial transform, which is shown to be the limiting case of the broadband beampattern as the number of sensors tends to infinity. The conditions for aliasing in broadband arrays are then determined by analyzing the effect of computing the broadband spatial transform with a discrete spatial aperture. It is revealed that the spatial Nyquist criterion has little importance for microphone arrays. 
Finally, simulation results show that the well-known steered response power (SRP) method is formulated with respect to stationary signals, and that modifications are necessary to properly form steered beams in nonstationary signal environments. <s> BIB006 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> In this paper, we show that minimization of the statistical dependence using broadband independent component analysis (ICA) can be successfully exploited for acoustic source localization. As the ICA signal model inherently accounts for the presence of several sources and multiple sound propagation paths, the ICA criterion offers a theoretically more rigorous framework than conventional techniques based on an idealized single-path and single-source signal model. This leads to algorithms which outperform other localization methods, especially in the presence of multiple simultaneously active sound sources and under adverse conditions, notably in reverberant environments. Three methods are investigated to extract the time difference of arrival (TDOA) information contained in the filters of a two-channel broadband ICA scheme. While for the first, the blind system identification (BSI) approach, the number of sources should be restricted to the number of sensors, the other methods, the averaged directivity pattern (ADP) and composite mapped filter (CMF) approaches can be used even when the number of sources exceeds the number of sensors. To allow fast tracking of moving sources, the ICA algorithm operates in block-wise batch mode, with a proportionate weighting of the natural gradient to speed up the convergence of the algorithm. The TDOA estimation accuracy of the proposed schemes is assessed in highly noisy and reverberant environments for two, three, and four stationary noise sources with speech-weighted spectral envelopes as well as for moving real speech sources. <s> BIB007 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> In sensor networks, passive localization can be performed by exploiting the received signals of unknown emitters. In this paper, the Time of Arrival (TOA) measurements are investigated. Often, the unknown time of emission is eliminated by calculating the difference between two TOA measurements where Time Difference of Arrival (TDOA) measurements are obtained. In TOA processing, additionally, the unknown time of emission is to be estimated. Therefore, the target state is extended by the unknown time of emission. A comparison is performed investigating the attainable accuracies for localization based on TDOA and TOA measurements given by the Cramer-Rao Lower Bound (CRLB). Using the Maximum Likelihood estimator, some characteristic features of the cost functions are investigated indicating a better performance of the TOA approach. But counterintuitive, Monte Carlo simulations do not support this indication, but show the comparability of TDOA and TOA localization. <s> BIB008 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. 
<s> We consider the problem of estimating the time differences of arrival (TDOAs) of multiple sources from a two-channel reverberant audio signal. While several clustering-based or angular spectrum-based methods have been proposed in the literature, only relatively small-scale experimental evaluations restricted to either category of methods have been carried out so far. We design and conduct the first large-scale experimental evaluation of these methods and investigate a two-step procedure combining angular spectra and clustering. In addition, we introduce and evaluate five new TDOA estimation methods inspired from signal-to-noise-ratio (SNR) weighting and probabilistic multi-source modeling techniques that have been successful for anechoic TDOA estimation and audio source separation. For 5cm microphone spacing, the best TDOA estimation performance is achieved by one of the proposed SNR-based angular spectrum methods. For larger spacing, a variant of the generalized cross-correlation with phase transform (GCC-PHAT) method performs best. <s> BIB009 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. <s> This tutorial text gives a unifying perspective on machine learning by covering bothprobabilistic and deterministic approaches -which are based on optimization techniques together with the Bayesian inference approach, whose essence liesin the use of a hierarchy of probabilistic models. The book presents the major machine learning methods as they have been developed in different disciplines, such as statistics, statistical and adaptive signal processing and computer science. Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth, supported by examples and problems, giving an invaluable resource to the student and researcher for understanding and applying machine learning concepts. The book builds carefully from the basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for different courses: pattern recognition, statistical/adaptive signal processing, statistical/Bayesian learning, as well as short courses on sparse modeling, deep learning, and probabilistic graphical models. All major classical techniques: Mean/Least-Squares regression and filtering, Kalman filtering, stochastic approximation and online learning, Bayesian classification, decision trees, logistic regression and boosting methods. The latest trends: Sparsity, convex analysis and optimization, online distributed algorithms, learning in RKH spaces, Bayesian inference, graphical and hidden Markov models, particle filtering, deep learning, dictionary learning and latent variables modeling. Case studies - protein folding prediction, optical character recognition, text authorship identification, fMRI data analysis, change point detection, hyperspectral image unmixing, target localization, channel equalization and echo cancellation, show how the theory can be applied. MATLAB code for all the main algorithms are available on an accompanying website, enabling the reader to experiment with the code. <s> BIB010 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> No knowledge of the sources' ignition times prohibits the Time-of-Flight (TOF) estimation. 
<s> The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton–Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm. <s> BIB011
Due to these constraints, the scope of the review is limited to the family of multilateration methods BIB011 based on the TDOA estimation. Fortunately, it has been shown BIB008 that the TOF and TDOA features perform similarly in terms of localization accuracy. Of particular interest is speaker localization within reverberant (indoor) and/or noisy environments. However, the TDOA estimation in these conditions is a challenging problem in its own right (especially in the multisource setting), and is out of the scope of this document; the interested reader may consult appropriate references, e.g. BIB003 BIB009 . Distances between microphones are considered to be of the same order as the distances between microphones and source(s), hence we are in the near-field setting. The general formulation of the time-domain signal y_m(t), recorded at the m-th microphone, is given by the convolutional sum:

y_m(t) = \sum_{s=1}^{S} \sum_{t'} a_s^{(m)}(t, t') x_s(t - t') + n_m(t),    (1)

where a_s^{(m)}(t, :) is the time-variant Room Impulse Response (RIR) filter, relating the m-th microphone position r_m with the s-th source position r_s, x_s(t) is the signal corresponding to the s-th source, and n_m(t) is the additive noise of the considered microphone. In (1), the microphone gains are absorbed by the RIRs. In practice, various simplifications are commonly used instead of the general expression BIB006 . Commonly, a free-field, time-invariant approximation is adopted; in the single-source case, it is given as follows BIB002 :

y_m(t) = a_m x(t - \tau_m) + n_m(t),    (2)

where the offset \tau_m denotes the TOF value, which is proportional to the source-microphone distance. The TDOA, corresponding to the difference in propagation delay between the microphones m and m', with respect to the source s, is defined as \tau_{m,m'}^{(s)} := \tau_m^{(s)} - \tau_{m'}^{(s)}. Naturally, the TDOA measurements could be corrupted by various types of noise, which negatively affects the performance of localization algorithms. Another cause of TDOA localization errors is the inexact knowledge of microphone positions. As shown in BIB005 , the Cramér-Rao lower bound (CRB) BIB010 of the source location estimate increases rather quickly with the increase in the microphone position "noise" (fortunately, somewhat less fast in the near-field than in the far-field setting). Finally, the localization accuracy also depends on the array geometry BIB004 , which is assumed arbitrary in our case. In homogeneous propagation media, the TDOA values \tau_{m,m'}^{(s)} are directly proportional to the range differences (RDs):

d_{m,m'}^{(s)} := c \tau_{m,m'}^{(s)} = D_m^{(s)} - D_{m'}^{(s)},    (3)

where c is the propagation speed (the speed of sound) and D_m^{(s)} := \| r_s - r_m \| denotes the distance between the source s and the microphone m. Thus, the observed RDs also suffer from measurement errors, usually modeled as an additive noise. Note that the observation model (3) defines the two-sheet hyperboloid with respect to r_s, with foci in r_m and r_m'. Given the observations {d_{m,m'}^{(s)}}, the goal is to estimate the source position(s) r_s. In the multisource setting, multiple sets of RDs are assumed available, and the localization of each source is to be done independently of the rest. Such measurements could be obtained by multisource TDOA estimation algorithms, e.g. BIB007 . Thus, without loss of generality, we will only discuss the single-source setting (s = 1). In the noiseless case, the number of linearly independent RD observations is equal to M - 1, but considering the full set of observations (of size M(M-1)/2) may be useful for alleviating the harmful effects of measurement noise BIB001 . Usually, the first microphone is chosen to be a reference point, e.g. r_1 = 0, where 0 is the null vector. By denoting r := r_s, from (3) we have

d_{1,m'} = \| r \| - \| r - r_{m'} \|,    m' = 2, ..., M.    (4)

In the following sections, we discuss different types of source location estimators and methods to calculate them.
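To make the RD observation model (3)-(4) concrete, the following sketch simulates noisy range differences for a given geometry. It is an illustration only, not taken from any of the cited works: the microphone layout, source position, noise level and function names are arbitrary assumptions.

```python
import numpy as np

def range_differences(src, mics, ref=0, noise_std=0.0, rng=None):
    """Simulate RDs d_{ref,m'} = ||src - mic_ref|| - ||src - mic_m'|| with additive noise."""
    rng = np.random.default_rng() if rng is None else rng
    dists = np.linalg.norm(mics - src, axis=1)      # D_m: source-to-microphone distances
    rd = dists[ref] - np.delete(dists, ref)         # the M-1 RDs w.r.t. the reference microphone
    return rd + noise_std * rng.standard_normal(rd.shape)

# Hypothetical setup: M = 5 microphones in a 4 x 3 x 2.5 m room, one static source.
mics = np.array([[0.0, 0.0, 0.0],
                 [4.0, 0.0, 0.0],
                 [0.0, 3.0, 0.0],
                 [4.0, 3.0, 2.5],
                 [2.0, 1.5, 2.5]])
src = np.array([1.2, 2.1, 1.7])
d = range_differences(src, mics, noise_std=0.01)    # roughly 1 cm of RD noise
print(d)
```

In practice the RDs would of course be obtained from a TDOA estimator (d = c \tau), not synthesized from the unknown geometry; the snippet only serves to fix notation for the estimators discussed below.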
Multilateration -- source localization from range difference measurements: Literature survey <s> MAXIMUM LIKELIHOOD ESTIMATION <s> An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives an explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, spherical-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than spherical-interpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. It provides an explicit solution form that is not available in the beamforming and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method. > <s> BIB001 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> MAXIMUM LIKELIHOOD ESTIMATION <s> A linear-correction least-squares estimation procedure is proposed for the source localization problem under an additive measurement error model. The method, which can be easily implemented in a real-time system with moderate computational complexity, yields an efficient source location estimator without assuming a priori knowledge of noise distribution. Alternative existing estimators, including likelihood-based, spherical intersection, spherical interpolation, and quadratic-correction least-squares estimators, are reviewed and comparisons of their complexity, estimation consistency and efficiency against the Cramer-Rao lower bound are made. Numerical studies demonstrate that the proposed estimator performs better under many practical situations. <s> BIB002 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> MAXIMUM LIKELIHOOD ESTIMATION <s> A fundamental requirement of microphone arrays is the capability of instantaneously locating and continuously tracking a speech sound source. The problem is challenging in practice due to the fact that speech is a nonstationary random process with a wideband spectrum, and because of the simultaneous presence of noise, room reverberation, and other interfering speech sources. This Chapter presents an overview of the research and development on this technology in the last three decades. Focusing on a two-stage framework for speech source localization, we survey and analyze the state-of-the-art time delay estimation (TDE) and source localization algorithms. <s> BIB003
Since the observations (4) are non-linear, a statistically efficient estimate (i.e. the one that attains the CRB) may not be available. The common approach is to seek the maximum likelihood (ML) estimator instead. Let \hat{r} and \hat{d}_{1,m'}(\hat{r}) := \| \hat{r} \| - \| \hat{r} - r_{m'} \| denote the estimated source position and the corresponding RD, respectively. Under the hypothesis that the observation noise is Gaussian, the ML estimator is given as the minimizer of the negative log-likelihood BIB001 BIB002 :

\hat{r}_{ML} = \arg\min_{\hat{r}} \; (d - \hat{d}(\hat{r}))^T \Sigma^{-1} (d - \hat{d}(\hat{r})),    (5)

where d := [d_{1,2}, ..., d_{1,M}]^T collects the observed RDs, \hat{d}(\hat{r}) := [\hat{d}_{1,2}(\hat{r}), ..., \hat{d}_{1,M}(\hat{r})]^T, and \Sigma is the covariance matrix of the measurement noise. Note, however, that the Gaussian noise assumption for the RD measurements may not hold. For instance, digital quantization effects can induce RD errors on the order of 2 cm BIB003 . Moreover, the ML estimators are proven to attain the CRB only in the asymptotic regime, while the number of microphones (i.e. the number of RDs) is often small. Therefore, non-statistical estimators, such as least squares, are often used in practice instead. Anyhow, in this section we discuss two families of methods proposed for TDOA maximum likelihood estimation: the ones that aim at solving the non-convex problem (5) directly, and the ones based on convex relaxations.
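As a sketch of the direct numerical treatment of (5), the snippet below minimizes the (whitened) negative log-likelihood with a generic derivative-free solver. It is not one of the cited algorithms; the geometry, the noise covariance and the initial guess are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def predicted_rd(r, mics, ref=0):
    """Model RDs d_hat_{ref,m'}(r) = ||r - mic_ref|| - ||r - mic_m'||."""
    dists = np.linalg.norm(mics - r, axis=1)
    return dists[ref] - np.delete(dists, ref)

def ml_estimate(d_obs, mics, Sigma, r0):
    """Gaussian ML estimate of the source position from RD observations, cf. (5)."""
    Sigma_inv = np.linalg.inv(Sigma)
    def nll(r):                                   # negative log-likelihood up to constants
        e = d_obs - predicted_rd(r, mics)
        return e @ Sigma_inv @ e
    return minimize(nll, r0, method="Nelder-Mead").x

mics = np.array([[0., 0., 0.], [4., 0., 0.], [0., 3., 0.], [4., 3., 2.5], [2., 1.5, 2.5]])
src = np.array([1.2, 2.1, 1.7])
rng = np.random.default_rng(0)
d_obs = predicted_rd(src, mics) + 0.01 * rng.standard_normal(4)
Sigma = (0.01 ** 2) * np.eye(4)                   # white-noise assumption, Sigma = sigma^2 I
print(ml_estimate(d_obs, mics, Sigma, np.array([2.0, 1.5, 1.0])))
```

Being a local search, the result depends on the initial guess; providing a good starting point is precisely the role played by the closed-form and convex-relaxation methods reviewed next.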
Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> Taylor-series estimation gives a least-sum-squared-error solution to a set of simultaneous linearized algebraic equations. This method is useful in solving multimeasurement mixed-mode position-location problems typical of many navigational applications. While convergence is not proved, examples show that most problems do converge to the correct solution from reasonable initial guesses. The method also provides the statistical spread of the solution errors. <s> BIB001 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> Three noniterative techniques are presented for localizing a single source given a set of noisy range-difference measurements. The localization formulas are derived from linear least-squares "equation error" minimization, and in one case the maximum likelihood bearing estimate is approached. Geometric interpretations of the equation error norms minimized by the three methods are given, and the statistical performances of the three methods are compared via computer simulation. <s> BIB002 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives an explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, spherical-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than spherical-interpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. It provides an explicit solution form that is not available in the beamforming and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method. > <s> BIB003 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> This paper studies the problem of sound source localization in a distributed wireless sensor network formed by mobile general purpose computing and communication devices with audio I/O capabilities. In contrast to well understood localization methods based on dedicated microphone arrays, in our setting sound localization is performed using a sparse array of arbitrary placed sensors (in a typical scenario, localization is performed by several laptops/PDAs co-located in a room). Therefore any far-field assumptions are no longer valid in this situation. Additionally, localization algorithm's performance is affected by uncertainties in sensor position and errors in A/D synchronization. The proposed source localization algorithm consists of two steps. In the first step, time differences of arrivals (TDOAs) are estimated for the microphone pairs, and in the second step the maximum likelihood (ML) estimation for the source position is performed. We evaluate the Cramer-Rao bound (CRB) on the variance of the location estimation and compare it with simulations and experimental results. 
We also discuss the effects of distributed array geometry and errors in sensor positions on the performance of the localization algorithm. The performances of the system are likely to be limited by errors in sensor locations and increase when the microphones have a large aperture with respect to the source. <s> BIB004 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> In source localization from time difference of arrival, the impact of the sensor array geometry to the localization accuracy is not well understood yet. A first rigorous analysis can be found in B. Yang and J. Scheuing (2005). It derived sufficient and necessary conditions for optimum array geometry in terms of minimum Cramer-Rao bound. This paper continues the above work and studies theoretically the localization accuracy of two-dimensional sensor arrays. It addresses different issues: a) optimum vs. uniform angular array b) near-field vs. far-field array c) using all sensor pairs vs. those with a common reference sensor as required from spherical position estimators. The paper ends up with some new insights into the sensor placement problem <s> BIB005 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> Sensors at separate locations measuring either the time difference of arrival (TDOA) or time of arrival (TOA) of the signal from an emitter can determine its position as the intersection of hyperbolae for TDOA and of circles for TOA. Because of measurement noise, the nonlinear localization equations become inconsistent; and the hyperbolae or circles no longer intersect at a single point. It is now necessary to find an emitter position estimate that minimizes its deviations from the true position. Methods that first linearize the equations and then perform gradient searches for the minimum suffer from initial condition sensitivity and convergence difficulty. Starting from the maximum likelihood (ML) function, this paper derives a closed-form approximate solution to the ML equations. When there are three sensors on a straight line, it also gives an exact ML estimate. Simulation experiments have demonstrated that these algorithms are near optimal, attaining the theoretical lower bound for different geometries, and are superior to two other closed form linear estimators. <s> BIB006 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> This paper proposes a new type of algorithm aimed at finding the traditional maximum-likelihood (TML) estimate of the position of a target given time-difference-of-arrival (TDOA) information, contaminated by noise. The novelty lies in the fact that a performance index, akin to but not identical with that in maximum likelihood (ML), is a minimized subject to a number of constraints, which flow from geometric constraints inherent in the underlying problem. The minimization is in a higher dimensional space than for TML, and has the advantage that the algorithm can be very straightforwardly and systematically initialized. Simulation evidence shows that failure to converge to a solution of the localization problem near the true value is less likely to occur with this new algorithm than with TML. This makes it attractive to use in adverse geometric situations. 
<s> BIB007 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> We consider the problem of target localization by a network of passive sensors. When an unknown target emits an acoustic or a radio signal, its position can be localized with multiple sensors using the time difference of arrival (TDOA) information. In this paper, we consider the maximum likelihood formulation of this target localization problem and provide efficient convex relaxations for this nonconvex optimization problem. We also propose a formulation for robust target localization in the presence of sensor location errors. Two Cramer-Rao bounds are derived corresponding to situations with and without sensor node location errors. Simulation results confirm the efficiency and superior performance of the convex relaxation approach as compared to the existing least squares based approach when large sensor node location errors are present. <s> BIB008 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> We consider the source localization problem using time-difference-of-arrival (TDOA) measurements in sensor networks. The maximum likelihood (ML) estimation of the source location can be cast as a nonlinear/nonconvex optimization problem, and its global solution is hardly obtained. In this paper, we resort to the Monte Carlo importance sampling (MCIS) technique to find an approximate global solution to this problem. To obtain an efficient importance function that is used in the technique, we construct a Gaussian distribution and choose its probability density function (pdf) as the importance function. In this process, an initial estimate of the source location is required. We reformulate the problem as a nonlinear robust least squares (LS) problem, and relax it as a second-order cone programming (SOCP), the solution of which is used as the initial estimate. Simulation results show that the proposed method can achieve the Cramer-Rao bound (CRB) accuracy and outperforms several existing methods. <s> BIB009 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> In sensor networks, passive localization can be performed by exploiting the received signals of unknown emitters. In this paper, the Time of Arrival (TOA) measurements are investigated. Often, the unknown time of emission is eliminated by calculating the difference between two TOA measurements where Time Difference of Arrival (TDOA) measurements are obtained. In TOA processing, additionally, the unknown time of emission is to be estimated. Therefore, the target state is extended by the unknown time of emission. A comparison is performed investigating the attainable accuracies for localization based on TDOA and TOA measurements given by the Cramer-Rao Lower Bound (CRLB). Using the Maximum Likelihood estimator, some characteristic features of the cost functions are investigated indicating a better performance of the TOA approach. But counterintuitive, Monte Carlo simulations do not support this indication, but show the comparability of TDOA and TOA localization. <s> BIB010 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> This paper proposes two methods to reduce the bias of the well-known algebraic explicit solution (Chan and Ho, "A simple and efficient estimator for hyperbolic location," IEEE Trans. 
Signal Process., vol. 42, pp. 1905-1915, Aug. 1994) for source localization using TDOA. Bias of a source location estimate is significant when the measurement noise is large and the geolocation geometry is poor. Bias also dominates performance when multiple times of independent measurements are available such as in UWB localization or in target tracking. The paper starts by deriving the bias of the source location estimate from Chan and Ho. The bias is found to be considerably larger than that of the Maximum Likelihood Estimator. Two methods, called BiasSub and BiasRed, are developed to reduce the bias. The BiasSub method subtracts the expected bias from the solution of Chan and Ho's work, where the expected bias is approximated by the theoretical bias using the estimated source location and noisy data measurements. The BiasRed method augments the equation error formulation and imposes a constraint to improve the source location estimate. The BiasSub method requires the exact knowledge of the noise covariance matrix and BiasRed only needs the structure of it. Analysis shows that both methods reduce the bias considerably and achieve the CRLB performance for distant source when the noise is Gaussian and small. The BiasSub method can nearly eliminate the bias and the BiasRed method is able to lower the bias to the same level as the Maximum Likelihood Estimator. The BiasRed method is extended for TDOA and FDOA positioning. Simulations corroborate the performance of the proposed methods. <s> BIB011 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> We present a general scheme for analyzing the performance of a generic localization algorithm for multilateration (MLAT) systems (or for other distributed sensor, passive localization technology). MLAT systems are used for airport surface surveillance and are based on time difference of arrival measurements of Mode S signals (replies and 1,090 MHz extended squitter, or 1090ES). In the paper, we propose to consider a localization algorithm as composed of two components: a data model and a numerical method, both being properly defined and described. In this way, the performance of the localization algorithm can be related to the proper combination of statistical and numerical performances. We present and review a set of data models and numerical methods that can describe most localization algorithms. We also select a set of existing localization algorithms that can be considered as the most relevant, and we describe them under the proposed classification. We show that the performance of any localization algorithm has two components, i.e., a statistical one and a numerical one. The statistical performance is related to providing unbiased and minimum variance solutions, while the numerical one is related to ensuring the convergence of the solution. Furthermore, we show that a robust localization (i.e., statistically and numerically efficient) strategy, for airport surface surveillance, has to be composed of two specific kind of algorithms. Finally, an accuracy analysis, by using real data, is performed for the analyzed algorithms; some general guidelines are drawn and conclusions are provided. 
<s> BIB012 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> This tutorial text gives a unifying perspective on machine learning by covering bothprobabilistic and deterministic approaches -which are based on optimization techniques together with the Bayesian inference approach, whose essence liesin the use of a hierarchy of probabilistic models. The book presents the major machine learning methods as they have been developed in different disciplines, such as statistics, statistical and adaptive signal processing and computer science. Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth, supported by examples and problems, giving an invaluable resource to the student and researcher for understanding and applying machine learning concepts. The book builds carefully from the basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for different courses: pattern recognition, statistical/adaptive signal processing, statistical/Bayesian learning, as well as short courses on sparse modeling, deep learning, and probabilistic graphical models. All major classical techniques: Mean/Least-Squares regression and filtering, Kalman filtering, stochastic approximation and online learning, Bayesian classification, decision trees, logistic regression and boosting methods. The latest trends: Sparsity, convex analysis and optimization, online distributed algorithms, learning in RKH spaces, Bayesian inference, graphical and hidden Markov models, particle filtering, deep learning, dictionary learning and latent variables modeling. Case studies - protein folding prediction, optical character recognition, text authorship identification, fMRI data analysis, change point detection, hyperspectral image unmixing, target localization, channel equalization and echo cancellation, show how the theory can be applied. MATLAB code for all the main algorithms are available on an accompanying website, enabling the reader to experiment with the code. <s> BIB013 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Direct methods <s> The problem of estimating receiver or sender node positions from measured receiver-sender distances is a key issue in different applications such as microphone array calibration, radio antenna array calibration, mapping and positioning using UWB or using round-trip-time measurements between mobile phones and WiFi-units. In this paper we address the problem of optimally estimating a receiver position given a number of distance measurements to known sender positions, so called trilateration. We show that this problem can be rephrased as an eigenvalue problem. We also address different error models and the multilateration setting where an additional offset is also unknown, and show that these problems can be modeled using the same framework. <s> BIB014
The problem (5) is difficult to solve directly, due to the nonlinear dependence of the RDs {\hat{d}_{1,m'}(\hat{r})} on the position variable \hat{r}. Early approaches, based on iterative schemes such as linearized gradient descent and the Levenberg-Marquardt algorithm BIB001 BIB004 , suffer from sensitivity to initialization, increased computational complexity and ill-conditioning (though the latter could be improved using regularization techniques BIB012 ). The method proposed in BIB007 exploits the correlation among noises within different TDOA measurements, and defines a constrained ML cost function tackled by a Newton-like algorithm. According to simulation results, it is more robust to adverse localization geometries BIB005 than BIB001 , or the least squares methods BIB002 . Another advantage of this method is the straightforward way to provide the initial estimate (however, as usual, global convergence cannot be guaranteed). In the pioneering article BIB003 , the authors proposed a closed-form, two-stage approach that approximates the solution of BIB010 . Firstly, the (weighted) unconstrained least-squares solution (to be explained in the next section) is computed, which is then improved by exploiting the relation between the estimates of the position vector and its magnitude. The minimal number of microphones required by the unconstrained LS estimation is 5 in three dimensions. It has been shown BIB003 that the method attains the CRB at high to moderate Signal-to-Noise Ratios (SNRs). Unfortunately, it suffers from a nonlinear "threshold effect": its performance quickly deteriorates at low SNRs. Instead, an approximate, but more stable version of this ML method has been proposed in BIB006 . In addition, the estimator BIB003 comes with a large bias BIB012 , which cannot be reduced by increasing the amount of measurements. This bias has been theoretically evaluated and reduced in BIB011 . The method proposed in BIB009 uses Monte Carlo importance sampling techniques BIB013 to approximate the solution of the problem BIB010 . As an initial point, it uses the estimate computed by a convex relaxation method. According to simulation experiments, its localization performance is on par with the convex method BIB008 , but the computational complexity is much lower. A very recent article BIB014 proposes a linearization approach that casts the original problem into an eigenvalue problem, which can be solved optimally in closed form. Additionally, the authors propose an Iterative Reweighted Least Squares scheme that approximates the ML estimate for different noise distributions.
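To illustrate the iterative linearization idea behind these early schemes, here is a generic damped Gauss-Newton (Levenberg-Marquardt style) iteration on the RD residuals. It is a from-scratch sketch rather than a reproduction of any cited method; the damping constant, iteration count and example geometry are arbitrary assumptions.

```python
import numpy as np

def gauss_newton_tdoa(d_obs, mics, r0, n_iter=20, damping=1e-6):
    """Damped Gauss-Newton for min_r sum_m' (d_obs - (||r - mic_1|| - ||r - mic_m'||))^2."""
    r = np.asarray(r0, dtype=float)
    for _ in range(n_iter):
        diff = r - mics                       # (M, 3)
        dists = np.linalg.norm(diff, axis=1)  # ||r - mic_m||
        resid = d_obs - (dists[0] - dists[1:])
        units = diff / dists[:, None]         # unit vectors from the microphones towards r
        J = units[0] - units[1:]              # Jacobian of the predicted RDs w.r.t. r
        H = J.T @ J + damping * np.eye(3)     # damped normal equations
        r = r + np.linalg.solve(H, J.T @ resid)
    return r

mics = np.array([[0., 0., 0.], [4., 0., 0.], [0., 3., 0.], [4., 3., 2.5], [2., 1.5, 2.5]])
src = np.array([1.2, 2.1, 1.7])
dists = np.linalg.norm(mics - src, axis=1)
print(gauss_newton_tdoa(dists[0] - dists[1:], mics, r0=[2.0, 1.5, 1.0]))
```

As stressed above, such iterations inherit the usual caveats: with a poor initial guess or an adverse geometry they may converge slowly, to a spurious local minimum, or not at all.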
Multilateration -- source localization from range difference measurements: Literature survey <s> Convex relaxations <s> Taylor-series estimation gives a least-sum-squared-error solution to a set of simultaneous linearized algebraic equations. This method is useful in solving multimeasurement mixed-mode position-location problems typical of many navigational applications. While convergence is not proved, examples show that most problems do converge to the correct solution from reasonable initial guesses. The method also provides the statistical spread of the solution errors. <s> BIB001 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Convex relaxations <s> An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives an explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, spherical-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than spherical-interpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. It provides an explicit solution form that is not available in the beamforming and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method. > <s> BIB002 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Convex relaxations <s> A disadvantage of the SDP (semidefinite programming) relaxation method for quadratic and/or combinatorial optimization problems lies in its expensive computational cost. This paper proposes a SOCP (second-order-cone programming) relaxation method, which strengthens the lift-and-project LP (linear programming) relaxation method by adding convex quadratic valid inequalities for the positive semidefinite cone involved in the SDP relaxation. Numerical experiments show that our SOCP relaxation is a reasonable compromise between the effectiveness of the SDP relaxation and the low computational cost of the lift-and-project LP relaxation. <s> BIB003 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Convex relaxations <s> Convex optimization problems arise frequently in many different fields. A comprehensive introduction to the subject, this book shows in detail how such problems can be solved numerically with great efficiency. The focus is on recognizing convex optimization problems and then finding the most appropriate technique for solving them. The text contains many worked examples and homework exercises and will appeal to students, researchers and practitioners in fields such as engineering, computer science, mathematics, statistics, finance, and economics. <s> BIB004 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Convex relaxations <s> A common technique for passive source localization is to utilize the range-difference (RD) measurements between the source and several spatially separated sensors. 
The RD information defines a set of hyperbolic equations from which the source position can be calculated with the knowledge of the sensor positions. Under the standard assumption of Gaussian distributed RD measurement errors, it is well known that the maximum-likelihood (ML) position estimation is achieved by minimizing a multimodal cost function which corresponds to a difficult task. In this correspondence, we propose to approximate the nonconvex ML optimization by relaxing it to a convex optimization problem using semidefinite programming. A semidefinite relaxation RD-based positioning algorithm, which makes use of the admissible source position information, is proposed and its estimation performance is contrasted with the two-step weighted least squares method and nonlinear least squares estimator as well as Cramer-Rao lower bound. <s> BIB005 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Convex relaxations <s> We consider the problem of target localization by a network of passive sensors. When an unknown target emits an acoustic or a radio signal, its position can be localized with multiple sensors using the time difference of arrival (TDOA) information. In this paper, we consider the maximum likelihood formulation of this target localization problem and provide efficient convex relaxations for this nonconvex optimization problem. We also propose a formulation for robust target localization in the presence of sensor location errors. Two Cramer-Rao bounds are derived corresponding to situations with and without sensor node location errors. Simulation results confirm the efficiency and superior performance of the convex relaxation approach as compared to the existing least squares based approach when large sensor node location errors are present. <s> BIB006 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Convex relaxations <s> This paper investigates the source localization problem based on time difference of arrival (TDOA) measurements in the presence of random noises in both the TDOA and sensor location measurements. We formulate the localization problem as a constrained weighted least squares problem which is an indefinite quadratically constrained quadratic programming problem. Owing to the non-convex nature of this problem, it is difficult to obtain a global solution. However, by exploiting the hidden convexity of this problem, we reformulate it to a convex optimization problem. We further derive a primal-dual interior point algorithm to reach a global solution efficiently. The proposed method is shown to analytically achieve the Cramer-Rao lower bound (CRLB) under some mild approximations. Moreover, when the location geometry is not desirable, the proposed algorithm can efficiently avoid the ill-conditioning problem. Simulations are used to corroborate the theoretical results which demonstrate the good performance, robustness and high efficiency of the proposed method. HighlightsWe explore the source localization problem using noisy TDOA measurements in the presence of random sensor position errors.By exploiting the hidden convexity, the formulated non-convex localization problem is transformed to a convex optimization problem.The proposed convex localization algorithm analytically achieves the CRLB under some mild approximations.The proposed algorithm can efficiently avoid the ill-conditioning problem. <s> BIB007
Another important line of work is formed by methods based on convex relaxations of the ML estimation problem. In other words, the original problem is approximated by a convex one BIB004 , which is usually far easier to solve. Two families of approaches dominate this field: methods based on semidefinite programming (SDP), and the ones relaxing the original task into a second-order cone optimization problem (SOCP). In the former, the non-convex quadratic problem (5) is first lifted such that the non-convexity appears as a rank-1 constraint, which is then substituted by a positive semidefinite one. Lifting is a problem reformulation by the variable substitution G = gg^T, where g is the original optimization variable (the term lifting is used to emphasize that the problem is now defined in a higher-dimensional space). On the other hand, solving SDP optimization problems can be computationally expensive, and the SOCP framework has been proposed as a compromise between approximation quality and computational complexity (cf. BIB003 for technical details). One of the first convex relaxation approaches for TDOA localization is BIB005 , based on SDP. The algorithm requires the knowledge of the microphone closest to the source, in order to ensure that all RDs (with that microphone as a reference) are positive. The article BIB006 discusses three convex relaxation methods. The first one, based on an SOCP relaxation, is computationally efficient, but restricts the solution to the convex hull BIB004 of the microphone positions. The other two, SDP-based, remove this restriction, but are somewhat more computationally demanding. In addition, one of these is a robust version: it minimizes the worst-case error due to imprecise microphone locations. The latter requires tuning of several hyperparameters, among which is the variance of the microphone positioning error. All three versions are based on the white Gaussian noise model for the TDOA measurements; however, whitening could be applied in order to support the correlated noise case. However, the SDP solutions are not the final output of the algorithms, but are used to initialize a nonlinear iterative scheme, such as BIB001 . Interestingly, a recent article BIB007 has shown that the ideas of the direct approach BIB002 and the constrained least-squares approach (Section 2.1) could be mixed together. Moreover, the cost function can be cast as a convex problem, for which an interior-point method has been proposed. However, in practice, it is a compound algorithm which iteratively solves a sequence of convex problems in order to re-calculate a weighting matrix dependent on the estimated source position. The accuracy depends on the number of iterations, which, in turn, increases the computational complexity. As for BIB002 , it requires 5 microphones for 3D localization.
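The lifting step can be sketched schematically as follows (a generic template, written here with hypothetical matrices Q, q, A_i, a_i, b_i standing for whatever quadratic form a particular paper derives from (5), rather than the exact formulation of any cited work):

```latex
% Schematic lifting and SDP relaxation of a non-convex quadratic problem.
\begin{align*}
\text{original: } \quad & \min_{g} \; g^\top Q g + 2 q^\top g
    \quad \text{s.t.} \quad g^\top A_i g + 2 a_i^\top g = b_i, \quad i = 1, \dots, I, \\
\text{lifted: } \quad & \min_{g,\, G} \; \operatorname{tr}(Q G) + 2 q^\top g
    \quad \text{s.t.} \quad \operatorname{tr}(A_i G) + 2 a_i^\top g = b_i, \quad G = g g^\top, \\
\text{relaxed: } \quad & G = g g^\top \;\text{(rank-1, non-convex)} \;\;\longrightarrow\;\;
    G \succeq g g^\top
    \;\Longleftrightarrow\;
    \begin{bmatrix} G & g \\ g^\top & 1 \end{bmatrix} \succeq 0 .
\end{align*}
```

If the relaxed problem happens to return a rank-1 matrix G, the original problem is solved exactly; otherwise g (or a suitable projection of G) serves as an approximate solution or as an initializer for a local method, as noted above.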
Multilateration -- source localization from range difference measurements: Literature survey <s> LEAST-SQUARES ESTIMATION <s> This paper proposes a new type of algorithm aimed at finding the traditional maximum-likelihood (TML) estimate of the position of a target given time-difference-of-arrival (TDOA) information, contaminated by noise. The novelty lies in the fact that a performance index, akin to but not identical with that in maximum likelihood (ML), is a minimized subject to a number of constraints, which flow from geometric constraints inherent in the underlying problem. The minimization is in a higher dimensional space than for TML, and has the advantage that the algorithm can be very straightforwardly and systematically initialized. Simulation evidence shows that failure to converge to a solution of the localization problem near the true value is less likely to occur with this new algorithm than with TML. This makes it attractive to use in adverse geometric situations. <s> BIB001 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> LEAST-SQUARES ESTIMATION <s> The iterative least squares method, or Gauss-Newton method, is a standard algorithm for solving general nonlinear systems of equations, but it is often said to be unsuited for mobile positioning with e.g. ranges or range differences or angle-of-arrival measurements. Instead, various closed-form methods have been proposed and are constantly being reinvented for the problem, claiming to outperform Gauss-Newton. We list some common conceptual and computation pitfalls for closedform solvers, and present an extensive comparison of different closed-form solvers against a properly implemented Gauss-Newton solver. We give all the algorithms in similar notations and implementable form and a couple of novel closed-form methods and implementation details. The Gauss-Newton method strengthened with a regularisation term is found to be as accurate as any of the closed-form methods, and to have comparable computation load, while being simpler to implement and avoiding most of the pitfalls. <s> BIB002 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> LEAST-SQUARES ESTIMATION <s> This paper proposes two methods to reduce the bias of the well-known algebraic explicit solution (Chan and Ho, "A simple and efficient estimator for hyperbolic location," IEEE Trans. Signal Process., vol. 42, pp. 1905-1915, Aug. 1994) for source localization using TDOA. Bias of a source location estimate is significant when the measurement noise is large and the geolocation geometry is poor. Bias also dominates performance when multiple times of independent measurements are available such as in UWB localization or in target tracking. The paper starts by deriving the bias of the source location estimate from Chan and Ho. The bias is found to be considerably larger than that of the Maximum Likelihood Estimator. Two methods, called BiasSub and BiasRed, are developed to reduce the bias. The BiasSub method subtracts the expected bias from the solution of Chan and Ho's work, where the expected bias is approximated by the theoretical bias using the estimated source location and noisy data measurements. The BiasRed method augments the equation error formulation and imposes a constraint to improve the source location estimate. The BiasSub method requires the exact knowledge of the noise covariance matrix and BiasRed only needs the structure of it. 
Analysis shows that both methods reduce the bias considerably and achieve the CRLB performance for distant source when the noise is Gaussian and small. The BiasSub method can nearly eliminate the bias and the BiasRed method is able to lower the bias to the same level as the Maximum Likelihood Estimator. The BiasRed method is extended for TDOA and FDOA positioning. Simulations corroborate the performance of the proposed methods. <s> BIB003
Largely due to computational convenience, the least-squares (LS) estimation is often a preferred parameter estimation approach. It is noteworthy that all LS approaches optimize a somewhat "artificial" estimation objective, which can induce large errors in very low SNR conditions, when the measurement noise is not white, and/or for some adverse array geometries BIB001 BIB002 BIB003 . Three types of cost functions are discussed: hyperbolic, spherical and conic LS.
Multilateration -- source localization from range difference measurements: Literature survey <s> Hyperbolic LS <s> A derivation of the principal algorithms and an analysis of the performance of the two most important passive location systems for stationary transmitters, hyperbolic location systems and directionfinding location systems, are presented. The concentration ellipse, the circular error probability, and the geometric dilution of precision are defined and related to the location-system and received-signal characteristics. Doppler and other passive location systems are briefly discussed. <s> BIB001 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Hyperbolic LS <s> The problem of position estimation from time difference of arrival (TDOA) measurements occurs in a range of applications from wireless communication networks to electronic warfare positioning. Correlation analysis of the transmitted signal to two receivers gives rise to one hyperbolic function. With more than two receivers, we can compute more hyperbolic functions, which ideally intersect in one unique point. With TDOA measurement uncertainty, we face a non-linear estimation problem. We suggest and compare a Monte Carlo based method for positioning and a gradient search algorithm using a nonlinear least squares framework. The former has the feature of being easily extended to a dynamic framework where a motion model of the transmitter is included. A small simulation study is presented. <s> BIB002 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Hyperbolic LS <s> Time delay estimation has been a research topic of significant practical importance in many fields (radar, sonar, seismology, geophysics, ultrasonics, hands-free communications, etc.). It is a first stage that feeds into subsequent processing blocks for identifying, localizing, and tracking radiating sources. This area has made remarkable advances in the past few decades, and is continuing to progress, with an aim to create processors that are tolerant to both noise and reverberation. This paper presents a systematic overview of the state-of-the-art of time-delay-estimation algorithms ranging from the simple cross-correlation method to the advanced blind channel identification based techniques. We discuss the pros and cons of each individual algorithm, and outline their inherent relationships. We also provide experimental results to illustrate their performance differences in room acoustic environments where reverberation and noise are commonly encountered. <s> BIB003
The goal is to minimize the sum of squared differences \epsilon_h between the observed and estimated RDs BIB003 :

\epsilon_h(\hat{r}) = \sum_{m'=2}^{M} ( d_{1,m'} - \hat{d}_{1,m'}(\hat{r}) )^2,    (6)

which is analogous to the ML estimation problem (5) for \Sigma = I, with I being the identity matrix. Thus, in the case of white Gaussian noise, the hyperbolic LS solution coincides with the ML solution. Otherwise, solving (6) comes down to finding the point \hat{r} whose distance to all hyperboloids d_{1,m'}, defined in (4), is minimal. However, the hyperbolic LS problem is also non-convex, and its global solution cannot be guaranteed. Instead, local minimizers are found by iterative procedures, such as (nonlinear) gradient descent or particle filtering BIB001 BIB002 . Obviously, the quality of the output of such algorithms depends on their initial estimates, the choice of which is usually not mathematical, but rather application-based.
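A minimal sketch of such a local solver for (6), using a generic nonlinear least-squares routine with a handful of starting points as a crude safeguard against local minima (geometry, starting points and function names are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

def rd_residuals(r, d_obs, mics):
    """Residuals d_{1,m'} - d_hat_{1,m'}(r) of the hyperbolic LS cost (6)."""
    dists = np.linalg.norm(mics - r, axis=1)
    return d_obs - (dists[0] - dists[1:])

def hyperbolic_ls(d_obs, mics, starts):
    """Run the local solver from several starting points and keep the best fit."""
    fits = [least_squares(rd_residuals, r0, args=(d_obs, mics)) for r0 in starts]
    return min(fits, key=lambda f: f.cost).x

mics = np.array([[0., 0., 0.], [4., 0., 0.], [0., 3., 0.], [4., 3., 2.5], [2., 1.5, 2.5]])
src = np.array([1.2, 2.1, 1.7])
d_obs = np.linalg.norm(mics - src, axis=1)[0] - np.linalg.norm(mics - src, axis=1)[1:]
starts = [np.array([2.0, 1.5, 1.2]), np.array([0.5, 0.5, 0.5]), np.array([3.5, 2.5, 2.0])]
print(hyperbolic_ls(d_obs, mics, starts))
```

Multi-start initialization is only a heuristic; as the text notes, the choice of initial estimates is typically dictated by the application (e.g. the previous position of a tracked speaker).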
Multilateration -- source localization from range difference measurements: Literature survey <s> Spherical LS <s> Three noniterative techniques are presented for localizing a single source given a set of noisy range-difference measurements. The localization formulas are derived from linear least-squares "equation error" minimization, and in one case the maximum likelihood bearing estimate is approached. Geometric interpretations of the equation error norms minimized by the three methods are given, and the statistical performances of the three methods are compared via computer simulation. <s> BIB001 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Spherical LS <s> An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives an explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, spherical-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than spherical-interpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. It provides an explicit solution form that is not available in the beamforming and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method. > <s> BIB002 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Spherical LS <s> A linear-correction least-squares estimation procedure is proposed for the source localization problem under an additive measurement error model. The method, which can be easily implemented in a real-time system with moderate computational complexity, yields an efficient source location estimator without assuming a priori knowledge of noise distribution. Alternative existing estimators, including likelihood-based, spherical intersection, spherical interpolation, and quadratic-correction least-squares estimators, are reviewed and comparisons of their complexity, estimation consistency and efficiency against the Cramer-Rao lower bound are made. Numerical studies demonstrate that the proposed estimator performs better under many practical situations. <s> BIB003 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Spherical LS <s> This paper considers the problem of locating a radiating source from range-difference observations. This specific source localization problem has received significant attention for at least 20 years, and several solutions have been proposed to solve it either approximately or exactly. However, some of these solutions have not been described clearly, and confusions seem to persist. This paper aims to clarify and streamline the most successful solutions. 
It introduces a new closed-form approximate solution, and briefly comments on the related problem of source localization from energy or range measurements <s> BIB004 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Spherical LS <s> We consider least squares (LS) approaches for locating a radiating source from range measurements (which we call R-LS) or from range-difference measurements (RD-LS) collected using an array of passive sensors. We also consider LS approaches based on squared range observations (SR-LS) and based on squared range-difference measurements (SRD-LS). Despite the fact that the resulting optimization problems are nonconvex, we provide exact solution procedures for efficiently computing the SR-LS and SRD-LS estimates. Numerical simulations suggest that the exact SR-LS and SRD-LS estimates outperform existing approximations of the SR-LS and SRD-LS solutions as well as approximations of the R-LS and RD-LS solutions which are based on a semidefinite relaxation. <s> BIB005 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Spherical LS <s> In a recent letter by Gillette and Silverman, a linear closed-form algorithm (the GS algorithm) for source localization from time-differences of arrival is proposed. It is claimed that ldquoit is so simple that we were surprised that, until very recently, there have been no other solutions similar to ours.rdquo This comment has two objectives. We point out imprecisions of statement, and we give additional references of closely related works, some of them presenting the same results. <s> BIB006 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Spherical LS <s> Microphone arrays sample the sound field in both space and time with the major objective being the extraction of the signal propagating from a desired direction-of-arrival (DOA). In order to reconstruct a spatial sinusoid from a set of discrete samples, the spatial sampling must occur at a rate greater than a half of the wavelength of the sinusoid. This principle has long been adapted to the microphone array context: in order to form an unambiguous beampattern, the spacing between elements in a microphone array needs to conform to this spatial Nyquist criterion. The implicit assumption behind the narrowband beampattern is that one may use linearity and Fourier analysis to describe the response of the array to an arbitrary wideband plane wave. In this paper, this assumption is analyzed. A formula for the broadband beampattern is derived. It is shown that in order to quantify the spatial filtering abilities of a broadband array, the incoming signal's bifrequency spectrum must be taken into account, particularly for nonstationary signals such as speech. Multi-dimensional Fourier analysis is then employed to derive the broadband spatial transform, which is shown to be the limiting case of the broadband beampattern as the number of sensors tends to infinity. The conditions for aliasing in broadband arrays are then determined by analyzing the effect of computing the broadband spatial transform with a discrete spatial aperture. It is revealed that the spatial Nyquist criterion has little importance for microphone arrays. 
Finally, simulation results show that the well-known steered response power (SRP) method is formulated with respect to stationary signals, and that modifications are necessary to properly form steered beams in nonstationary signal environments. <s> BIB007 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Spherical LS <s> We consider the source localization problem using time-difference-of-arrival (TDOA) measurements in sensor networks. The maximum likelihood (ML) estimation of the source location can be cast as a nonlinear/nonconvex optimization problem, and its global solution is hardly obtained. In this paper, we resort to the Monte Carlo importance sampling (MCIS) technique to find an approximate global solution to this problem. To obtain an efficient importance function that is used in the technique, we construct a Gaussian distribution and choose its probability density function (pdf) as the importance function. In this process, an initial estimate of the source location is required. We reformulate the problem as a nonlinear robust least squares (LS) problem, and relax it as a second-order cone programming (SOCP), the solution of which is used as the initial estimate. Simulation results show that the proposed method can achieve the Cramer-Rao bound (CRB) accuracy and outperforms several existing methods. <s> BIB008
By squaring the idealized RD measurement expression (4), followed by some simple algebraic manipulations, we obtain an expression whose interest lies in decoupling the position vector from its magnitude; the two are then replaced by their estimates r̂ and D̂ := ||r̂||, respectively. The goal now becomes driving the sum of the left hand sides (over all microphones) to zero, which leads to the following (compactly written) constrained optimization problem BIB005 over the stacked unknown vector ĉ, where ĉ(1) denotes the first entry of the column vector ĉ. In the literature, the problem above is tackled as: Unconstrained LS: by ignoring the constraints relating the position estimate r̂ and its magnitude D̂, the problem (7) admits a closed-form solution ĉ, obtained through the pseudoinverse of the matrix Φ. As pointed out in BIB004 BIB006 , several well-known estimation algorithms BIB001 actually yield the unconstrained LS estimate. A minimum of M = 5 microphones (i.e. four RD measurements) is required in three dimensions in order for Φ^T Φ to be invertible. Constrained LS: While the unconstrained LS is simple and computationally efficient, its estimate is known to have a large variance compared to the CRB BIB006 , hence the interest in solving the constrained problem. Unfortunately, (8) is non-convex due to the quadratic constraints. To directly incorporate the constraint(s), a Lagrangian-based iterative method has been proposed in BIB003 , albeit without any performance guarantees. Later, in their seminal paper BIB005 , Beck and Stoica provided a closed-form global solution of the problem, and demonstrated that it gives an orders-of-magnitude more accurate solution (at an increased computational cost) than the unconstrained LS estimator. Moreover, the results in BIB008 indicate that it is generally more accurate than the two-stage ML solution BIB002 .
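To make the unconstrained variant concrete, the following minimal numpy sketch forms the linear system implied by the squared-RD manipulation and solves it through the pseudoinverse. The construction (microphone 0 taken as the reference, unknowns stacked as the source position followed by its distance to the reference) and all variable names are illustrative assumptions and do not reproduce the exact notation of (4)-(8).

```python
import numpy as np

def spherical_ls_unconstrained(mics, rd):
    """
    Unconstrained spherical LS estimate from range differences (sketch).
    mics : (M, 3) microphone positions; microphone 0 is taken as the reference.
    rd   : (M-1,) range differences d_m = D_m - D_0 for m = 1..M-1
           (TDOAs multiplied by the propagation speed).
    Returns the estimated source position and the estimated reference range D_0.
    Assumes M >= 5 so that Phi^T Phi is invertible (four RD measurements in 3D).
    """
    r0, rm = mics[0], mics[1:]
    # Linear system Phi @ c = b in the stacked unknown c = [x, y, z, D_0]
    Phi = np.hstack([2.0 * (rm - r0), 2.0 * rd[:, None]])
    b = np.sum(rm**2, axis=1) - np.sum(r0**2) - rd**2
    c, *_ = np.linalg.lstsq(Phi, b, rcond=None)   # pseudoinverse solution
    return c[:3], c[3]

# Toy check: noiseless range differences from a known source
mics = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [1., 1., 1.]])
src = np.array([0.3, -0.2, 0.7])
D = np.linalg.norm(src - mics, axis=1)
pos, D0 = spherical_ls_unconstrained(mics, D[1:] - D[0])
print(pos, D0)   # recovers src and D[0] up to numerical error
```

With noisy measurements, the constrained (Beck-Stoica) solution discussed above is the more accurate, if costlier, alternative.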
Multilateration -- source localization from range difference measurements: Literature survey <s> Conic LS <s> An array of n sensors at known locations receives the signal from an emitter whose location is desired. By measuring the time differences of arrival (TDOAs) between pairs of sensors, the range differences (RDs) are available and it becomes possible to compute the emitter location. Traditionally geometric solutions have been based on intersections of hyperbolic lines of position (LOPs). Each measured TDOA provides one hyperbolic LOP. In the absence of measurement noise, the RDs taken around any closed circuit of sensors add to zero. A bivector is introduced from exterior algebra such that when noise is present, the measured bivector of RDs is generally infeasible in that there does not correspond any actual emitter position exhibiting them. A circuital sum trivector is also introduced to represent the infeasibility; a null trivector implies a feasible RD bivector. A 2-step RD Emitter Location algorithm is proposed which exploits this implicit structure. Given the observed noisy RD bivector Δ, (1) calculate the nearest feasible RD bivector Δ̂, and (2) calculate the nearest point to the (n choose 3) planes of position, one for each of the triads of elements of Δ̂. Both algorithmic steps are least squares (LS) and finite. Numerical comparisons in simulation show a substantial improvement in location error variances. <s> BIB001 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Conic LS <s> We consider a digital signal processing sensor array system, based on randomly distributed sensor nodes, for surveillance and source localization applications. In most array processing the sensor array geometry is fixed and known and the steering array vector/manifold information is used in beamformation. In this system, array calibration may be impractical due to unknown placement and orientation of the sensors with unknown frequency/spatial responses. This paper proposes a blind beamforming technique, using only the measured sensor data, to form either a sample data or a sample correlation matrix. The maximum power collection criterion is used to obtain array weights from the dominant eigenvector associated with the largest eigenvalue of a matrix eigenvalue problem. Theoretical justification of this approach uses a generalization of Szego's (1958) theory of the asymptotic distribution of eigenvalues of the Toeplitz form. An efficient blind beamforming time delay estimate of the dominant source is proposed. Source localization based on a least squares (LS) method for time delay estimation is also given. Results based on analysis, simulation, and measured acoustical sensor data show the effectiveness of this beamforming technique for signal enhancement and space-time filtering. <s> BIB002 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Conic LS <s> This paper presents a synthesizable VHDL model of a three-dimensional hyperbolic positioning system algorithm. The algorithm obtains an exact solution for the three-dimensional location of a mobile given the locations of four fixed stations (like a global positioning system [GPS] satellite or a base station in a cell) and the signal time of arrival (TOA) from the mobile to each station. The detailed derivation of the steps required in the algorithm is presented.
A VHDL model of the algorithm was implemented and simulated using the IEEE numeric_std package. Signals were described by a 32-bit vector. Simulation results predict location of the mobile is off by 1 m for best case and off by 36 m for worst case. A C++ program using real numbers was used as a benchmark for the accuracy and precision of the VHDL model. The model can be easily synthesized for low power hardware implementation. <s> BIB003 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Conic LS <s> The accuracy of a source location estimate is very sensitive to the accurate knowledge of receiver locations. This paper performs analysis and develops a solution for locating a moving source using time-difference-of-arrival (TDOA) and frequency-difference-of-arrival (FDOA) measurements in the presence of random errors in receiver locations. The analysis starts with the Cramér-Rao lower bound (CRLB) for the problem, and derives the increase in mean-square error (MSE) in source location estimate if the receiver locations are assumed correct but in fact have error. A solution is then proposed that takes the receiver error into account to reduce the estimation error, and it is shown analytically, under some mild approximations, to achieve the CRLB accuracy for far-field sources. The proposed solution is closed form, computationally efficient, and does not have divergence problem as in iterative techniques. Simulations corroborate the theoretical results and the good performance of the proposed method <s> BIB004 </s> Multilateration -- source localization from range difference measurements: Literature survey <s> Conic LS <s> This tutorial text gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches - which are based on optimization techniques - together with the Bayesian inference approach, whose essence lies in the use of a hierarchy of probabilistic models. The book presents the major machine learning methods as they have been developed in different disciplines, such as statistics, statistical and adaptive signal processing and computer science. Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth, supported by examples and problems, giving an invaluable resource to the student and researcher for understanding and applying machine learning concepts. The book builds carefully from the basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for different courses: pattern recognition, statistical/adaptive signal processing, statistical/Bayesian learning, as well as short courses on sparse modeling, deep learning, and probabilistic graphical models. All major classical techniques: Mean/Least-Squares regression and filtering, Kalman filtering, stochastic approximation and online learning, Bayesian classification, decision trees, logistic regression and boosting methods. The latest trends: Sparsity, convex analysis and optimization, online distributed algorithms, learning in RKH spaces, Bayesian inference, graphical and hidden Markov models, particle filtering, deep learning, dictionary learning and latent variables modeling.
Case studies - protein folding prediction, optical character recognition, text authorship identification, fMRI data analysis, change point detection, hyperspectral image unmixing, target localization, channel equalization and echo cancellation, show how the theory can be applied. MATLAB code for all the main algorithms are available on an accompanying website, enabling the reader to experiment with the code. <s> BIB005
In his original paper, Schmidt has shown that (in two dimensions) the RDs of three known microphones define the major axis of a general conic, on which the corresponding microphones lie. In addition, the source is positioned at its focus. In three dimensions, this axis becomes a plane containing the source. The fourth (non-coplanar) microphone is needed to infer the source position r, by calculating the intersection coordinates of three such planes BIB002 . Thus, the method attains the theoretical minimum for the required number of microphones for TDOA localization. To illustrate the approach, let one such triplet of microphones be described by (r_1, r_2, r_3) and (D_1, D_2, D_3), their position vectors and their distances to the source, respectively. For each pair (i, j) of these microphones, the product of the range sum Σ_{i,j} and the RD d_{i,j} can be expanded in terms of the source and microphone coordinates. By rearranging the resulting terms, and using the identity d_{k,i} = Σ_{i,j} − Σ_{j,k}, the range sums can be eliminated. Eventually, this gives the aforementioned plane equation, which is linear in the three unknown source coordinates; thus the exact solution is obtained when three triplets (i.e. four non-coplanar microphones) are available. Browsing the literature, we found that exactly the same closed-form approach has been recently reinvented in the highly cited article BIB003 , some 30 years after Schmidt's original paper. For M microphones, one ends up with (M choose 3) such equations (in 3D); the classical LS solution is to stack them in matrix form and calculate the position r by applying the Moore-Penrose pseudoinverse. Let A_pqr, B_pqr, C_pqr and F_pqr denote the coefficients and the right hand side of the expression (10), for the microphone triplet m ∈ {p, q, r}, respectively. Stacking all such triplets gives the linear system Ψ r = ψ, where A_pqr = d_{q,r} r_{p(1)} + d_{r,p} r_{q(1)} + d_{p,q} r_{r(1)}, B_pqr = d_{q,r} r_{p(2)} + d_{r,p} r_{q(2)} + d_{p,q} r_{r(2)}, C_pqr = d_{q,r} r_{p(3)} + d_{r,p} r_{q(3)} + d_{p,q} r_{r(3)}, and F_pqr as in BIB005 . However, such an LS solution is strongly influenced by the triplets having large A_·, B_·, C_· or F_· values. Instead, as proposed in , the matrix Ψ needs to be preprocessed prior to computing the pseudoinverse: its rows should be scaled by 1/√(A_·² + B_·² + C_·²), as should the corresponding entries of the vector ψ. Likewise, the presence of noise in the TDOA measurements d_{i,j} can seriously degrade the localization accuracy. In that case, the observation model (3) contains an additive noise term, which varies across different measurements, rendering them inconsistent. This means that the intrinsic redundancy within the TDOAs no longer holds, e.g. the identity d_{i,k} = d_{i,j} + d_{j,k} is violated. In the noiseless case, the vector d of concatenated TDOA measurements lies in the range space of a simple first-order difference matrix BIB001 , specified by (3) and the ordering of the distances D_m. Thus, the measurements can be preconditioned by replacing them with the closest feasible TDOAs, in the LS sense. This is done by projecting the measured d onto the range space of the finite difference matrix, or, equivalently, by the technique called "TDOA averaging" BIB001 .
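The plane-based LS step can be sketched in a few lines of numpy. Because the right hand side F_pqr is not reproduced above, the sketch below rederives it from the range-sum relation under the convention d_{i,j} = D_i - D_j; that expression, the degeneracy threshold and the toy data are assumptions for illustration rather than the exact formulas of the original derivation.

```python
import numpy as np
from itertools import combinations

def plane_ls_position(mics, d):
    """
    Plane-based (conic) LS localization from range differences (sketch).
    mics : (M, 3) microphone positions.
    d    : (M, M) matrix of range differences, d[i, j] = D_i - D_j.
    Returns the estimated source position (3,).
    """
    sq = np.sum(mics**2, axis=1)              # squared norms ||r_m||^2
    rows, rhs = [], []
    for p, q, r in combinations(range(len(mics)), 3):
        # Plane normal [A_pqr, B_pqr, C_pqr] and right hand side F_pqr
        a = d[q, r] * mics[p] + d[r, p] * mics[q] + d[p, q] * mics[r]
        f = 0.5 * (d[q, r] * sq[p] + d[r, p] * sq[q] + d[p, q] * sq[r]
                   + d[p, q] * d[q, r] * d[r, p])
        scale = np.linalg.norm(a)              # row scaling advocated above
        if scale > 1e-9:                       # skip degenerate triplets
            rows.append(a / scale)
            rhs.append(f / scale)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x

# Toy check with noiseless range differences from a known source
mics = np.array([[0., 0., 0.], [3., 0., 0.], [0., 3., 0.], [0., 0., 3.], [3., 3., 3.]])
src = np.array([1.0, 2.0, 0.5])
D = np.linalg.norm(src - mics, axis=1)
print(plane_ls_position(mics, D[:, None] - D[None, :]))   # close to src
```

With noisy TDOAs, the matrix d would first be replaced by its feasible projection ("TDOA averaging") as discussed above.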
A survey of positioning techniques and location based services in wireless networks <s> I. INTRODUCTION <s> A flame-retardant integral-skinned polyurethane foam is prepared by a method which comprises the reaction of a polyol and an organic polyisocyanate in the presence of a foaming agent comprising trichlorofluoromethane, the improvement which comprises the incorporation of a catalyst comprising a phosphorous-containing compound selected from the group consisting of an alkyl phosphite, aryl phosphites and aryl-, alkyl-, aminoaryl-, alkaryl- and halide phosphines. <s> BIB001 </s> A survey of positioning techniques and location based services in wireless networks <s> I. INTRODUCTION <s> The Global Position System (GPS) and the Global Navigation Satellite System (GLONASS) are based on a satellite system. Much work has been carried out on a non-satellite positioning system using the existing Global System of Mobile Communications (GSM) infrastructure. This leads to a GPS-GSM positioning system that manufacturers claim to reliably locate a mobile phone down to resolutions of less than 125 m. The requirement needed to achieve such a resolution with a GPS/GSM positioning system is to have three GSM base stations in a 30 km area. This requirement is difficult to obtain especially in rural areas. The work carried out in this paper shows how to integrate digital audio broadcast (DAB) transmitters with GSM base stations for positioning systems. This novel DAB-GSM hybrid positioning system can reach an accuracy of 40 meters. <s> BIB002 </s> A survey of positioning techniques and location based services in wireless networks <s> I. INTRODUCTION <s> Cellular location methods based on angle of arrival (AOA) or time difference (e.g. E-OTD) measurements assume line-of-sight propagation between base stations and the mobile station. This assumption is not valid in urban microcellular environments. We present a database correlation method (DCM) that can utilize any location-dependent signals available in cellular systems. This method works best in densely built urban areas. An application of DCM to GSM, using signal strength measurements, is described and trial results from urban and suburban environments are given. Comparison with AOA and E-OTD trials shows that DCM is a competitive alternative for GSM location in urban and suburban environments. <s> BIB003 </s> A survey of positioning techniques and location based services in wireless networks <s> I. INTRODUCTION <s> Effective evaluation of positioning methods in order to give fair results when comparing different positioning technologies requires performance measurements applicable to all the positioning technologies for mobile positioning. In this paper, we outline and compare five major performance measures namely, accuracy, reliability, availability, latency, and applicability, and how they apply to positioning technologies. <s> BIB004 </s> A survey of positioning techniques and location based services in wireless networks <s> I. INTRODUCTION <s> The World Trade Center (WTC) rescue response provided an unfortunate opportunity to study the human-robot interactions (HRI) during a real unstaged rescue for the first time. 
A post-hoc analysis was performed on the data collected during the response, which resulted in 17 findings on the impact of the environment and conditions on the HRI: the skills displayed and needed by robots and humans, the details of the Urban Search and Rescue (USAR) task, the social informatics in the USAR domain, and what information is communicated at what time. The results of this work impact the field of robotics by providing a case study for HRI in USAR drawn from an unstaged USAR effort. Eleven recommendations are made based on the findings that impact the robotics, computer science, engineering, psychology, and rescue fields. These recommendations call for group organization and user confidence studies, more research into perceptual and assistive interfaces, and formal models of the state of the robot, state of the world, and information as to what has been observed. <s> BIB005 </s> A survey of positioning techniques and location based services in wireless networks <s> I. INTRODUCTION <s> The mobile phone market lacks a satisfactory location technique that is accurate, but also economical and easy to deploy. Current technology provides high accuracy, but requires substantial technological and financial investment. In this paper, we present the results of experiments intended to asses the accuracy of inexpensive Cell-ID location technique and its suitability for the provisioning of location based services. We first evaluate the accuracy of Cell-ID in urban, suburban and highway scenarios (both in U.S. and Italy), we then introduce the concepts of discovery-accuracy and discovery-noise to estimate the impact of positioning accuracy on the quality of resource discovery services. Experiments show that the accuracy of Cell-ID is not satisfactory as a general solution. In contrast we show how Cell-ID can be effectively exploited to implement more effective and efficient voice location-based services. <s> BIB006 </s> A survey of positioning techniques and location based services in wireless networks <s> I. INTRODUCTION <s> Observability properties of errors in an integrated navigation system are studied with a control-theoretic approach in this paper. A navigation system with a low-grade inertial measurement unit and an accurate single-antenna Global Positioning System (GPS) measurement system is considered for observability analysis. Uncertainties in attitude, gyro bias, and GPS antenna lever arm were shown to determine unobservable errors in the position, velocity, and accelerometer bias. It was proved that all the errors can be made observable by maneuvering. Acceleration changes improve the estimates of attitude and gyro bias. Changes in angular velocity enhance the lever arm estimate. However, both the motions of translation and constant angular velocity have no influence on the estimation of the lever arm. A covariance simulation with an extended Kalman filter was performed to confirm the observability analysis. <s> BIB007 </s> A survey of positioning techniques and location based services in wireless networks <s> I. INTRODUCTION <s> There has been increased interest in developing location services for wireless communications systems over the last several years. Mobile network operators are continuously investigating new innovative services that allow them to increase the profit. In the days to come, location based service or LBS will be benefiting both the consumers and network operators. 
While the consumers can expect greater personal safety and more personalised features, the network operators will tackle discrete market segments based on the different service portfolios. This paper analyses radiolocation methods applicable to GSM/UMTS mobile station location, and compares them with other positioning techniques used today, giving an insight into the convergence of satellite and cellular positioning. <s> BIB008 </s> A survey of positioning techniques and location based services in wireless networks <s> I. INTRODUCTION <s> The rapid development of wireless communications and mobile database technology promote the extensive application of Location Based Services (LBSs), and provide a greatly convenience for people's lives. In recent years, Location Based Services has played an important role in deal with public emergencies. It is possible to access mobile users' location information anytime and anywhere. But in the meantime, user location privacy security poses a potentially grave new threat, and may suffer from some invade which could not presuppose. Location privacy issues raised by such applications have attracted more and more attention. It has become the research focus to find a balance point between the location-based highly sufficient services and users' location privacy protection. Comprehensive and efficient services are accessed under the premise of less exposed locations, that is to say, allowed the location of exposure in a controlled state. K-anonymity technique is widely used in data dissemination of data privacy protection technology, but the method is also somewhat flawed. This paper analyses on existing questions of location privacy protection system in Location Based Services at the present time, including K-anonymity technique, quality of service, query systems, and generalize and summarize the main research achievement of location privacy protection technology in recent years. And some solutions have been proposed to deal with location privacy problem in Location Based Services. The paper also analyzes how to provide efficient location-based services and better protection users' location privacy in handle public emergencies. In the end, some study trends of Location Based Services and location privacy protection are given. <s> BIB009 </s> A survey of positioning techniques and location based services in wireless networks <s> I. INTRODUCTION <s> Due to an increasing number of public and private access points in indoor and urban environments, Wi-Fi positioning becomes more and more attractive for pedestrian navigation. In the last ten years different approaches and solutions have been developed. But Wi-Fi hardware and network protocols have not been designed for positioning. Therefore, Wi-Fi devices have different hardware characteristics that lead to different positioning accuracies. In this article we analyze and discuss hardware characteristics of Wi-Fi devices with a focus on the so called Wi-Fi fingerprinting technique for positioning. The analysis is based on measurements collected using a static setup in an anechoic chamber to minimize signal reflections and noise. Characteristics like measurement offsets and practical polling intervals of different mobile devices have been examined. Based on this analysis a calibration approach to compensate the measurement offsets of Wi-Fi devices is proposed. Experimental results in a typically office building are presented to evaluate the improvement in localization accuracy using the calibration approach. 
<s> BIB010 </s> A survey of positioning techniques and location based services in wireless networks <s> I. INTRODUCTION <s> Location positioning systems using wireless area local network (WLAN) infrastructure are considered cost effective and practical solutions for indoor location tracking and estimation. However, accuracy deterioration due to environmental factors and the need for manual offline calibration limit the application of these systems. In this paper, a new method based on differential operation access points is proposed to eliminate the adverse effects of environmental factors on location estimation. The proposed method is developed based on the operation of conventional differential amplifiers where noise and interference are eliminated through a differential operation. A pair of properly positioned access points is used as a differential node to eliminate the undesired effects of environmental factors. As a result the strength of received signals, which is used to determine the location of a user, remains relatively stable and supports accurate positioning. To estimate wave propagation in indoor environments, log-distance path loss model has been employed at the system level. Experimental results indicate that the proposed method can effectively reduce the location estimation error and provide accuracy improvement over existing methods. <s> BIB011 </s> A survey of positioning techniques and location based services in wireless networks <s> I. INTRODUCTION <s> With the growing number of smart phones, the positioning issue for location-based service (LBS) in cellular networks has got much more popular and important in recent years. This paper proposes a signal-aware fingerprinting-based positioning technique (SAFPPT) in cellular networks, which contains two positioning methods: (i) Continuously Measured Positioning Method (CMPM) and (ii) Stop-and-Measured Positioning Method (SMPM). To verify SAFPPT, we have the case study of implementing the aforementioned two positioning methods in the Android platform for the GSM networks and analyze accuracy using some experiments. Our experimental results show that: (i) the positioning accuracy of CMPM and SMPM are much higher than Google's "My Location", (ii) some parameters may affect the positioning accuracy of CMPM, e.g., the moving speed of a user and the number of sampling of the fingerprinting database, and (iii) the staying time of a user may affect the positioning accuracy of SMPM. <s> BIB012 </s> A survey of positioning techniques and location based services in wireless networks <s> I. INTRODUCTION <s> Location-based systems for indoor positioning have been studied widely owing to their application in various fields. The fingerprinting approach is often used in Wi-Fi positioning systems. The K-nearest-neighbor fingerprinting algorithm uses a fixed number of neighbors, which reduces positioning accuracy. Here, we propose a novel fingerprinting algorithm, the enhanced weighted K-nearest neighbor (EWKNN) algorithm, which improves accuracy by changing the number of considered neighbors. Experimental results show that the proposed algorithm gives higher accuracy. <s> BIB013 </s> A survey of positioning techniques and location based services in wireless networks <s> I. INTRODUCTION <s> This paper introduces a novel approach to model Received Signal Strength (RSS) measurements in cellular networks for user positioning needs.
The RSS measurements are simulated by constructing a synthetic statistical cellular network, based on empirical data collected from a real life network. These statistics include conventional path loss model parameters, shadowing phenomenon including spatial correlation, and probabilities describing how many cell identities are measured at a time. The performance of user terminal positioning in the synthetic model is compared with real life measurement scenario by using a fingerprinting based K-nearest neighbor algorithm. It is shown that the obtained position error distributions match well with each other. The main advantage of the introduced network design is the possibility to study the performance of various position algorithms without requiring extensive measurement campaigns. In particular the model is useful in dimensioning different radio environment scenarios and support in preplanning of measurement campaigns. In addition, repeating the modeling process with different random values, it is possible to study uncommon occurrences in the system which would be difficult to reveal with limited real life measurement sets. <s> BIB014 </s> A survey of positioning techniques and location based services in wireless networks <s> I. INTRODUCTION <s> This paper proposes a hybrid scheme for user positioning in an urban scenario using both a Global Navigation Satellite System (GNSS) and a mobile cellular network. To maintain receiver complexity (and costs) at a minimum, the location scheme combines the time-difference-of-arrival (TDOA) technique measurements obtained from the cellular network with GNNS pseudorange measurements. The extended Kalman filter (EKF) algorithm is used as a data integration system over the time axis. Simulated results, which are obtained starting from real measurements, demonstrate that the use of cellular network data may provide increased location accuracy when the number of visible satellites is not adequate. In every case, the obtained accuracy is within the limits required by emergency location services, e.g., Enhanced 911 (E911). <s> BIB015 </s> A survey of positioning techniques and location based services in wireless networks <s> I. INTRODUCTION <s> Thank you for reading 80211 wireless networks the definitive guide. As you may know, people have look numerous times for their favorite books like this 80211 wireless networks the definitive guide, but end up in malicious downloads. Rather than reading a good book with a cup of tea in the afternoon, instead they cope with some infectious virus inside their computer. 80211 wireless networks the definitive guide is available in our book collection an online access to it is set as public so you can get it instantly. Our digital library hosts in multiple countries, allowing you to get the most less latency time to download any of our books like this one. Kindly say, the 80211 wireless networks the definitive guide is universally compatible with any devices to read. <s> BIB016
Numerous geolocation technologies are used to estimate the geographical position of a client (a person or an object). The large diversification of existing wireless Radio Access Technologies (RAT) and the increasing number of wireless-enabled devices are promoting the extensive application of Location Based Services (LBS) BIB009 . Position-dependent services include: emergency services such as rescue response BIB005 and security alerts, entertainment services like mobile gaming, medical applications BIB007 and a wide variety of other applications. The Global Positioning System (GPS) is the most common technology supporting outdoor locating services. Satellites orbiting the Earth continuously broadcast their own position and direction. The broadcast signals are used by the receivers to estimate the satellites' positions as well as the distance between satellite and receiver. Given these distance measurements, trilateration BIB001 is usually used to estimate the receiver's position. The accuracy of the estimated positions depends on the number of visible satellites. Hence, GPS does not work well for indoor positioning, and it is affected by weather conditions. Base stations of mobile terrestrial radio access networks, such as the Global System for Mobile communications (GSM), are the reference points for mobile client localization. Cell-Identification (Cell-ID) estimates the client position using the geographical coordinates of its serving base station in cellular networks BIB008 BIB004 BIB006 . Other positioning methods are based on a fingerprinting database, and they make use of Received Signal Strength (RSS) measurements BIB003 BIB014 BIB012 . However, positioning accuracy is limited by interference, multipath and non-line-of-sight (NLOS) propagation. A Wireless Local Area Network (WLAN) offers connectivity and Internet access for wireless-enabled clients within its coverage area. For example, IEEE 802.11 BIB016 (commonly known as Wi-Fi) is widely deployed, and it is also used for localizing Wi-Fi-enabled devices. The fingerprinting approach is often used in Wi-Fi positioning systems BIB010 BIB013 . It is based on Received Signal Strength Indication (RSSI) measurements in the localization area. Positioning systems using Wi-Fi are considered cost-effective and practical solutions for indoor location tracking and estimation BIB011 . The performance of positioning techniques is compared using several metrics BIB004 , such as applicability, latency, reliability, accuracy and cost. A technique is more accurate when the estimated position of a client is closer to its real geographical position. Positioning accuracy is becoming more important with the increasing use of position-dependent applications. Indeed, it is crucial for emergency location services. Hence, hybrid positioning systems, such as BIB015 BIB002 , have been introduced to improve the accuracy and reliability of existing localization technologies. In this paper, we describe the main positioning techniques used in satellite networks like GPS, in mobile networks such as GSM, and in wireless local area networks such as Wi-Fi. The coexistence of several wireless radio access technologies in the same area allows the introduction of hybrid positioning systems, and promotes the diversification of position-dependent services. We explain some of the hybrid localization techniques that combine information received from different radio access technologies in order to improve positioning accuracy.
Such improvements increase user satisfaction, and make LBS more robust and efficient. We also classify these services into several categories. The rest of the paper is organized as follows: in (II) we explain the principles behind positioning techniques used in satellite and mobile networks. Wi-Fi localization methods are reported in (III). We describe hybrid positioning systems in (IV). Section (V) contains a classification of LBS. Concluding remarks are given in section (VI).
A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> In this paper, time of arrival (TOA) and angle of arrival (AOA) errors in four typical cellular environments are analyzed and modeled. Based on the analysis, a hybrid TOA/AOA positioning (HTAP) algorithm, which utilizes TOA and AOA information delivered by serving base stations (BS), is proposed. The performance of the related positioning algorithms is simulated. It is shown that when the MS is close to the serving BS, HTAP will produce an accurate location estimate. When MS is far from the serving BS, the location estimate obtained by HTAP can be used as an initial location in our system to help a least square (LS) algorithm converge easily. When there are more than three TOA detected, weights and TOA numbers used in the LS algorithm should be dynamically adjusted according to the distance between MS and serving BS and the propagation environment for better positioning performance. <s> BIB001 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> Currently in development, numerous geolocation technologies can pinpoint a person's or object's position on the Earth. Knowledge of the spatial distribution of wireless callers will facilitate the planning, design, and operation of next generation broadband wireless networks. Mobile users will gain the ability to get local traffic information and detailed directions to gas stations, restaurants, hotels, and other services. Police and rescue teams will be able to quickly and precisely locate people who are lost or injured but cannot give their precise location. Companies will use geolocation based applications to track personnel, vehicles, and other assets. The driving force behind the development of this technology is a US Federal Communications Commission (FCC) mandate stating that by 1 October 2001 all wireless carriers must provide the geolocation of an emergency 911 caller to the appropriate public safety answering point. Location technologies requiring new modified, or upgraded mobile stations must determine the caller's longitude and latitude within 50 meters for 67 percent of emergency calls, and within 150 meters for 95 percent of the calls. Otherwise, they must do so within 100 meters and 300 meters, respectively, for the same percentage of calls. Currently deployed wireless technology can locate 911 calls within an area no smaller than 10 to 15 square kilometers. It is argued that assisted-GPS technology offers superior accuracy, availability, and coverage at a reasonable cost. <s> BIB002 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> Cellular location methods based on angle of arrival (AOA) or time difference (e.g. E-OTD) measurements assume line-of-sight propagation between base stations and the mobile station. This assumption is not valid in urban microcellular environments. We present a database correlation method (DCM) that can utilize any location-dependent signals available in cellular systems. This method works best in densely built urban areas. An application of DCM to GSM, using signal strength measurements, is described and trial results from urban and suburban environments are given. Comparison with AOA and E-OTD trials shows that DCM is a competitive alternative for GSM location in urban and suburban environments. 
<s> BIB003 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> This paper deals with the problem of estimating the position of a user equipment operating in a wireless communication network. We present a new positioning method based on the angles of arrival (AOA) measured in several radio links between that user equipment and different base stations. The proposed AOA-based method leads us to a non-iterative closed-form solution of the positioning problem, and an statistical analysis of that solution is also included. The comparison between this method and the classical AOA-based positioning technique is discussed in terms of computational load, convergence of the solution and also in terms of the bias and variance of the position estimate. <s> BIB004 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> This paper presents a comparison of error characteristics between time of arrival (TOA) and time difference of arrival (TDOA) processing of the linearized GPS pseudo-range equations. In particular, the relationship among dilutions of precision (DOPs), position estimates, and their error covariances is investigated. DOPs for TDOA are defined using the error covariance matrix resulting from TDOA processing. It is shown that the DOPs and user position estimate are the same for TDOA and TOA processing. The relationship of DOPs and position estimates for standard GPS positioning and double differenced processing are also given. <s> BIB005 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> The ray-tracing (RT) algorithm has been used for accurately predicting the site-specific radio propagation characteristics, in spite of its computational intensity. Statistical models, on the other hand, offers computational simplicity but low accuracy. In this paper, a new model is proposed for predicting the indoor radio propagation to achieve computational simplicity over the RT method and better accuracy than the statistical models. The new model is based on the statistical derivation of the ray-tracing operation, whose results are a number of paths between the transmitter and receiver, each path comprises a number of rays. The pattern and length of the rays in these paths are related to statistical parameters of the site-specific features of indoor environment, such as the floor plan geometry. A key equation is derived to relate the average path power to the site-specific parameters, which are: 1) mean free distance; 2) transmission coefficient; and 3) reflection coefficient. The equation of the average path power is then used to predict the received power in a typical indoor environment. To evaluate the accuracy of the new model in predicting the received power in a typical indoor environment, a comparison with RT results and with measurement data shows an error bound of less than 5 dB. <s> BIB006 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> Position information of individual nodes is useful in implementing functions such as routing and querying in ad-hoc networks. 
Deriving position information by using the capability of the nodes to measure time of arrival (TOA), time difference of arrival (TDOA), angle of arrival (AOA) and signal strength have been used to localize nodes relative to a frame of reference. The nodes in an ad-hoc network can have multiple capabilities and exploiting one or more of the capabilities can improve the quality of positioning. In this paper, we show how AOA capability of the nodes can be used to derive position information. We propose a method for all nodes to determine their orientation and position in an ad-hoc network where only a fraction of the nodes have positioning capabilities, under the assumption that each node has the AOA capability. <s> BIB007 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> Today's rapidly evolving positioning location technology can be possible to determine the geographical position of mobile phone. This is related to many new services of the next revolution mobile communication system. It is important to find out what location technology is suitable for operating on GSM network. However a success of service must concern with the lowest possible cost and minimal impact on the network infrastructure and subscriber equipment because GSM system were not originally designed for positioning. Thus we present to study of the E-OTD location technology for improving accuracy and service stability. <s> BIB008 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> The mobile phone market lacks a satisfactory location technique that is accurate, but also economical and easy to deploy. Current technology provides high accuracy, but requires substantial technological and financial investment. In this paper, we present the results of experiments intended to asses the accuracy of inexpensive Cell-ID location technique and its suitability for the provisioning of location based services. We first evaluate the accuracy of Cell-ID in urban, suburban and highway scenarios (both in U.S. and Italy), we then introduce the concepts of discovery-accuracy and discovery-noise to estimate the impact of positioning accuracy on the quality of resource discovery services. Experiments show that the accuracy of Cell-ID is not satisfactory as a general solution. In contrast we show how Cell-ID can be effectively exploited to implement more effective and efficient voice location-based services. <s> BIB009 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> In this paper we study the temporal statistics of cellular mobile channel. We propose a scattering model that encloses scatterers in an elliptical scattering disc. We, further, employ this model to derive the probability density function (pdf) of Time of Arrival (ToA) of the multipath signal for picocell, microcell, and macrocell environments. For macrocell environment, we present generic closed-form formula for the pdf of ToA from which previous models can be easily deduced. Proposed theoretical results can be used to simulate temporal dispersion of the multipath signal in a variety of propagation conditions. The presented results help in the design of efficient equalizers to combat intersymbol interference (ISI) for wideband systems. 
<s> BIB010 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> This paper presents a new approach to providing accurate pedestrian indoor positioning using a Time of Arrival (TOA) based technique, when only two access points in an IEEE 802.11 network are in range of a mobile terminal to be located. This allows to enhance the availability and reliability of the positioning system, because existing trilateration-and tracking-based systems require at least three reference points to provide 2D positions. This contribution demonstrates the feasibility of the technique proposed and presents encouraging performance figures obtained through simulations with real observable data. <s> BIB011 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> This paper presents an approach to calibrate GPS position by using the context awareness technique from the pervasive computing. Previous researches on GPS calibration mostly focus on the methods of integrating auxiliary hardware so that the userpsilas context information and the basic demand of the user are ignored. From the inspiration of the pervasive computing research, this paper proposes a novel approach, called PGPS (Perceptive GPS), to directly improve GPS positioning accuracy from the contextual information of received GPS data. PGPS is started with sampling received GPS data to learning carrierpsilas behavior and building a transition probability matrix based upon HMM (Hidden Markov Model) model and Newtonpsilas Laws. After constructing the required matrix, PGPS then can interactively rectify received GPS data in real time. That is, based on the transition matrix and received online GPS data, PGPS infers the behavior of GPS carrier to verify the rationality of received GPS data. If the received GPS data deviate from the inferred position, the received GPS data is then dropped. Finally, an experiment was conducted and its preliminary result shows that the proposed approach can effectively improve the accuracy of GPS position. <s> BIB012 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> This paper describes locating systems in indoors half-manufactured environments using wireless communications framework. Such framework is available through existing communications hardware as is the case of ZigBee standard. Using this framework as locating system can provide trilateration scheme using receiver signal strength indication (RSSI) measurements. An experiment shows RSSI measurement errors and some filters are developed to minimize them. These results present some insights to future RSSI development strategies in order to obtain efficient locating systems. <s> BIB013 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> This paper addresses the Common Radio Resources Management of a Heterogeneous Network composed of three different Radio Access Network (RANs): UMTS, Wi-Fi and Mobile-WiMAX. The network is managed by an algorithm based on a priority table, user sessions being steered preferentially to a given RAN according to the service type. Six services are defined, including Voice, Web browsing and Email. A time-based, system-level, simulation tool was developed in order to evaluate the network performance. 
Results show that blocking probability and average delay are optimised (minimised) when non-conversational user data sessions are steered preferentially to M-WiMAX. An overall network degradation by selective RAN deactivation is maximised when M-WiMAX is switched off, blocking probability and average delay raising 10 and 100 times, respectively. The average delay is reduced 20 to 30 % when the channel bandwidth of M-WiMAX is increased from 5 to 10 MHz. <s> BIB014 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> Localizing a user is a fundamental problem that arises in many potential applications. The use of wireless technologies for locating a user has been a trend in recent years. Most existing approaches use RSSI to localize the user. In general, one of the several existing wireless standards such as ZigBee, Bluetooth or Wi-Fi, is chosen as the target standard. An interesting question that has practical implications is whether there is any benefit in using more than one wireless technology to perform the localization. In this paper we present a study on the advantages and challenges of using multiple wireless technologies to perform localization in indoor environments. We use real ZigBee, Wi-Fi and Bluetooth compliant devices. In our study we analyse results obtained using the fingerprint method. The performance of each technology alone and the performance of the technologies combined are also investigated. We also analyse how the number of wireless devices used affects the quality of localization and show that, for all technologies, more beacons lead to less error. Finally, we show how interference among technologies may lead to lower localization accuracy. <s> BIB015 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> Radio propagation model simulates of electromagnetic wave propagation in space and calculates the strength of radio signals. It provides the basis for forecasting, analysis and optimization of wireless network communication. In this paper, various types of radio propagation model have been analyzed and discussed, and a solution based on radio propagation model for indoor wireless network planning was proposed. This paper also analyzed fast 3D modeling of buildings, and model optimization, computing acceleration technologies of indoor wireless propagation model. <s> BIB016 </s> A survey of positioning techniques and location based services in wireless networks <s> II. POSITIONING IN GPS AND MOBILE NETWORKS <s> The wide deployment of Wi-Fi networks empowers the implementation of numerous applications such as Wi-Fi positioning, Location Based Services (LBS), wireless intrusion detection and real-time tracking. Many techniques are used to estimate Wi-Fi client position. Some of them are based on the Time or Angle of Arrival (ToA or AoA), while others use signal power measurements and fingerprinting. All these techniques require the reception of multiple wireless signals to provide enough data for solving the localization problem. In this paper, we describe the major techniques used for positioning in Wi-Fi networks. Real experiments are done to compare the accuracy of methods that use signal power measurement and Received Signal Strength Indication (RSSI) fingerprinting to estimate client position. 
Moreover, we investigate a fingerprinting method constrained by distance information to improve positioning accuracy. Localization techniques are more accurate when the estimated client positions are closer to the real geographical positions. Accuracy improvements increase user satisfaction, and make the localization services more robust and efficient. <s> BIB017
GPS consists of a network of 24 satellites in six different 12-hour orbital paths, spaced so that at least five are in view from every point on the globe. Satellites serve as reference points when estimating the client position. They continuously broadcast signals containing information about their own position and direction. The distance between a satellite and a receiver is determined by precisely measuring the time it takes a signal to travel from the satellite to the receiver's antenna. Once the distances between the visible satellites and the GPS receiver are measured, the client position is estimated via the trilateration method, sometimes loosely referred to as triangulation. Three distance measurements are required to perform position estimation. In fact, the estimated position is the intersection of three spheres having the satellites as centers and the calculated distances as radii. GPS accuracy is largely reduced by several factors such as signal delays, satellite clock errors, multipath distortion, receiver noise and various environmental noise sources BIB012 . To overcome visibility problems between satellites and receivers, assisted-GPS has been proposed. It benefits from the coexistence of satellite networks along with terrestrial wireless access networks (i.e. mobile networks or Wireless Local Area Networks) in the same area. Therefore, superior accuracy, availability and coverage are offered for indoor use or in urban areas. Refer to BIB002 for more information about the Assisted Global Positioning System. In mobile networks, such as GSM, many techniques are used to estimate the client position. Unlike the satellites, which are continuously moving around the globe, the base stations (BS) of mobile networks have fixed geographical positions. In addition, each BS broadcasts its Cell-ID and Location Area Identifier (LAI) to the mobiles within its coverage area. Therefore, each mobile can approximate its own position using the geographical coordinates of its serving base station in the Cell-ID method BIB009 . Angle of Arrival (AoA) measurements BIB004 BIB007 of several radio links between the base stations and the mobile are also used to estimate the client position. Here, the user position is approximated from these angle measurements together with information about the base stations' geographical coordinates. Time of Arrival (ToA) BIB010 BIB001 requires synchronization between the different network elements (i.e., base stations and mobile stations). The arrival times of bursts sent by the mobile are converted into distances, and trilateration is then used to estimate the client position. Other methods use received signal strength measurements to localize mobile stations. For example, the received signal power is converted into distance via propagation models or empirical models. In addition, the fingerprinting method BIB003 compares RSS measurements with the values stored in a database for specific points of the localization map in order to approximate the client position. The Time Difference of Arrival (TDoA) BIB005 technique is inspired by ToA. Indeed, in ToA, the positioning entity measures the signal propagation time from the emitter to the receiver, and this time measurement is converted into a distance that is used to estimate the client position. The TDoA technique, however, requires the simultaneous transmission (by each base station) of two signals having different frequencies. These signals reach the receiver at different times, so the time difference is measured and converted into distance. Once three distance measurements are available, trilateration is again used to estimate the client position.
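As a concrete illustration of the distance-based step shared by GPS, ToA and TDoA, the sketch below solves the trilateration problem with a few Gauss-Newton iterations on the range residuals; the function name, the initial guess and the toy anchors are illustrative assumptions rather than part of any particular standard.

```python
import numpy as np

def trilaterate(anchors, ranges, iters=20):
    """
    Estimate a receiver position from known anchor positions (satellites or
    base stations) and measured anchor-to-receiver distances, using a few
    Gauss-Newton iterations on the range residuals.
    anchors : (N, 3) array, ranges : (N,) array, N >= 3 (4 for a unique 3D fix).
    """
    x = anchors.mean(axis=0)                   # crude initial guess
    for _ in range(iters):
        diff = x - anchors                     # (N, 3)
        pred = np.linalg.norm(diff, axis=1)    # predicted ranges at the guess
        J = diff / pred[:, None]               # Jacobian of ||x - a_i|| w.r.t. x
        step, *_ = np.linalg.lstsq(J, ranges - pred, rcond=None)
        x = x + step
    return x

# Toy usage: four anchors and noiseless ranges to a known point
anchors = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 10.]])
truth = np.array([2., 3., 4.])
print(trilaterate(anchors, np.linalg.norm(truth - anchors, axis=1)))  # ~ [2, 3, 4]
```

Extra anchors simply add rows to the least-squares update, which is one reason accuracy improves with the number of visible reference points.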
Enhanced Observed Time Difference (E-OTD) BIB008 requires synchronization between network entities (base stations and mobiles). Each base station broadcasts messages within its coverage area. A mobile station compares the relative times of arrival of these messages to estimate its distance from each visible base station. Localization techniques in mobile networks can be classified into two categories: network-based and client-based. In network-based positioning techniques, the network collects the necessary information to estimate the client position. Time, angle or distance measurements performed by the base stations are usually forwarded to a positioning server deployed in the network. The information required for estimating user position is stored in the positioning database. Thus, the positioning server has information about the positions of all the users in the system. Client-based localization techniques, in contrast, are characterized by the absence of a centralized positioning entity. In fact, each client performs time, angle, power or distance measurements locally. Thus, it approximates its own position using local measurements and information broadcast by the base stations. Fig. 1 shows a qualitative comparison of the positioning techniques described in this section. The performance criteria used for comparing localization techniques are accuracy and coverage. A positioning technique is better when it has a lower accuracy error (the distance between the estimated position and the real geographical position) and greater coverage. Traditionally, a Wi-Fi network provides Internet access to wireless-enabled clients located within its coverage area. In addition, it allows interconnectivity between wireless devices existing in the same network. Recently, Wi-Fi networks have found additional applications. For example, we can benefit from the coexistence of several radio access technologies in the same geographical area. Heterogeneous networks offer the possibility to steer user sessions preferentially to a given Radio Access Technology, such as Wi-Fi or the Universal Mobile Telecommunications System (UMTS), according to service type BIB014 and network load. Moreover, the wide deployment of Wi-Fi networks allows the introduction of numerous location-based services. Wi-Fi positioning techniques are similar to those used in mobile networks. However, the most common technique used to localize a client in Wi-Fi networks is based on RSSI measurements. In the remainder of this section, we classify Wi-Fi positioning techniques into several categories, and we describe the basics of RSSI-based localization methods. ToA BIB011 and TDoA perform time measurements to calculate the distance between the Wi-Fi client and the access points. Hence, three distance measurements are required to estimate user position via trilateration BIB013 . Such methods belong to the category of time-based positioning techniques, and they require time synchronization between network entities. In the Cell-ID category, users scan the received radio beacons to estimate the closest access point. They use either predefined radio propagation models or experimental fingerprinting data to estimate user position. The AoA method uses directional antennas to measure the angle of arrival of signals transmitted by the clients. Hence, in angle-based positioning techniques, the client position is estimated via the geometry of triangles.
However, the most common positioning techniques in Wi-Fi networks are based on RSSI measurements BIB017 . Some of them rely on propagation models BIB016 to translate signal power into distance; other methods use empirical models and store RSSI measurements in a positioning database. Localization methods in Wi-Fi are therefore classified into four main categories: Cell-ID, time, RSSI and angle. Fig. 3 illustrates the classification of Wi-Fi positioning techniques. Received Signal Strength Indication measurements are quantized levels that reflect the actual power of a received wireless signal. While propagating, the transmitted radio-frequency signal is degraded by attenuation, reflection, diffraction and scattering. Several propagation models formulate signal strength degradation as a function of the traveled distance and the transmission frequency. For instance, the Hata-Okumura model approximates Path Loss (PL) according to the distance between emitter and receiver, the antenna characteristics and the transmission frequency. Hence, RSSI measurements are compared with theoretical values of the received power (calculated using propagation models) in order to find the distance traveled by the signal. Three distance measurements are required to estimate the Wi-Fi-enabled client's position via trilateration: the estimated position is the intersection of three circles having the access points as centers and the calculated distances as radii. Other positioning techniques based on RSSI measurements use empirical models to estimate user position. Instead of approximating the distance between Wi-Fi clients and access points, the localization area is divided into smaller parts using a grid. Each point of the grid receives several Wi-Fi signals from the neighboring access points. RSSI measurements are performed under different conditions (e.g., time, interference, network load) in order to increase positioning accuracy. If n is the number of Wi-Fi access points, an n-tuple (RSSI_1, RSSI_2, ..., RSSI_n) containing mean RSSI values is created for each point (x, y) in the map. Such a positioning technique is called RSSI fingerprinting, and it occurs in two phases: an offline fingerprinting phase and an online positioning phase. In the first phase, RSSI measurements are made for each point in the positioning map (under different network conditions); at the end of this phase, the positioning database containing mean RSSI values for every point in the grid is created BIB006 . In the second phase, online RSSI measurements are performed for signals received from the neighboring access points. The positioning entity compares these live RSSI measurements with the values stored in the database, and the client position is estimated as the entry (x, y) in the database that best matches the actual measurements BIB015 . The accuracy of RSSI-based positioning techniques in Wi-Fi networks depends on the number of access points involved in the localization problem.
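The two RSSI-based approaches above can be illustrated with a short sketch: converting an RSSI reading into a distance with a generic log-distance path-loss model, and matching a live RSSI n-tuple against an offline fingerprint database. All parameter values and database entries below are invented for illustration and would need on-site calibration in practice:

```python
import numpy as np

def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exponent=3.0):
    """Invert a log-distance path-loss model: RSSI = RSSI_1m - 10*n*log10(d).

    Both parameters are environment dependent and would normally be
    calibrated on site; the defaults here are arbitrary placeholders.
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exponent))

def fingerprint_match(live_rssi, database):
    """Return the grid point whose stored mean RSSI n-tuple is closest
    (in Euclidean distance) to the live measurement n-tuple."""
    best_point, best_dist = None, float("inf")
    for point, stored in database.items():
        dist = np.linalg.norm(np.asarray(live_rssi) - np.asarray(stored))
        if dist < best_dist:
            best_point, best_dist = point, dist
    return best_point

# Hypothetical offline database: (x, y) grid point -> mean RSSI from 3 APs.
db = {(0, 0): [-45, -70, -72], (5, 0): [-60, -55, -70], (5, 5): [-68, -58, -52]}
print(rssi_to_distance(-70.0))                  # rough distance in metres
print(fingerprint_match([-62, -57, -68], db))   # -> (5, 0)
```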
A survey of positioning techniques and location based services in wireless networks <s> IV. HYBRID POSITIONING SYSTEM <s> In this paper, time of arrival (TOA) and angle of arrival (AOA) errors in four typical cellular environments are analyzed and modeled. Based on the analysis, a hybrid TOA/AOA positioning (HTAP) algorithm, which utilizes TOA and AOA information delivered by serving base stations (BS), is proposed. The performance of the related positioning algorithms is simulated. It is shown that when the MS is close to the serving BS, HTAP will produce an accurate location estimate. When MS is far from the serving BS, the location estimate obtained by HTAP can be used as an initial location in our system to help a least square (LS) algorithm converge easily. When there are more than three TOA detected, weights and TOA numbers used in the LS algorithm should be dynamically adjusted according to the distance between MS and serving BS and the propagation environment for better positioning performance. <s> BIB001 </s> A survey of positioning techniques and location based services in wireless networks <s> IV. HYBRID POSITIONING SYSTEM <s> Currently in development, numerous geolocation technologies can pinpoint a person's or object's position on the Earth. Knowledge of the spatial distribution of wireless callers will facilitate the planning, design, and operation of next generation broadband wireless networks. Mobile users will gain the ability to get local traffic information and detailed directions to gas stations, restaurants, hotels, and other services. Police and rescue teams will be able to quickly and precisely locate people who are lost or injured but cannot give their precise location. Companies will use geolocation based applications to track personnel, vehicles, and other assets. The driving force behind the development of this technology is a US Federal Communications Commission (FCC) mandate stating that by 1 October 2001 all wireless carriers must provide the geolocation of an emergency 911 caller to the appropriate public safety answering point. Location technologies requiring new modified, or upgraded mobile stations must determine the caller's longitude and latitude within 50 meters for 67 percent of emergency calls, and within 150 meters for 95 percent of the calls. Otherwise, they must do so within 100 meters and 300 meters, respectively, for the same percentage of calls. Currently deployed wireless technology can locate 911 calls within an area no smaller than 10 to 15 square kilometers. It is argued that assisted-GPS technology offers superior accuracy, availability, and coverage at a reasonable cost. <s> BIB002 </s> A survey of positioning techniques and location based services in wireless networks <s> IV. HYBRID POSITIONING SYSTEM <s> Localizing a user is a fundamental problem that arises in many potential applications. The use of wireless technologies for locating a user has been a trend in recent years. Most existing approaches use RSSI to localize the user. In general, one of the several existing wireless standards such as ZigBee, Bluetooth or Wi-Fi, is chosen as the target standard. An interesting question that has practical implications is whether there is any benefit in using more than one wireless technology to perform the localization. In this paper we present a study on the advantages and challenges of using multiple wireless technologies to perform localization in indoor environments. We use real ZigBee, Wi-Fi and Bluetooth compliant devices. 
In our study we analyse results obtained using the fingerprint method. The performance of each technology alone and the performance of the technologies combined are also investigated. We also analyse how the number of wireless devices used affects the quality of localization and show that, for all technologies, more beacons lead to less error. Finally, we show how interference among technologies may lead to lower localization accuracy. <s> BIB003 </s> A survey of positioning techniques and location based services in wireless networks <s> IV. HYBRID POSITIONING SYSTEM <s> This paper proposes a hybrid scheme for user positioning in an urban scenario using both a Global Navigation Satellite System (GNSS) and a mobile cellular network. To maintain receiver complexity (and costs) at a minimum, the location scheme combines the time-difference-of-arrival (TDOA) technique measurements obtained from the cellular network with GNNS pseudorange measurements. The extended Kalman filter (EKF) algorithm is used as a data integration system over the time axis. Simulated results, which are obtained starting from real measurements, demonstrate that the use of cellular network data may provide increased location accuracy when the number of visible satellites is not adequate. In every case, the obtained accuracy is within the limits required by emergency location services, e.g., Enhanced 911 (E911). <s> BIB004
The wide usage of position-dependent services increases the need for more accurate position estimation techniques. Due to the limitations of positioning methods that use data from one single Radio Access Technology (RAT), hybrid positioning techniques have been proposed to increase accuracy. They make use of the collaboration between different wireless access networks existing in the same geographical area, such as GPS and GSM, to exchange additional position-related data. The main factors that reduce the accuracy of GPS are multipath distortion and the visibility problem between satellites and receivers. Therefore, a hybrid positioning technique called Assisted Global Positioning System (A-GPS) BIB002 has been introduced to overcome these limitations. GSM Base Transceiver Stations (BTS) are involved in the positioning problem along with the satellites: additional data about BTS geographical positions and proximity to the mobile is used together with GPS localization information in order to estimate client position. Moreover, the authors of propose a combined Wi-Fi/GPS positioning algorithm, in which localization data from the Wi-Fi network is used when the number of visible satellites is less than four. In cellular networks, positioning accuracy is constrained by non-line-of-sight propagation and by interference mitigation techniques, so inconsistent information is provided as input to the localization problem. The authors of BIB004 describe a hybrid positioning scheme that combines TDoA measurements obtained from the cellular network with GPS range information to improve the accuracy of mobile client position estimation; the obtained accuracy should be within the limits required by emergency location services. The ToA technique is very frequently used for positioning in mobile networks, but it requires synchronization between mobiles and base stations and is limited by multipath and non-line-of-sight propagation problems. Hence, this positioning technique is assisted by additional information from the AoA technique, with angle measurements performed by the serving base station using antenna arrays. The hybrid ToA/AoA technique improves positioning accuracy, especially in bad propagation environments BIB001 . In wireless local area networks, such as Wi-Fi, the accuracy of the RSSI fingerprinting technique depends on the number of access points involved in the localization process and on the validity of the mean RSSI measurements stored in the database. The coexistence of Personal Area Networks (PANs) and Wi-Fi in the same geographical area can be exploited to improve positioning accuracy: RSSI measurements are made for all the available wireless technologies and the results are stored in the positioning database. Wi-Fi fingerprinting thus uses additional RSSI information from wireless PANs existing in the same area, such as Bluetooth and ZigBee networks. More information about this hybrid technique is found in BIB003 .
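As a simple illustration of combining estimates from two radio access technologies, the sketch below fuses two independent position fixes (say, a GNSS fix and a coarser cellular fix) by inverse-variance weighting. This is only a toy stand-in for the Kalman-filter-style integration described in BIB004; the coordinates and variances are invented:

```python
import numpy as np

def fuse_estimates(pos_a, var_a, pos_b, var_b):
    """Inverse-variance weighted fusion of two independent 2-D position fixes.

    Returns the fused position and an approximate fused variance; a lower
    variance means the corresponding fix is trusted more.
    """
    pos_a, pos_b = np.asarray(pos_a, float), np.asarray(pos_b, float)
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * pos_a + w_b * pos_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

# Hypothetical fixes: a GNSS estimate (variance 4 m^2) and a coarser
# cellular estimate (variance 25 m^2) of the same client.
print(fuse_estimates([30.2, 39.5], 4.0, [28.0, 42.0], 25.0))
```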
A survey of positioning techniques and location based services in wireless networks <s> V. LOCATION BASED SERVICES <s> The penetration of mobile wireless technologies has resulted in larger usage of wireless data services in the recent past. Several wireless applications are deployed by service providers, to attract and retain their clients, using wireless Internet technologies. New and innovative applications like ringtone/wallpaper downloading, MMS-messaging, videoclip delivery and reservation enquiries are some of the popular services offered by the service providers today. The knowledge of mobile user's location by the service provider can enhance the class of services and applications that can be offered to the mobile user. These class of applications and services, termed "location based services", are becoming popular across all mobile networks like GSM and CDMA. This paper presents a brief survey of location based services, the technologies deployed to track the mobile user's location, the accuracy and reliability associated with such measurements, and the network infrastructure elements deployed by the wireless network operators to enable these kinds of services. A brief description of all the protocols and interfaces covering the interaction between device, gateway and application layers, are presented. The aspects related to billing of value added services using the location information and emerging architectures for incorporating this "location based charging" model are introduced. The paper also presents some popular location based services deployed on wireless across the world. <s> BIB001 </s> A survey of positioning techniques and location based services in wireless networks <s> V. LOCATION BASED SERVICES <s> The rapid development of wireless communications and mobile database technology promote the extensive application of Location Based Services (LBSs), and provide a greatly convenience for people's lives. In recent years, Location Based Services has played an important role in deal with public emergencies. It is possible to access mobile users' location information anytime and anywhere. But in the meantime, user location privacy security poses a potentially grave new threat, and may suffer from some invade which could not presuppose. Location privacy issues raised by such applications have attracted more and more attention. It has become the research focus to find a balance point between the location-based highly sufficient services and users' location privacy protection. Comprehensive and efficient services are accessed under the premise of less exposed locations, that is to say, allowed the location of exposure in a controlled state. K-anonymity technique is widely used in data dissemination of data privacy protection technology, but the method is also somewhat flawed. This paper analyses on existing questions of location privacy protection system in Location Based Services at the present time, including K-anonymity technique, quality of service, query systems, and generalize and summarize the main research achievement of location privacy protection technology in recent years. And some solutions have been proposed to deal with location privacy problem in Location Based Services. The paper also analyzes how to provide efficient location-based services and better protection users' location privacy in handle public emergencies. In the end, some study trends of Location Based Services and location privacy protection are given. <s> BIB002
In wireless networks, knowledge of the user's geographical position allows the introduction of numerous position-dependent applications. These applications are known as location-based services, and they are useful for service providers as well as for mobile clients. The wide deployment of radio access technologies and the increasing development of wireless-enabled devices are promoting the extensive use of LBS. As mentioned in the previous sections, numerous positioning techniques are used to estimate client position, but the main parameter for LBS efficiency is the accuracy of the positioning technique: users are more satisfied when the estimated position is closer to the real geographical position and when the probability of erroneous estimations is reduced. Location-based services are related to the position of the user making the request. They are classified BIB001 as emergency services (e.g., security alerts, public safety and queries for the nearest hospital), informational services (e.g., news, sports, stocks and queries for the nearest hotel or cinema), tracking services (such as asset/fleet/logistics monitoring or person tracking), entertainment services (for example, locating a friend and gaming) and advertising services (such as announcements or invitation messages broadcast by shops to nearby mobile clients). Moreover, future applications of LBS include support for studies on climate change, seismology and oceanography. Position-dependent services are useful for mobile mapping, deformation monitoring and many civil engineering applications. They have revolutionized navigation (on land, in the air and at sea) and intelligent transportation systems by increasing their safety and efficiency. However, threats to user location privacy are a potentially grave concern BIB002 , since it is possible to access user location information anytime and anywhere. Therefore, many privacy protection methods have been introduced to deal with the tension between location privacy protection and quality of service in LBS. Some of them protect user identity by hiding the true ID when requesting the service. Other methods do not submit the exact location to the server; instead, they send a region containing the user's exact position.
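A minimal sketch of the last idea (spatial cloaking) is given below: instead of the exact coordinates, the client reports a randomly offset rectangle that contains its true position. This is an illustrative toy, not the k-anonymity scheme discussed in BIB002, and the cell size is an arbitrary assumption:

```python
import random

def cloak_position(x, y, cell_size=500.0):
    """Return a cloaking rectangle (side cell_size, in metres) that contains
    the true position without revealing it exactly.

    The offset is randomised so the true point is not always at the same
    relative location inside the reported rectangle.
    """
    off_x = random.uniform(0.0, cell_size)
    off_y = random.uniform(0.0, cell_size)
    x_min, y_min = x - off_x, y - off_y
    return (x_min, y_min, x_min + cell_size, y_min + cell_size)

# The LBS server receives only the rectangle, never the exact coordinates.
print(cloak_position(1250.0, 980.0))
```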
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Introduction <s> Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. Often, however, we have plentiful labeled training data from a source domain but wish to learn a classifier which performs well on a target domain with a different distribution and little or no labeled training data. In this work we investigate two questions. First, under what conditions can a classifier trained from source data be expected to perform well on target data? Second, given a small amount of labeled target data, how should we combine it during training with the large amount of labeled source data to achieve the lowest target error at test time?We address the first question by bounding a classifier’s target error in terms of its source error and the divergence between the two domains. We give a classifier-induced divergence measure that can be estimated from finite, unlabeled samples from the domains. Under the assumption that there exists some hypothesis that performs well in both domains, we show that this quantity together with the empirical source error characterize the target error of a source-trained classifier.We answer the second question by bounding the target error of a model which minimizes a convex combination of the empirical source and target errors. Previous theoretical work has considered minimizing just the source error, just the target error, or weighting instances from the two domains equally. We show how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class. The resulting bound generalizes the previously studied cases and is always at least as tight as a bound which considers minimizing only the target error or an equal weighting of source and target errors. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Introduction <s> Significant advances have been made towards building accurate automatic segmentation systems for a variety of biomedical applications using machine learning. However, the performance of these systems often degrades when they are applied on new data that differ from the training data, for example, due to variations in imaging protocols. Manually annotating new data for each test domain is not a feasible solution. In this work we investigate unsupervised domain adaptation using adversarial neural networks to train a segmentation method which is more invariant to differences in the input data, and which does not require any annotations on the test domain. Specifically, we learn domain-invariant features by learning to counter an adversarial network, which attempts to classify the domain of the input data by observing the activations of the segmentation network. Furthermore, we propose a multi-connected domain discriminator for improved adversarial training. Our system is evaluated using two MR databases of subjects with traumatic brain injuries, acquired using different scanners and imaging protocols. Using our unsupervised approach, we obtain segmentation accuracies which are close to the upper bound of supervised domain adaptation. 
<s> BIB002 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Introduction <s> Abstract Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research. <s> BIB003 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Introduction <s> The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a lesser number of training cases than did CNNs. “Deep learning”, or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades. 
<s> BIB004 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Introduction <s> Background ::: Deep learning convolutional neural networks (CNN) may facilitate melanoma detection, but data comparing a CNN's diagnostic performance to larger groups of dermatologists are lacking. ::: ::: ::: Methods ::: Google's Inception v4 CNN architecture was trained and validated using dermoscopic images and corresponding diagnoses. In a comparative cross-sectional reader study a 100-image test-set was used (level-I: dermoscopy only; level-II: dermoscopy plus clinical information and images). Main outcome measures were sensitivity, specificity and area under the curve (AUC) of receiver operating characteristics (ROC) for diagnostic classification (dichotomous) of lesions by the CNN versus an international group of 58 dermatologists during level-I or -II of the reader study. Secondary end points included the dermatologists' diagnostic performance in their management decisions and differences in the diagnostic performance of dermatologists during level-I and -II of the reader study. Additionally, the CNN's performance was compared with the top-five algorithms of the 2016 International Symposium on Biomedical Imaging (ISBI) challenge. ::: ::: ::: Results ::: In level-I dermatologists achieved a mean (±standard deviation) sensitivity and specificity for lesion classification of 86.6% (±9.3%) and 71.3% (±11.2%), respectively. More clinical information (level-II) improved the sensitivity to 88.9% (±9.6%, P = 0.19) and specificity to 75.7% (±11.7%, P < 0.05). The CNN ROC curve revealed a higher specificity of 82.5% when compared with dermatologists in level-I (71.3%, P < 0.01) and level-II (75.7%, P < 0.01) at their sensitivities of 86.6% and 88.9%, respectively. The CNN ROC AUC was greater than the mean ROC area of dermatologists (0.86 versus 0.79, P < 0.01). The CNN scored results close to the top three algorithms of the ISBI 2016 challenge. ::: ::: ::: Conclusions ::: For the first time we compared a CNN's diagnostic performance with a large international group of 58 dermatologists, including 30 experts. Most dermatologists were outperformed by the CNN. Irrespective of any physicians' experience, they may benefit from assistance by a CNN's image classification. ::: ::: ::: Clinical trial number ::: This study was registered at the German Clinical Trial Register (DRKS-Study-ID: DRKS00013570; https://www.drks.de/drks_web/). <s> BIB005 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Introduction <s> Abstract What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009 when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. 
As this has become a very broad and fast expanding field we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of deep learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related medical imaging. <s> BIB006 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Introduction <s> Convolutional neural network (CNN), a class of artificial neural networks that has become dominant in various computer vision tasks, is attracting interest across a variety of domains, including radiology. CNN is designed to automatically and adaptively learn spatial hierarchies of features through backpropagation by using multiple building blocks, such as convolution layers, pooling layers, and fully connected layers. This review article offers a perspective on the basic concepts of CNN and its application to various radiological tasks, and discusses its challenges and future directions in the field of radiology. Two challenges in applying CNN to radiological tasks, small dataset and overfitting, will also be covered in this article, as well as techniques to minimize them. Being familiar with the concepts and advantages, as well as limitations, of CNN is essential to leverage its potential in diagnostic radiology, with the goal of augmenting the performance of radiologists and improving patient care. KEY POINTS: • Convolutional neural network is a class of deep learning methods which has become dominant in various computer vision tasks and is attracting interest across a variety of domains, including radiology. • Convolutional neural network is composed of multiple building blocks, such as convolution layers, pooling layers, and fully connected layers, and is designed to automatically and adaptively learn spatial hierarchies of features through a backpropagation algorithm. • Familiarity with the concepts and advantages, as well as limitations, of convolutional neural network is essential to leverage its potential to improve radiologist performance and, eventually, patient care. <s> BIB007 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Introduction <s> Deep learning-based image segmentation is by now firmly established as a robust tool in image segmentation. It has been widely used to separate homogeneous areas as the first and critical component of diagnosis and treatment pipeline. In this article, we present a critical appraisal of popular methods that have employed deep-learning techniques for medical image segmentation. Moreover, we summarize the most common challenges incurred and suggest possible solutions. <s> BIB008
Medical imaging is a major pillar of clinical decision making and is an integral part of many patient journeys. Information extracted from medical images is clinically useful in many areas such as computer-aided detection, diagnosis, treatment planning, intervention and therapy. While medical imaging remains a vital component of a myriad of clinical tasks, an increasing shortage of qualified radiologists to interpret complex medical images suggests a clear need for reliable automated methods to alleviate the growing burden on health-care practitioners . In parallel, medical imaging sciences are benefiting from the development of novel computational techniques for the analysis of structured data like images. Development of algorithms for image acquisition, analysis and interpretation is driving innovation, particularly in the areas of registration, reconstruction, tracking, segmentation and modelling. Medical images are inherently difficult to interpret, requiring prior expertise to understand. Bio-medical images can be noisy, contain many modality-specific artefacts, and are acquired under a wide variety of acquisition conditions with different protocols. Thus, once trained, models do not transfer seamlessly from one clinical task or site to another because of an often substantial domain gap BIB002 BIB001 . Supervised learning methods require extensive relabelling to regain their initial performance in different workflows. The experience and prior knowledge required to work with such data mean that there is often large inter- and intra-observer variability in annotating medical data. This not only raises questions about what constitutes a gold-standard ground-truth annotation, but also results in disagreement about what that ground truth truly is. These issues result in a large cost associated with annotating and re-labelling medical image datasets, as numerous expert annotators (oracles) are required to perform each annotation and to reach a consensus. In recent years, Deep Learning (DL) has emerged as the state-of-the-art technique for performing many medical image analysis tasks BIB003 BIB004 . Developments in the field of computer vision have shown great promise in transferring to medical image analysis, and several techniques have been shown to perform as accurately as human observers BIB005 . However, uptake of DL methods within clinical practice has been limited thus far, largely due to the unique challenges of working with complex medical data, regulatory compliance issues and trust in trained models. We identify three key challenges when developing DL-enabled applications for medical image analysis in a clinical setting: 1. Lack of Training Data: Supervised DL techniques traditionally rely on a large and even distribution of accurately annotated data points, and while more medical image datasets are becoming available, the time, cost and effort required to annotate such datasets remain significant. 2. The Final Percent: DL techniques have achieved state-of-the-art performance for medical image analysis tasks, but in safety-critical domains even the smallest deviation can cause catastrophic results downstream. Achieving clinically credible output may require interactive interpretation of predictions (from an oracle) to be useful in practice. 3.
Transparency and Interpretability: At present, most DL applications are considered to be a 'black box' in which the user has limited meaningful ways of interpreting, understanding or correcting how a model has made its prediction. Such opacity is detrimental for medical applications, as information from a wide variety of sources must be evaluated in order to make clinical decisions. Further indication of how a model has reached a predicted conclusion is needed in order to foster trust in DL-enabled systems and to allow users to weigh automated predictions appropriately. There is a concerted effort in the medical image analysis research community to apply DL methods to various medical image analysis tasks, and these are showing great promise. We refer the reader to a number of reviews of DL in medical imaging BIB008 BIB006 BIB007 . These works primarily focus on the development of predictive models for a specific task and demonstrate state-of-the-art performance for that task. This review aims to give an overview of where humans will remain involved in the development, deployment and practical use of DL systems for medical image analysis. We focus on medical image segmentation techniques to explore the role of human end users in DL-enabled systems. Automating segmentation tasks suffers from all of the drawbacks incurred by the medical image data described above. There are many emerging techniques that seek to alleviate the added complexity of working with medical image data to perform automated segmentation of images. Segmentation seeks to divide an image into semantically meaningful regions (sets of pixels) in order to perform a number of downstream tasks, e.g., biometric measurements. Manually assigning a label to each pixel of an image is a laborious task, and as such automated segmentation methods are important in practice. Advances in DL techniques such as Active Learning (AL) and Human-in-the-Loop computing applied to segmentation problems have shown progress in overcoming the key challenges outlined above, and these are the studies on which this review focuses. We categorise each study based on the nature of the human interaction proposed and broadly divide them according to which of the three key challenges they address. Section 2 introduces Active Learning, a branch of Machine Learning (ML) and Human-in-the-Loop Computing that seeks to find the most informative samples from an unlabelled distribution to be annotated next. By training on the most informative subset of samples, related work can achieve state-of-the-art performance while reducing the costly annotation burden associated with medical image data. Section 3 evaluates techniques used to refine model predictions in response to user feedback, guiding models towards more accurate per-image predictions. We evaluate techniques that seek to improve the interpretability of automated predictions and how models provide feedback on their own outputs to guide users towards better decision making. Section 4 evaluates the key practical considerations of developing and deploying Human-in-the-Loop DL-enabled systems in practice and outlines the work being done in these areas that addresses the three key challenges identified above. These areas are human-focused and assess how human end users might interact with these systems. Section 5 introduces related areas of ML and DL research that are having an impact on AL and Human-in-the-Loop Computing and are beginning to influence the three key challenges outlined.
In Section 6 we offer our opinions on the future directions of Human-in-the-Loop DL research and how many of the techniques evaluated might be combined to work towards common goals.
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Uncertainty <s> Abstract We propose an active learning approach to image segmentation that exploits geometric priors to speed up and streamline the annotation process. It can be applied for both background–foreground and multi-class segmentation tasks in 2D images and 3D image volumes. Our approach combines geometric smoothness priors in the image space with more traditional uncertainty measures to estimate which pixels or voxels are the most informative, and thus should to be annotated next. For multi-class settings, we additionally introduce two novel criteria for uncertainty. In the 3D case, we use the resulting uncertainty measure to select voxels lying on a planar patch, which makes batch annotation much more convenient for the end user compared to the setting where voxels are randomly distributed in a volume. The planar patch is found using a branch-and-bound algorithm that looks for a 2D patch in a 3D volume where the most informative instances are located. We evaluate our approach on Electron Microscopy and Magnetic Resonance image volumes, as well as on regular images of horses and faces. We demonstrate a substantial performance increase over other approaches thanks to the use of geometric priors. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Uncertainty <s> Recent successes in learning-based image classification, however, heavily rely on the large number of annotated training samples, which may require considerable human effort. In this paper, we propose a novel active learning (AL) framework, which is capable of building a competitive classifier with optimal feature representation via a limited amount of labeled training instances in an incremental learning manner. Our approach advances the existing AL methods in two aspects. First, we incorporate deep convolutional neural networks into AL. Through the properly designed framework, the feature representation and the classifier can be simultaneously updated with progressively annotated informative samples. Second, we present a cost-effective sample selection strategy to improve the classification performance with less manual annotations. Unlike traditional methods focusing on only the uncertain samples of low prediction confidence, we especially discover the large amount of high-confidence samples from the unlabeled set for feature learning. Specifically, these high-confidence samples are automatically selected and iteratively assigned pseudolabels. We thus call our framework cost-effective AL (CEAL) standing for the two advantages. Extensive experiments demonstrate that the proposed CEAL framework can achieve promising results on two challenging image classification data sets, i.e., face recognition on the cross-age celebrity face recognition data set database and object categorization on Caltech-256. <s> BIB002
The main family of informativeness measures is based on calculating uncertainty. It is argued that the more uncertain a prediction is, the more information we can gain by including the ground truth for that sample in the training set. There are several ways of calculating uncertainty from different ML/DL models. When considering DL for segmentation, the simplest measure is the sum of the lowest class probability over each pixel in a given image segmentation. It is argued that more certain predictions will have high pixel-wise class probabilities, so the lower the sum of the minimum class probability over each pixel in an image, the more certain a prediction is considered to be. This is a fairly intuitive way of thinking about uncertainty and offers a means to rank the uncertainty of samples within a distribution. We refer to the method above as least confident sampling, where the samples with the highest uncertainty are selected for labelling . A drawback of least confident sampling is that it only considers information about the most probable label and discards the information about the remaining label distribution. Two alternative methods have been proposed that alleviate this concern. The first, called margin sampling , can be used in a multi-class setting and considers the first and second most probable labels under the model, calculating the difference between them. The intuition here is that the larger the margin between the two most probable labels, the more confident the model is in assigning that label. The second, more popular approach is to use entropy as an uncertainty measure. For binary classification, entropy sampling is equivalent to least confident and margin sampling, but for multi-class problems entropy generalises well as an uncertainty measure. Using one of the above measures, un-annotated samples are ranked and the most 'uncertain' cases are chosen for the next round of annotation (a minimal sketch of these acquisition functions is given below). BIB002 propose the Cost-Effective Active Learning (CEAL) method for deep image classification, which involves complementary sampling in which the framework selects from an unlabelled data-set a) a set of uncertain samples to be labelled by an oracle, and b) a set of highly certain samples that are 'pseudo-labelled' by the framework and included in the labelled data-set. propose an active learning method that uses uncertainty sampling to support quality control of nucleus segmentation in pathology images. Their work compares the performance improvements achieved through active learning for three different families of algorithms: Support Vector Machines (SVM), Random Forest (RF) and Convolutional Neural Networks (CNN). They show that CNNs achieve the greatest accuracy, requiring significantly fewer iterations to achieve equivalent accuracy to the SVMs and RFs. Another common method of estimating informativeness is to measure the agreement between multiple models performing the same task; it is argued that more disagreement between predictions on the same data point implies a higher level of uncertainty. These methods are referred to as query by consensus and are generally applied when ensembling is used to improve performance, i.e., training multiple models to perform the same task under slightly different parameters/settings . Ensembling methods have been shown to measure informativeness well, but at the cost of computational resources: multiple models need to be trained and maintained, and each of these needs to be updated in the presence of newly selected training samples.
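The following minimal sketch (in Python/NumPy, with randomly generated probability maps standing in for real network outputs) computes the three uncertainty measures discussed above from pixel-wise softmax probabilities and ranks unlabelled images accordingly:

```python
import numpy as np

def uncertainty_scores(prob_map):
    """Image-level uncertainty scores from a pixel-wise softmax output.

    prob_map: array of shape (H, W, C) with class probabilities per pixel.
    Returns (least_confident, margin_uncertainty, entropy), each averaged
    over pixels, with higher values meaning more uncertain.
    """
    max_prob = prob_map.max(axis=-1)
    least_confident = (1.0 - max_prob).mean()

    sorted_probs = np.sort(prob_map, axis=-1)
    margin = (sorted_probs[..., -1] - sorted_probs[..., -2]).mean()
    margin_uncertainty = 1.0 - margin          # small margin -> uncertain

    entropy = (-np.sum(prob_map * np.log(prob_map + 1e-12), axis=-1)).mean()
    return least_confident, margin_uncertainty, entropy

def rank_unlabelled(prob_maps, measure=2):
    """Rank unlabelled images, most uncertain first (default measure: entropy)."""
    scores = [uncertainty_scores(p)[measure] for p in prob_maps]
    return np.argsort(scores)[::-1]

# Hypothetical softmax outputs for 3 unlabelled images (4x4 pixels, 2 classes).
rng = np.random.default_rng(0)
probs = rng.dirichlet([1.0, 1.0], size=(3, 4, 4))
print(rank_unlabelled(probs))
```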
Nevertheless, Beluch et al. (2018) demonstrate the power of ensembles for active learning and compare them to alternatives to ensembling. They specifically compare the performance of acquisition functions and uncertainty estimation methods for active learning with CNNs on image classification tasks and show that ensemble-based uncertainties outperform other methods of uncertainty estimation such as MC Dropout. They find that the difference in active learning performance can be explained by a combination of the decreased model capacity and lower diversity of MC dropout ensembles. Good performance is demonstrated on a diabetic retinopathy diagnosis task. introduce the use of Bayesian CNNs for Active Learning and show that Bayesian CNNs outperform deterministic CNNs in this context. Bayesian CNNs model the uncertainty of predictions directly, and it is argued that this property allows them to outperform deterministic CNNs. In this work, several different query strategies (or acquisition functions, as they are referred to in the text) are used for Active Learning to demonstrate improved performance from fewer training samples than random sampling. They demonstrate their approach on skin cancer diagnosis from skin lesion images and show significant performance improvements over uniform sampling using the BALD method for sample selection, where BALD seeks to maximise the mutual information between predictions and the model posterior (a minimal sketch of the underlying test-time dropout sampling is given below). BIB001 propose an active learning approach that exploits geometric smoothness priors in the image space to aid the segmentation process. They use traditional uncertainty measures to estimate which pixels should be annotated next, and introduce novel criteria for uncertainty in multi-class settings. They exploit geometric uncertainty by estimating the entropy of the probability of supervoxels belonging to a class given the predictions of their neighbours, and combine these to encourage the selection of uncertain regions in areas of non-smooth transition between classes. They demonstrate state-of-the-art performance on mitochondria segmentation from EM images and on an MRI tumour segmentation task for both binary and multi-class segmentations. They suggest that exploiting geometric properties of images is useful to answer the question of where to annotate next, and that reducing 3D annotations to 2D annotations provides a possible answer to how to annotate the data; addressing both jointly can bring additional benefits to the annotation method. However, they acknowledge that it would be impossible to design bespoke selection strategies this way for every new task at hand.
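As a concrete illustration of the test-time dropout ('MC Dropout') sampling referred to above, the sketch below runs several stochastic forward passes of a segmentation network with its dropout layers re-enabled and summarises the disagreement. It is written with PyTorch, assumes a model that outputs per-pixel class probabilities, and is not the exact procedure of any of the cited works:

```python
import torch

def mc_dropout_uncertainty(model, image, n_samples=20):
    """Test-time ('MC') dropout sketch: keep dropout layers active during
    inference, run several stochastic forward passes and summarise the
    disagreement as a per-pixel predictive variance.
    """
    model.eval()
    # Re-enable only the dropout layers so that weights stay fixed
    # but stochastic sampling still occurs.
    for module in model.modules():
        if isinstance(module, (torch.nn.Dropout, torch.nn.Dropout2d)):
            module.train()

    with torch.no_grad():
        preds = torch.stack([model(image) for _ in range(n_samples)], dim=0)

    mean_pred = preds.mean(dim=0)             # averaged probability map
    variance = preds.var(dim=0).mean(dim=1)   # per-pixel predictive variance
    return mean_pred, variance

# Tiny toy segmentation network and random input, purely for demonstration.
toy = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, 3, padding=1),
    torch.nn.Dropout(p=0.5),
    torch.nn.Conv2d(8, 2, 3, padding=1),
    torch.nn.Softmax(dim=1),
)
mean_pred, variance = mc_dropout_uncertainty(toy, torch.rand(1, 1, 32, 32))
print(mean_pred.shape, variance.shape)
```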
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Representativeness <s> Image segmentation is a fundamental problem in biomedical image analysis. Recent advances in deep learning have achieved promising results on many biomedical image segmentation benchmarks. However, due to large variations in biomedical images (different modalities, image settings, objects, noise, etc.), to utilize deep learning on a new application, it usually needs a new set of training data. This can incur a great deal of annotation effort and cost, because only biomedical experts can annotate effectively, and often there are too many instances in images (e.g., cells) to annotate. In this paper, we aim to address the following question: With limited effort (e.g., time) for annotation, what instances should be annotated in order to attain the best performance? We present a deep active learning framework that combines fully convolutional network (FCN) and active learning to significantly reduce annotation effort by making judicious suggestions on the most effective annotation areas. We utilize uncertainty and similarity information provided by FCN and formulate a generalized version of the maximum set cover problem to determine the most representative and uncertain areas for annotation. Extensive experiments using the 2015 MICCAI Gland Challenge dataset and a lymph node ultrasound image segmentation dataset show that, using annotation suggestions by our method, state-of-the-art segmentation performance can be achieved by using only 50% of training data. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Representativeness <s> Segmentation is essential for medical image analysis tasks such as intervention planning, therapy guidance, diagnosis, treatment decisions. Deep learning is becoming increasingly prominent for segmentation, where the lack of annotations, however, often becomes the main limitation. Due to privacy concerns and ethical considerations, most medical datasets are created, curated, and allow access only locally. Furthermore, current deep learning methods are often suboptimal in translating anatomical knowledge between different medical imaging modalities. Active learning can be used to select an informed set of image samples to request for manual annotation, in order to best utilize the limited annotation time of clinical experts for optimal outcomes, which we focus on in this work. Our contributions herein are two fold: (1) we enforce domain-representativeness of selected samples using a proposed penalization scheme to maximize information at the network abstraction layer, and (2) we propose a Borda-count based sample querying scheme for selecting samples for segmentation. Comparative experiments with baseline approaches show that the samples queried with our proposed method, where both above contributions are combined, result in significantly improved segmentation performance for this active learning task. <s> BIB002
Many AL frameworks extend selection strategies to include some measure of representativeness in addition to an uncertainty measure. The intuition behind including a representativeness measure is that methods concerned only with uncertainty may focus on small regions of the distribution; training on samples from the same area of the distribution introduces redundancy into the selection strategy and may skew the model towards a particular area of the distribution. The addition of a representativeness measure encourages selection strategies to sample from different areas of the distribution, thus improving AL performance. A sample with high representativeness covers the information of many images in the same area of the distribution, so there is less need to include many samples already covered by a representative image (a small illustrative selection sketch combining uncertainty and representativeness is given after this passage). To this end, BIB001 present Suggestive Annotation, a deep active learning framework for medical image segmentation, which uses an alternative formulation of uncertainty sampling combined with a form of representativeness density weighting. Their method consists of training multiple models that each exclude a portion of the training data, which are used to calculate an ensemble-based uncertainty measure. They formulate choosing the most representative examples as a generalised version of the maximum set-cover problem (NP-hard) and offer a greedy approach to selecting the most representative images using feature vectors from their models. They demonstrate state-of-the-art performance using 50% of the available data on the MICCAI Gland segmentation challenge and a lymph node segmentation task. propose MedAL, an active learning framework for medical image segmentation. They propose a sampling method that combines uncertainty and distance between feature descriptors to extract the most informative samples from an unlabelled data-set. Another contribution of this work is an approach that generates an initial training set by leveraging existing computer vision image descriptors to find the images that are most dissimilar to each other and thus cover a larger area of the image distribution. They show good results on three different medical image analysis tasks, achieving the baseline accuracy with less training data than random or purely uncertainty-based methods. BIB002 propose a Borda-count-based combination of an uncertainty and a representativeness measure to select the next batch of samples. Uncertainty is measured as the voxel-wise variance of N predictions obtained using MC dropout in their model. They introduce new representativeness measures such as 'Content Distance', defined as the mean squared error between the layer activation responses of a pre-trained classification network, and extend this contribution by encoding representativeness via maximum entropy to optimise network weights using a novel entropy loss function. propose a novel method for ensuring diversity among queried samples by calculating the Fisher Information (FI), for the first time in CNNs. Here, efficient computation is enabled by the gradient computations of backpropagation, allowing FI to be calculated over the large parameter space of CNNs.
They demonstrate the performance of their approach on two different flavours of task: a) semi-automatic segmentation of a particular subject (from a different group or with a pathology not present in the original training data), where iteratively labelling small numbers of voxels queried by AL achieves accurate segmentation for that subject; and b) using AL to build a model generalisable to all images in a given data-set. They show that in both these scenarios FI-based AL improves performance after labelling a small percentage of voxels, outperforming random sampling and achieving higher accuracy than entropy-based querying.
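The sketch below illustrates the general two-step pattern shared by several of the approaches above: shortlist the most uncertain unlabelled samples, then greedily pick a batch that best covers the shortlist in feature space. It is a simplified stand-in (in the spirit of, but not identical to, the max set-cover formulation of BIB001); the feature descriptors and uncertainty scores are assumed to be given and are random placeholders here:

```python
import numpy as np

def select_batch(features, uncertainty, k_candidates=20, batch_size=5):
    """Shortlist the most uncertain samples, then greedily pick the subset
    that best 'covers' the shortlist in feature space (maximum coverage of
    cosine similarity), combining uncertainty and representativeness.

    features: (N, D) descriptors of unlabelled images; uncertainty: (N,) scores.
    """
    # Step 1: shortlist by uncertainty.
    candidates = np.argsort(uncertainty)[::-1][:k_candidates]

    # Cosine similarity between shortlisted samples.
    f = features[candidates]
    f = f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-12)
    sim = f @ f.T

    # Step 2: greedy maximum-coverage selection.
    selected, covered = [], np.zeros(len(candidates))
    for _ in range(batch_size):
        gains = [(np.maximum(covered, sim[i]).sum(), i)
                 for i in range(len(candidates)) if i not in selected]
        _, best = max(gains)
        selected.append(best)
        covered = np.maximum(covered, sim[best])
    return candidates[selected]

# Hypothetical descriptors and uncertainty scores for 100 unlabelled images.
feats = np.random.rand(100, 64)
unc = np.random.rand(100)
print(select_batch(feats, unc))
```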
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Learning Active Learning <s> In this paper, we suggest a novel data-driven approach to active learning (AL). The key idea is to train a regressor that predicts the expected error reduction for a candidate sample in a particular learning state. By formulating the query selection procedure as a regression problem we are not restricted to working with existing AL heuristics; instead, we learn strategies based on experience from previous AL outcomes. We show that a strategy can be learnt either from simple synthetic 2D datasets or from a subset of domain-specific data. Our method yields strategies that work well on real data from a wide range of domains. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Learning Active Learning <s> We introduce a model that learns active learning algorithms via metalearning. For a distribution of related tasks, our model jointly learns: a data representation, an item selection heuristic, and a method for constructing prediction functions from labeled training sets. Our model uses the item selection heuristic to gather labeled training sets from which to construct prediction functions. Using the Omniglot and MovieLens datasets, we test our model in synthetic and practical settings. <s> BIB002
The methods discussed so far are all hand-designed heuristics of informativeness, but some works have emerged that attempt to learn which samples are the most informative from the experience of previous sample selection outcomes. This offers a potential way to select samples more efficiently, but at the cost of the interpretability of the heuristics employed. Many factors influence the performance and optimality of hand-crafted heuristics for data selection. BIB001 propose 'Learning Active Learning', where a regression model learns data selection strategies based on experience from previous AL outcomes. They argue that there is no way to foresee the influence of all factors such as class imbalance, label noise, outliers and distribution shape; instead, their regression model 'adapts' its selection to the problem without explicitly stated rules. BIB002 take this idea a step further and propose a model that leverages labelled instances from different but related tasks to learn a selection strategy, while simultaneously adapting its representation of the data and its prediction function.
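A minimal sketch of the data-driven idea in BIB001 is shown below: a regressor is fitted to logged (state features, observed error reduction) pairs from earlier AL runs and then used to rank new candidates. The features and training data here are random placeholders, and the choice of gradient-boosted trees is an assumption rather than the model used in the cited work:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical logged experience from earlier AL runs: each row describes a
# candidate in a given learning state (e.g. its uncertainty, its distance to
# the labelled set, the current training-set size), and the target is the
# error reduction actually observed after labelling that candidate.
state_features = np.random.rand(500, 3)
observed_error_reduction = np.random.rand(500)

selector = GradientBoostingRegressor().fit(state_features,
                                           observed_error_reduction)

def query(candidate_features):
    """Pick the candidate with the highest predicted error reduction."""
    return int(np.argmax(selector.predict(candidate_features)))

print(query(np.random.rand(10, 3)))
```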
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Interpretability <s> Deep neural networks have achieved near-human accuracy levels in various types of classification and prediction tasks including images, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process — incorporating these networks into mission critical processes such as medical diagnosis, planning and control — requires a level of trust association with the machine output. Typically, statistical metrics are used to quantify the uncertainty of an output. However, the notion of trust also depends on the visibility that a human has into the working of the machine. In other words, the neural network should provide human-understandable justifications for its output leading to insights about the inner workings. We call such models as interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of the low-level network parameters, or in terms of input features used by the model. In this paper, we outline some of the dimensions that are useful for model interpretability, and categorize prior work along those dimensions. In the process, we perform a gap analysis of what needs to be done to improve model interpretability. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Interpretability <s> Digital pathology is not only one of the most promising fields of diagnostic medicine, but at the same time a hot topic for fundamental research. Digital pathology is not just the transfer of histopathological slides into digital representations. The combination of different data sources (images, patient records, and *omics data) together with current advances in artificial intelligence/machine learning enable to make novel information accessible and quantifiable to a human expert, which is not yet available and not exploited in current medical settings. The grand goal is to reach a level of usable intelligence to understand the data in the context of an application task, thereby making machine decisions transparent, interpretable and explainable. The foundation of such an "augmented pathologist" needs an integrated approach: While machine learning algorithms require many thousands of training examples, a human expert is often confronted with only a few data points. Interestingly, humans can learn from such few examples and are able to instantly interpret complex patterns. Consequently, the grand goal is to combine the possibilities of artificial intelligence with human intelligence and to find a well-suited balance between them to enable what neither of them could do on their own. This can raise the quality of education, diagnosis, prognosis and prediction of cancer and other diseases. In this paper we describe some (incomplete) research issues which we believe should be addressed in an integrated and concerted effort for paving the way towards the augmented pathologist. 
<s> BIB002 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Interpretability <s> Despite the state-of-the-art performance for medical image segmentation, deep convolutional neural networks (CNNs) have rarely provided uncertainty estimations regarding their segmentation outputs, e.g., model (epistemic) and image-based (aleatoric) uncertainties. In this work, we analyze these different types of uncertainties for CNN-based 2D and 3D medical image segmentation tasks. We additionally propose a test-time augmentation-based aleatoric uncertainty to analyze the effect of different transformations of the input image on the segmentation output. Test-time augmentation has been previously used to improve segmentation accuracy, yet not been formulated in a consistent mathematical framework. Hence, we also propose a theoretical formulation of test-time augmentation, where a distribution of the prediction is estimated by Monte Carlo simulation with prior distributions of parameters in an image acquisition model that involves image transformations and noise. We compare and combine our proposed aleatoric uncertainty with model uncertainty. Experiments with segmentation of fetal brains and brain tumors from 2D and 3D Magnetic Resonance Images (MRI) showed that 1) the test-time augmentation-based aleatoric uncertainty provides a better uncertainty estimation than calculating the test-time dropout-based model uncertainty alone and helps to reduce overconfident incorrect predictions, and 2) our test-time augmentation outperforms a single-prediction baseline and dropout-based multiple predictions. <s> BIB003 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Interpretability <s> Manual estimation of fetal Head Circumference (HC) from Ultrasound (US) is a key biometric for monitoring the healthy development of fetuses. Unfortunately, such measurements are subject to large inter-observer variability, resulting in low early-detection rates of fetal abnormalities. To address this issue, we propose a novel probabilistic Deep Learning approach for real-time automated estimation of fetal HC. This system feeds back statistics on measurement robustness to inform users how confident a deep neural network is in evaluating suitable views acquired during free-hand ultrasound examination. In real-time scenarios, this approach may be exploited to guide operators to scan planes that are as close as possible to the underlying distribution of training images, for the purpose of improving inter-operator consistency. We train on free-hand ultrasound data from over 2000 subjects (2848 training/540 test) and show that our method is able to predict HC measurements within 1.81$\pm$1.65mm deviation from the ground truth, with 50% of the test images fully contained within the predicted confidence margins, and an average of 1.82$\pm$1.78mm deviation from the margin for the remaining cases that are not fully contained. <s> BIB004
While DL methods have become the standard state-of-the-art approach for many medical image analysis tasks, they largely remain black-box methods in which the end user has limited meaningful ways of interpreting model predictions. This is a significant hurdle to deploying DL-enabled applications in safety-critical domains such as medical image analysis. We want models that are highly accurate and robust, but also explainable and interpretable. Recent EU law has introduced a 'right to explanation', whereby any subject has the right to have automated decisions that have been made about them explained, further highlighting the need for transparent algorithms that we can reason about [Goodman and Flaxman (2016)]. It is important for users to understand how a given decision has been made by the model: even the most accurate and robust models are not infallible, and false or uncertain predictions must be identified so that trust in the model can be fostered and predictions are appropriately weighted in the clinical decision-making process. It is vital that the end user, regulators and auditors all have the ability to contextualise automated decisions produced by DL models. Here we outline some methods for reasoning about DL models and their predictions in an interpretable way. Typically, DL methods can provide statistical metrics on the uncertainty of a model output, and many of the uncertainty measures discussed in Section 2 are also used to aid interpretability. While uncertainty measures are important, they are not sufficient to foster complete trust in a DL model; the model should provide human-understandable justifications for its output that allow insights to be drawn elucidating its inner workings. BIB001 discuss many of the core concerns surrounding model interpretability and highlight various works that have demonstrated more sophisticated methods of making a DL model interpretable across the DL field. Here we evaluate some of the works that have been applied to medical image segmentation and refer the reader to BIB002 for further reading on interpretability in the rest of the medical imaging domain. Oktay et al. (2018) introduce 'Attention Gating' to guide networks towards giving more 'attention' to certain image areas in a visually interpretable way, potentially aiding the subsequent refinement of annotations. explore different uncertainty estimates for a U-Net based cardiac MRI segmentation in order to detect inaccurate segmentations, since knowing when a segmentation is less accurate can help reduce downstream errors, and demonstrate that by setting a threshold on segmentation quality poor segmentations can be removed for manual correction. In BIB004 we propose a visual method for interpreting automated head circumference measurements from ultrasound images, using MC Dropout at test time to acquire N head segmentations and calculate upper and lower bounds on the head circumference measurement in real time. These bounds were displayed over the image to guide the sonographer towards views in which the model predicts with the most confidence, and are presented as a measure of model compliance with the unseen image rather than of uncertainty.
Finally, variance heuristics are proposed to quantify the confidence of a prediction in order to either accept or reject head circumference measurements, and it is shown that these can improve overall performance measures once 'rejected' images are removed. BIB003 propose using test-time augmentation to acquire a measure of aleatoric (image-based) uncertainty, compare their method with epistemic (model) uncertainty measures, and show that it provides a better uncertainty estimation than test-time dropout-based model uncertainty alone while reducing overconfident incorrect predictions. propose a novel interpretation method for histological Whole Slide Image processing by combining a deep neural network with a Multiple Instance Learning branch to enhance the model's expressive power without guiding its attention. A logit heat-map of model activations is presented in order to interpret its decision-making process, and two expert pathologists provided feedback that the interpretability of the method has potential for integration into several clinical applications. Jungo and Reyes (2019) evaluate several voxel-wise uncertainty estimation methods applied to medical image segmentation with respect to their reliability and limitations, and show that current uncertainty estimation methods perform similarly. Their results show that while uncertainty estimates may be well calibrated at the dataset level (capturing epistemic uncertainty), they tend to be mis-calibrated at the subject level (aleatoric uncertainty). This compromises the reliability of these uncertainty estimates and highlights the need to develop subject-wise uncertainty estimates. They show auxiliary networks to be a valid alternative to common uncertainty methods, as they can be applied to any previously trained segmentation model. Developing transparent systems will enable faster uptake in clinical practice, and including humans within deep learning clinical pipelines will ease the transition between current best practices and the breadth of enhancements that deep learning has to offer. We suggest that ongoing work to improve the interpretability of DL models will also have a positive impact on AL: the majority of methods for improving interpretability are centred on providing uncertainty measures for a model's prediction, and these same uncertainty measures can be used for AL selection strategies in place of those currently employed. As interpretability and uncertainty measures improve, we expect to see a corresponding improvement in AL frameworks as they incorporate the most promising uncertainty measures.
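To make these test-time uncertainty measures concrete, the sketch below shows MC Dropout and flip-based test-time augmentation for a toy PyTorch segmentation network. The tiny network, the number of stochastic passes and the choice of augmentation are our own illustrative assumptions rather than the exact setups of BIB003 or BIB004.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Stand-in segmentation network containing a dropout layer."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p=0.5),
            nn.Conv2d(8, 1, 3, padding=1),
        )
    def forward(self, x):
        return torch.sigmoid(self.net(x))

def mc_dropout_uncertainty(model, image, n_samples=20):
    """Epistemic proxy: keep dropout active at test time and average N passes."""
    model.train()  # leaves Dropout2d active during inference
    with torch.no_grad():
        preds = torch.stack([model(image) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

def tta_uncertainty(model, image, n_samples=20):
    """Aleatoric proxy via test-time augmentation: random horizontal flips."""
    model.eval()
    preds = []
    with torch.no_grad():
        for _ in range(n_samples):
            flip = torch.rand(1).item() > 0.5
            aug = torch.flip(image, dims=[-1]) if flip else image
            out = model(aug)
            preds.append(torch.flip(out, dims=[-1]) if flip else out)
    preds = torch.stack(preds)
    return preds.mean(dim=0), preds.std(dim=0)

model = TinySegNet()
x = torch.randn(1, 1, 64, 64)                     # toy single-channel image
mean_seg, pixel_std = mc_dropout_uncertainty(model, x)
tta_mean, tta_std = tta_uncertainty(model, x)
```

The per-pixel standard deviation maps can then be thresholded or summarised (e.g. averaged over the predicted region) to accept or reject a prediction, in the spirit of the variance heuristics described above.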
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net . <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark. <s> BIB002 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> For complex segmentation tasks, fully automatic systems are inherently limited in their achievable accuracy for extracting relevant objects. Especially in cases where only few data sets need to be processed for a highly accurate result, semi-automatic segmentation techniques exhibit a clear benefit for the user. One area of application is medical image processing during an intervention for a single patient. We propose a learning-based cooperative segmentation approach which includes the computing entity as well as the user into the task. Our system builds upon a state-of-the-art fully convolutional artificial neural network (FCN) as well as an active user model for training. 
During the segmentation process, a user of the trained system can iteratively add additional hints in form of pictorial scribbles as seed points into the FCN system to achieve an interactive and precise segmentation result. The segmentation quality of interactive FCNs is evaluated. Iterative FCN approaches can yield superior results compared to networks without the user input channel component, due to a consistent improvement in segmentation quality after each interaction. <s> BIB003 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> Accurate medical image segmentation is essential for diagnosis, surgical planning and many other applications. Convolutional Neural Networks (CNNs) have become the state-of-the-art automatic segmentation methods. However, fully automatic results may still need to be refined to become accurate and robust enough for clinical use. We propose a deep learning-based interactive segmentation method to improve the results obtained by an automatic CNN and to reduce user interactions during refinement for higher accuracy. We use one CNN to obtain an initial automatic segmentation, on which user interactions are added to indicate mis-segmentations. Another CNN takes as input the user interactions with the initial segmentation and gives a refined result. We propose to combine user interactions with CNNs through geodesic distance transforms, and propose a resolution-preserving network that gives a better dense prediction. In addition, we integrate user interactions as hard constraints into a back-propagatable Conditional Random Field. We validated the proposed framework in the context of 2D placenta segmentation from fetal MRI and 3D brain tumor segmentation from FLAIR images. Experimental results show our method achieves a large improvement from automatic CNNs, and obtains comparable and even higher accuracy with fewer user interventions and less time compared with traditional interactive methods. <s> BIB004 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> Measuring airways in chest computed tomography (CT) images is important for characterizing diseases such as cystic fibrosis, yet very time-consuming to perform manually. Machine learning algorithms offer an alternative, but need large sets of annotated data to perform well. We investigate whether crowdsourcing can be used to gather airway annotations which can serve directly for measuring the airways, or as training data for the algorithms. We generate image slices at known locations of airways and request untrained crowd workers to outline the airway lumen and airway wall. Our results show that the workers are able to interpret the images, but that the instructions are too complex, leading to many unusable annotations. After excluding unusable annotations, quantitative results show medium to high correlations with expert measurements of the airways. Based on this positive experience, we describe a number of further research directions and provide insight into the challenges of crowdsourcing in medical images from the perspective of first-time users. <s> BIB005 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> Convolutional neural networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they have not demonstrated sufficiently accurate and robust results for clinical use. 
In addition, they are limited by the lack of image-specific adaptation and the lack of generalizability to previously unseen object classes. To address these problems, we propose a novel deep learning-based framework for interactive segmentation by incorporating CNNs into a bounding box and scribble-based segmentation pipeline. We propose image-specific fine-tuning to make a CNN model adaptive to a specific test image, which can be either unsupervised (without additional user interactions) or supervised (with additional scribbles). We also propose a weighted loss function considering network and interaction-based uncertainty for the fine-tuning. We applied this framework to two applications: 2D segmentation of multiple organs from fetal MR slices, where only two types of these organs were annotated for training; and 3D segmentation of brain tumor core (excluding edema) and whole brain tumor (including edema) from different MR sequences, where only tumor cores in one MR sequence were annotated for training. Experimental results show that 1) our model is more robust to segment previously unseen objects than state-of-the-art CNNs; 2) image-specific fine-tuning with the proposed weighted loss function significantly improves segmentation accuracy; and 3) our method leads to accurate results with fewer user interactions and less user time than traditional interactive segmentation methods. <s> BIB006 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> Segmentation is one of the most important parts of medical image analysis. Manual segmentation is very cumbersome, time-consuming, and prone to inter-observer variability. Fully automatic segmentation approaches require a large amount of labeled training data and may fail in difficult or abnormal cases. In this work, we propose a new method for 2D segmentation of individual slices and 3D interpolation of the segmented slices. The Smart Brush functionality quickly segments the region of interest in a few 2D slices. Given these annotated slices, our adapted formulation of Hermite radial basis functions reconstructs the 3D surface. Effective interactions with less number of equations accelerate the performance and, therefore, a real-time and an intuitive, interactive segmentation of 3D objects can be supported effectively. The proposed method is evaluated on 12 clinical 3D magnetic resonance imaging data sets and are compared to gold standard annotations of the left ventricle from a clinical expert. The automatic evaluation of the 2D Smart Brush resulted in an average Dice coefficient of 0.88 ± 0.09 for the individual slices. For the 3D interpolation using Hermite radial basis functions, an average Dice coefficient of 0.94 ± 0.02 is achieved. <s> BIB007 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> Automatic segmentation has great potential to facilitate morphological measurements while simultaneously increasing efficiency. Nevertheless often users want to edit the segmentation to their own needs and will need different tools for this. There has been methods developed to edit segmentations of automatic methods based on the user input, primarily for binary segmentations. Here however, we present an unique training strategy for convolutional neural networks (CNNs) trained on top of an automatic method to enable interactive segmentation editing that is not limited to binary segmentation. 
By utilizing a robot-user during training, we closely mimic realistic use cases to achieve optimal editing performance. In addition, we show that an increase of the iterative interactions during the training process up to ten improves the segmentation editing performance substantially. Furthermore, we compare our segmentation editing CNN (interCNN) to state-of-the-art interactive segmentation algorithms and show a superior or on par performance. <s> BIB008 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Refinement <s> An interactive image segmentation algorithm, which accepts user-annotations about a target object and the background, is proposed in this work. We convert user-annotations into interaction maps by measuring distances of each pixel to the annotated locations. Then, we perform the forward pass in a convolutional neural network, which outputs an initial segmentation map. However, the user-annotated locations can be mislabeled in the initial result. Therefore, we develop the backpropagating refinement scheme (BRS), which corrects the mislabeled pixels. Experimental results demonstrate that the proposed algorithm outperforms the conventional algorithms on four challenging datasets. Furthermore, we demonstrate the generality and applicability of BRS in other computer vision tasks, by transforming existing convolutional neural networks into user-interactive ones. <s> BIB009
If we can develop accurate, robust and interpretable models for medical image applications, we still cannot guarantee clinical-grade accuracy for every unseen data point presented to a model. The ability to generalise to unseen input is a cornerstone of deep learning applications, but on real-world distributions generalisation is rarely perfect. As such, methods to rectify these discrepancies must be built into applications used for medical image analysis, and this iterative refinement must save the end user time and mental effort over performing manual annotation. Many interactive image segmentation systems have been proposed, and more recently these have built on advances in deep learning to allow users to refine model outputs and feed the more accurate results back to the model for improvement. BIB003 introduced UI-Net, which builds on the popular U-Net architecture for medical image segmentation BIB001. The UI-Net is trained with an active user model and allows users to interact with proposed segmentations by providing scribbles over the image to indicate areas that should or should not be included; the network is trained using simulated user interactions and as such responds to iterative user scribbles by refining the segmentation towards a more accurate result. Conditional Random Fields have been used in various tasks to encourage segmentation homogeneity. BIB002 propose CRF-RNN, a recurrent neural network which has the desirable properties of both CNNs and CRFs. BIB004 propose DeepIGeoS, an interactive geodesic framework for medical image segmentation. This framework uses two CNNs: the first performs an initial automatic segmentation, and the second takes the initial segmentation as well as user interactions with it to provide a refined result. They combine user interactions with CNNs through geodesic distance transforms BIB005, and these user interactions are integrated as hard constraints into a Conditional Random Field, inspired by BIB002. They call their two networks P-Net (initial segmentation) and R-Net (refinement). They demonstrate superior results for segmentation of the placenta from 2D fetal MRI and of brain tumors from 3D FLAIR images compared to fully automatic CNNs. These segmentation results were also obtained in roughly a third of the time taken to perform the same segmentation with traditional interactive methods such as GeoS or ITK-SNAP. Graph cuts have also been used to incorporate user interaction into segmentation: a user provides seed points to the algorithm (e.g. marking some pixels as foreground and others as background) and the segmentation is calculated from these. BIB006 propose BIFSeg, an interactive segmentation framework inspired by graph cuts. Their work introduces a deep learning framework for interactive segmentation by combining CNNs with a bounding box and scribble-based segmentation pipeline. The user provides a bounding box around the area they are interested in segmenting, which is fed into their CNN to produce an initial segmentation prediction; the user can then provide scribbles to mark areas of the image as mis-classified, and these user inputs are weighted heavily in the calculation of the refined segmentation using their graph-cut-based algorithm.
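The scribble-encoding step shared by DeepIGeoS and related pipelines can be made concrete as follows: scribbles are converted into distance maps and stacked with the image and the initial segmentation as extra input channels for the refinement network. For simplicity, a Euclidean distance transform stands in for the geodesic distances used in BIB004, and the array shapes are illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def scribble_distance_map(scribble_mask):
    """Distance from every pixel to the nearest scribbled pixel (0 on scribbles)."""
    if not scribble_mask.any():
        # no scribble of this type yet: use a large constant map
        return np.full(scribble_mask.shape, float(scribble_mask.size), dtype=np.float32)
    return distance_transform_edt(~scribble_mask).astype(np.float32)

def build_refinement_input(image, initial_seg, fg_scribbles, bg_scribbles):
    """Stack image, initial segmentation and the two scribble distance maps."""
    return np.stack([
        image,
        initial_seg,
        scribble_distance_map(fg_scribbles),
        scribble_distance_map(bg_scribbles),
    ], axis=0)  # shape (4, H, W), fed to the refinement CNN

image = np.random.rand(128, 128).astype(np.float32)
initial_seg = (image > 0.5).astype(np.float32)
fg = np.zeros_like(image, dtype=bool); fg[60:62, 60:62] = True   # "include" scribble
bg = np.zeros_like(image, dtype=bool); bg[5:7, 5:7] = True       # "exclude" scribble
refinement_input = build_refinement_input(image, initial_seg, fg, bg)
```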
BIB008 propose an alternative to BIFSeg in which two networks are trained: one to perform an initial segmentation (they use a CNN, but this initial segmentation could be performed with any existing algorithm) and a second network, which they call interCNN, that takes as input the image, some user scribbles and the initial segmentation prediction and outputs a refined segmentation. They show that over several iterations of user input the quality of the segmentations improves over the initial segmentation, achieving state-of-the-art performance in comparison to other interactive methods. The methods discussed above have so far been concerned with producing segmentations for individual images or slices; however, many segmentation tasks seek to extract the 3D shape/surface of a particular region of interest (ROI). BIB007 propose a dual method for producing segmentations in 3D based on a Smart Brush 2D segmentation that the user guides towards a good 2D segmentation; after a few slices are segmented, these are transformed into a 3D surface shape using Hermite radial basis functions, achieving high accuracy. While this method does not use deep learning, it is a strong example of the way in which interactive segmentation can be used to generate high-quality training data for use in deep learning applications: their approach is general and can produce segmentations for a large number of tasks, and there is potential to incorporate deep learning into their pipeline to improve results and accelerate the interactive annotation process. BIB009 propose an interactive segmentation scheme that generalises to any previously trained segmentation model and accepts user annotations about a target object and the background. User annotations are converted into interaction maps by measuring the distance of each pixel to the annotated landmarks, after which the forward pass outputs an initial segmentation. Because the user-annotated points can be mis-segmented in this initial result, they propose BRS (a back-propagating refinement scheme) that corrects the mis-labelled pixels. They demonstrate that their algorithm outperforms conventional approaches on several datasets and that BRS can generalise to medical image segmentation tasks by transforming existing CNNs into user-interactive versions. In this section we have focused on applications concerned with iteratively refining a segmentation towards a desired quality of output. In the scenarios above this is performed on an unseen image provided by the end user, but the same approach could be taken to generate iteratively more accurate annotations for use in training, e.g., using active learning to select which samples to annotate next and iteratively refining the prediction made by the current model until a sufficiently accurate annotation is curated. This has the potential to accelerate annotation for training without any additional implementation overhead. Much work in AL ignores the role of the oracle and merely assumes that an accurate label can be acquired whenever it is needed, but in practice this presents a more significant challenge. We foresee AL and HITL computing becoming more tightly coupled as AL research improves its consideration of the oracle providing the annotations.
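The robot-user training trick behind interCNN-style refinement can be sketched as below. The scribble policy (a single corrective click placed in the largest error region) and the toy refinement step are simplifying assumptions rather than the exact procedure of BIB008; in practice refine_fn would be a forward pass of a trained refinement network.

```python
import numpy as np
from scipy.ndimage import label

def robot_user_scribble(prediction, ground_truth):
    """Place a corrective click inside the largest mis-segmented region."""
    errors = prediction.astype(bool) ^ ground_truth.astype(bool)
    if not errors.any():
        return None
    regions, _ = label(errors)
    sizes = np.bincount(regions.ravel())[1:]           # region 0 is background
    largest = int(np.argmax(sizes)) + 1
    ys, xs = np.nonzero(regions == largest)
    idx = len(ys) // 2                                  # a pixel inside the region
    return (int(ys[idx]), int(xs[idx])), bool(ground_truth[ys[idx], xs[idx]])

def interactive_training_episode(image, ground_truth, initial_pred, refine_fn,
                                 n_interactions=10):
    """Iteratively refine a prediction using simulated user clicks."""
    pred = initial_pred
    for _ in range(n_interactions):
        scribble = robot_user_scribble(pred, ground_truth)
        if scribble is None:
            break
        pred = refine_fn(image, pred, scribble)
    return pred

# toy demo: the refinement step simply honours the clicked pixel
ground_truth = np.zeros((32, 32), dtype=bool); ground_truth[8:24, 8:24] = True
def toy_refine(image, pred, scribble):
    (y, x), should_be_fg = scribble
    out = pred.copy(); out[y, x] = should_be_fg
    return out
refined = interactive_training_episode(None, ground_truth,
                                        np.zeros_like(ground_truth), toy_refine)
```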
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Noisy Oracles <s> Multi-label active learning is a hot topic in reducing the label cost by optimally choosing the most valuable instance to query its label from an oracle. In this paper, we consider the poolbased multi-label active learning under the crowdsourcing setting, where during the active query process, instead of resorting to a high cost oracle for the ground-truth, multiple low cost imperfect annotators with various expertise are available for labeling. To deal with this problem, we propose the MAC (Multi-label Active learning from Crowds) approach which incorporate the local influence of label correlations to build a probabilistic model over the multi-label classifier and annotators. Based on this model, we can estimate the labels for instances as well as the expertise of each annotator. Then we propose the instance selection and annotator selection criteria that consider the uncertainty/diversity of instances and the reliability of annotators, such that the most reliable annotator will be queried for the most valuable instances. Experimental results demonstrate the effectiveness of the proposed approach. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Noisy Oracles <s> An active learner is given a hypothesis class, a large set of unlabeled examples and the ability to interactively query labels to an oracle of a subset of these examples; the goal of the learner is to learn a hypothesis in the class that fits the data well by making as few label queries as possible. ::: This work addresses active learning with labels obtained from strong and weak labelers, where in addition to the standard active learning setting, we have an extra weak labeler which may occasionally provide incorrect labels. An example is learning to classify medical images where either expensive labels may be obtained from a physician (oracle or strong labeler), or cheaper but occasionally incorrect labels may be obtained from a medical resident (weak labeler). Our goal is to learn a classifier with low error on data labeled by the oracle, while using the weak labeler to reduce the number of label queries made to this labeler. We provide an active learning algorithm for this setting, establish its statistical consistency, and analyze its label complexity to characterize when it can provide label savings over using the strong labeler alone. <s> BIB002 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Noisy Oracles <s> Measuring airways in chest computed tomography (CT) images is important for characterizing diseases such as cystic fibrosis, yet very time-consuming to perform manually. Machine learning algorithms offer an alternative, but need large sets of annotated data to perform well. We investigate whether crowdsourcing can be used to gather airway annotations which can serve directly for measuring the airways, or as training data for the algorithms. We generate image slices at known locations of airways and request untrained crowd workers to outline the airway lumen and airway wall. Our results show that the workers are able to interpret the images, but that the instructions are too complex, leading to many unusable annotations. After excluding unusable annotations, quantitative results show medium to high correlations with expert measurements of the airways. 
Based on this positive experience, we describe a number of further research directions and provide insight into the challenges of crowdsourcing in medical images from the perspective of first-time users. <s> BIB003 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Noisy Oracles <s> Over the last few years, deep learning has revolutionized the field of machine learning by dramatically improving the state-of-the-art in various domains. However, as the size of supervised artificial neural networks grows, typically so does the need for larger labeled datasets. Recently, crowdsourcing has established itself as an efficient and cost-effective solution for labeling large sets of data in a scalable manner, but it often requires aggregating labels from multiple noisy contributors with different levels of expertise. In this paper, we address the problem of learning deep neural networks from crowds. We begin by describing an EM algorithm for jointly learning the parameters of the network and the confusion matrices of the different annotators for classification settings. Then, a novel general-purpose crowd layer is proposed, which allows us to train deep neural networks end-to-end, directly from the noisy labels of multiple annotators, using backpropagation. We empirically show that the proposed approach is able to internally capture the reliability and biases of different annotators and achieve new state-of-the-art results for various crowdsourced datasets across different settings, namely classification, regression and sequence labeling. <s> BIB004
Gold-standard annotations for medical image data are acquired by aggregating annotations from multiple expert oracles but, as previously discussed, this is rarely feasible for large complex datasets due to the expertise required to perform such annotations. Here we ask what effect on performance we might incur if we acquire labels from oracles without domain expertise, and what techniques can be used to mitigate the expected degradation of annotation quality when using non-expert oracles, so as to avoid any potential loss in accuracy. BIB001 and BIB002 propose active learning methods that assume data will be annotated by a crowd of non-expert or 'weak' annotators, and offer approaches to mitigate the introduction of bad labels into the dataset. They simultaneously learn about the quality of individual annotators so that the most informative examples can be labelled by the strongest annotators. BIB003 explore using Amazon's MTurk to gather annotations of airways in CT images. Results showed that the novice oracles were able to interpret the images, but that the instructions provided were too complex, leading to many unusable annotations. Once the bad annotations were removed, the remaining annotations showed medium to high correlation with expert annotations, especially when aggregated. BIB004 describe an approach to assess the reliability of annotators in a crowd, along with a crowd layer used to train deep models from the noisy labels of multiple annotators, internally capturing the reliability and biases of different annotators to achieve state-of-the-art results on several crowdsourced datasets. We can see that by using a learned model of oracle annotation quality we can mitigate the effects of low-quality annotations and present the most challenging cases to the most capable oracles. By providing clear instructions we can lower the barriers for non-expert oracles to perform accurate annotation, but such instructions are not generalisable and would be required for every new annotation task.
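A crowd layer of the kind described in BIB004 can be sketched as a per-annotator matrix applied on top of the shared network output, so that annotator bias is absorbed by the crowd layer rather than the shared backbone. The toy backbone, shapes and loss handling below are our own simplifications, not the exact formulation of the paper.

```python
import torch
import torch.nn as nn

class CrowdLayer(nn.Module):
    """One learnable confusion-style matrix per annotator, initialised to identity."""
    def __init__(self, n_classes, n_annotators):
        super().__init__()
        self.weights = nn.Parameter(torch.eye(n_classes).repeat(n_annotators, 1, 1))

    def forward(self, class_probs, annotator_ids):
        mats = self.weights[annotator_ids]                        # (B, C, C)
        return torch.bmm(mats, class_probs.unsqueeze(-1)).squeeze(-1)

n_classes, n_annotators = 4, 7
backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                         nn.Linear(32, n_classes), nn.Softmax(dim=-1))
crowd = CrowdLayer(n_classes, n_annotators)

x = torch.randn(8, 16)                       # a batch of image features
who = torch.randint(0, n_annotators, (8,))   # which annotator labelled each sample
noisy_labels = torch.randint(0, n_classes, (8,))

annotator_probs = crowd(backbone(x), who)    # annotator-specific predictions
log_probs = torch.log(annotator_probs.clamp_min(1e-6))
loss = nn.functional.nll_loss(log_probs, noisy_labels)
loss.backward()                              # gradients flow to backbone and crowd layer
```

At test time the crowd layer is discarded and the backbone's output is used directly as the "clean" prediction.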
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Alternative Query Types <s> The availability of training data for supervision is a frequently encountered bottleneck of medical image analysis methods. While typically established by a clinical expert rater, the increase in acquired imaging data renders traditional pixel-wise segmentations less feasible. In this paper, we examine the use of a crowdsourcing platform for the distribution of super-pixel weak annotation tasks and collect such annotations from a crowd of non-expert raters. The crowd annotations are subsequently used for training a fully convolutional neural network to address the problem of fetal brain segmentation in T2-weighted MR images. Using this approach we report encouraging results compared to highly targeted, fully supervised methods and potentially address a frequent problem impeding image analysis research. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Alternative Query Types <s> In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled with bounding box annotations. It extends the approach of the well-known GrabCut method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare those to a naive approach to CNN training under weak supervision. We test its applicability to solve brain and lung segmentation problems on a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy. <s> BIB002 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Alternative Query Types <s> To efficiently establish training databases for machine learning methods, collaborative and crowdsourcing platforms have been investigated to collectively tackle the annotation effort. However, when this concept is ported to the medical imaging domain, reading expertise will have a direct impact on the annotation accuracy. In this study, we examine the impact of expertise and the amount of available annotations on the accuracy outcome of a liver segmentation problem in an abdominal computed tomography (CT) image database. In controlled experiments, we study this impact for different types of weak annotations. To address the decrease in accuracy associated with lower expertise, we propose a method for outlier correction making use of a weakly labelled atlas. Using this approach, we demonstrate that weak annotations subject to high error rates can achieve a similarly high accuracy as state-of-the-art multi-atlas segmentation approaches relying on a large amount of expert manual segmentations. Annotations of this nature can realistically be obtained from a non-expert crowd and can potentially enable crowdsourcing of weak annotation tasks for medical image analysis. <s> BIB003
Most segmentation tasks require pixel-wise annotations, but these are not the only type of annotation we can give an image. Segmentation can be performed with 'weak' annotations, which include image-level labels (e.g. modality, organs present) and annotations such as bounding boxes, ellipses or scribbles. It is argued that using 'weaker' annotation formulations can make the task easier for the human oracle, leading to more accurate annotations. 'Weak' annotations have been shown to perform well in several segmentation tasks: BIB002 demonstrate obtaining pixel-wise segmentations given a dataset of images with 'weak' bounding box annotations. They propose DeepCut, an architecture that combines a CNN with an iterative dense CRF formulation to achieve good accuracy while greatly reducing the annotation effort required. In a later study, BIB003 examine the impact of the expertise required for different 'weak' annotation types on the accuracy of liver segmentations. The results showed a decrease in accuracy with less expertise, as expected, across all annotation types. Despite this, segmentation accuracy was comparable to state-of-the-art performance when a weakly labelled atlas was used for outlier correction. The robust performance of their approach suggests 'weak' annotations from non-expert crowds could be used to obtain accurate segmentations for many different tasks; however, their use of an atlas makes the approach less generalisable than is desired. BIB001 examine using superpixels to accelerate the annotation process. This approach uses a pre-processing step to acquire a superpixel segmentation of each image, and non-experts then perform the annotation by selecting which superpixels are part of the target region. Results showed that the approach largely reduces the annotation load on users: non-expert annotation of 5000 slices was completed in under an hour by 12 annotators, compared to an expert taking three working days to establish the same with an advanced interface. The non-expert interface is web-based, demonstrating the potential of distributed annotation collection and crowd-sourcing. An encouraging aspect of this paper is that the results showed high performance on the segmentation task in question compared with expert annotation performance, although the approach may not be suitable for all medical image analysis tasks. It has been shown that we can develop high-performing models using weakly annotated data, and as weak annotations require less expertise to perform, they can be acquired faster and from a non-expert crowd with a smaller loss in accuracy than gold-standard annotations. This is very promising for future research, as datasets of weakly annotated data may be much easier and more cost-effective to curate.
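A minimal sketch of the superpixel-based weak annotation idea from BIB001 is shown below: the image is over-segmented into superpixels and the annotator only selects which superpixel ids belong to the target region, which is then expanded to a dense mask. The superpixel parameters and clicked ids are illustrative, and channel_axis=None assumes scikit-image 0.19 or newer (older releases use multichannel=False).

```python
import numpy as np
from skimage.data import camera
from skimage.segmentation import slic

image = camera().astype(np.float32) / 255.0                 # toy grayscale image
superpixels = slic(image, n_segments=200, compactness=10, channel_axis=None)

# in a real interface these ids would come from the annotator's clicks
selected_ids = {12, 13, 27}

dense_mask = np.isin(superpixels, list(selected_ids))       # pixel-wise weak label
```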
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Annotation Interface <s> Motion detection by the retina is thought to rely largely on the biophysics of starburst amacrine cell dendrites; here machine learning is used with gamified crowdsourcing to draw the wiring diagram involving amacrine and bipolar cells to identify a plausible circuit mechanism for direction selectivity; the model suggests similarities between mammalian and insect vision. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Annotation Interface <s> Recent advances in microscopy imaging and genomics have created an explosion of patient data in the pathology domain. Whole-slide images (WSIs) of tissues can now capture disease processes as they unfold in high resolution, recording the visual cues that have been the basis of pathologic diagnosis for over a century. Each WSI contains billions of pixels and up to a million or more microanatomic objects whose appearances hold important prognostic information. Computational image analysis enables the mining of massive WSI datasets to extract quantitative morphologic features describing the visual qualities of patient tissues. When combined with genomic and clinical variables, this quantitative information provides scientists and clinicians with insights into disease biology and patient outcomes. To facilitate interaction with this rich resource, we have developed a web-based machine-learning framework that enables users to rapidly build classifiers using an intuitive active learning process that minimizes data labeling effort. In this paper we describe the architecture and design of this system, and demonstrate its effectiveness through quantification of glioma brain tumors. <s> BIB002 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Annotation Interface <s> In this study, we developed a novel system, called Gaze2Segment, integrating biological and computer vision techniques to support radiologists’ reading experience with an automatic image segmentation task. During diagnostic assessment of lung CT scans, the radiologists’ gaze information were used to create a visual attention map. Next, this map was combined with a computer-derived saliency map, extracted from the gray-scale CT images. The visual attention map was used as an input for indicating roughly the location of a region of interest. With computer-derived saliency information, on the other hand, we aimed at finding foreground and background cues for the object of interest found in the previous step. These cues are used to initiate a seed-based delineation process. The proposed Gaze2Segment achieved a dice similarity coefficient of 86% and Hausdorff distance of 1.45 mm as a segmentation accuracy. To the best of our knowledge, Gaze2Segment is the first true integration of eye-tracking technology into a medical image segmentation task without the need for any further user-interaction. <s> BIB003 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Annotation Interface <s> Deep learning with convolutional neural networks (CNNs) has experienced tremendous growth in multiple healthcare applications and has been shown to have high accuracy in semantic segmentation of medical (e.g., radiology and pathology) images. 
However, a key barrier in the required training of CNNs is obtaining large-scale and precisely annotated imaging data. We sought to address the lack of annotated data with eye tracking technology. As a proof of principle, our hypothesis was that segmentation masks generated with the help of eye tracking (ET) would be very similar to those rendered by hand annotation (HA). Additionally, our goal was to show that a CNN trained on ET masks would be equivalent to one trained on HA masks, the latter being the current standard approach. Step 1: Screen captures of 19 publicly available radiologic images of assorted structures within various modalities were analyzed. ET and HA masks for all regions of interest (ROIs) were generated from these image datasets. Step 2: Utilizing a similar approach, ET and HA masks for 356 publicly available T1-weighted postcontrast meningioma images were generated. Three hundred six of these image + mask pairs were used to train a CNN with U-net-based architecture. The remaining 50 images were used as the independent test set. Step 1: ET and HA masks for the nonneurological images had an average Dice similarity coefficient (DSC) of 0.86 between each other. Step 2: Meningioma ET and HA masks had an average DSC of 0.85 between each other. After separate training using both approaches, the ET approach performed virtually identically to HA on the test set of 50 images. The former had an area under the curve (AUC) of 0.88, while the latter had AUC of 0.87. ET and HA predictions had trimmed mean DSCs compared to the original HA maps of 0.73 and 0.74, respectively. These trimmed DSCs between ET and HA were found to be statistically equivalent with a p value of 0.015. We have demonstrated that ET can create segmentation masks suitable for deep learning semantic segmentation. Future work will integrate ET to produce masks in a faster, more natural manner that distracts less from typical radiology clinical workflow. <s> BIB004
So far the majority of Human-in-the-Loop methods assume a significant level of interaction from an oracle to annotate data and model predictions, but few consider the nature of the interface through which an oracle might interact with these images. The nature of medical images requires special attention when proposing distributed online platforms to perform such annotations. While the majority of techniques discussed so far have used pre-existing data labels in place of newly acquired ones to demonstrate their performance, it is important to consider the effect the annotation interface itself has on annotation accuracy. BIB002 propose a framework for the online classification of whole-slide images (WSIs) of tissues. Their interface enables users to rapidly build classifiers using an active learning process that minimises labelling effort, and they demonstrate the effectiveness of their solution for the quantification of glioma brain tumours. BIB003 propose a novel interface for the segmentation of images that tracks the user's gaze to initiate seed points for the segmentation of the object of interest as the only means of interaction with the image, achieving high segmentation performance. BIB004 extend this idea and compare eye-tracking-generated training samples to traditional hand-annotated training samples for training a DL model. They show that almost equivalent performance was achieved using annotations generated through eye tracking, and suggest that this approach might be applicable for rapidly generating training data, while acknowledging that improvements are still needed to integrate eye tracking into typical clinical radiology workflow in a faster, more natural and less distracting way. evaluate the player motivations behind EyeWire, an online game that asks a crowd of players to help segment neurons in a mouse brain. The gamification of this task has seen over 500,000 players sign up, and the segmentations acquired have gone on to be used in several research works BIB001. One of the most exciting findings is that, when surveyed, users were motivated most by making a scientific contribution rather than by any potential monetary reward; however, this approach is very specialised towards this particular task and would be difficult to apply across other types of medical image analysis task. There are many different approaches to developing annotation interfaces, and the ones considered above are just a few that have been applied to medical image analysis. As development continues we expect to see more online tools being used for medical image analysis, and the chosen format of the interface will play a large part in the usability and overall success of these applications.
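As a rough sketch of how gaze data can drive seed-based segmentation in the spirit of BIB003, the snippet below smooths fixation points into an attention map, takes its peak as a seed, and grows a simple intensity-based region from it. The fixation data, smoothing and region-growing rule are illustrative stand-ins for the saliency-guided delineation used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def seed_from_gaze(fixations, shape, sigma=5.0):
    """Accumulate (y, x) fixations into a smoothed attention map and take its peak."""
    attention = np.zeros(shape, dtype=np.float32)
    for y, x in fixations:
        attention[y, x] += 1.0
    attention = gaussian_filter(attention, sigma)
    return np.unravel_index(np.argmax(attention), shape)

def grow_region(image, seed, tolerance=0.1):
    """Connected component of pixels within `tolerance` of the seed intensity."""
    close = np.abs(image - image[seed]) <= tolerance
    regions, _ = label(close)
    return regions == regions[seed]

image = np.random.rand(64, 64).astype(np.float32)
fixations = [(30, 30), (31, 29), (32, 31)]            # toy gaze samples
seed = seed_from_gaze(fixations, image.shape)
mask = grow_region(image, seed)
```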
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Variable Learning Costs <s> Deep learning for clinical applications is subject to stringent performance requirements, which raises a need for large labeled datasets. However, the enormous cost of labeling medical data makes this challenging. In this paper, we build a cost-sensitive active learning system for the problem of intracranial hemorrhage detection and segmentation on head computed tomography (CT). We show that our ensemble method compares favorably with the state-of-the-art, while running faster and using less memory. Moreover, our experiments are done using a substantially larger dataset than earlier papers on this topic. Since the labeling time could vary tremendously across examples, we model the labeling time and optimize the return on investment. We validate this idea by core-set selection on our large labeled dataset and by growing it with data from the wild. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Variable Learning Costs <s> For medical image segmentation, most fully convolutional networks (FCNs) need strong supervision through a large sample of high-quality dense segmentations, which is taxing in terms of costs, time and logistics involved. This burden of annotation can be alleviated by exploiting weak inexpensive annotations such as bounding boxes and anatomical landmarks. However, it is very difficult to \textit{a priori} estimate the optimal balance between the number of annotations needed for each supervision type that leads to maximum performance with the least annotation cost. To optimize this cost-performance trade off, we present a budget-based cost-minimization framework in a mixed-supervision setting via dense segmentations, bounding boxes, and landmarks. We propose a linear programming (LP) formulation combined with uncertainty and similarity based ranking strategy to judiciously select samples to be annotated next for optimal performance. In the results section, we show that our proposed method achieves comparable performance to state-of-the-art approaches with significantly reduced cost of annotations. <s> BIB002
When acquiring training data from various types of oracle, it is worth considering the relative cost of querying a particular oracle type for an annotation. We may wish to acquire more accurate labels from an expert oracle, but these are likely more expensive to obtain than labels from a non-expert oracle. The trade-off, of course, is the accuracy of the obtained label: less oracle expertise will likely result in a lower quality of annotation. Several methods have been proposed to model this and allow developers to trade off cost against the overall accuracy of the acquired annotations. BIB001 propose a cost-sensitive active learning approach for intracranial haemorrhage detection. Since annotation time may vary significantly across examples, they model the annotation time and optimise the return on investment. They show their approach selects a diverse and meaningful set of samples to be annotated, relative to a uniform cost model, which mostly selects samples with massive bleeds that are time-consuming to annotate. BIB002 propose a budget-based cost-minimisation framework in a mixed-supervision setting (strong and weak annotations) via dense segmentations, bounding boxes and landmarks. Their framework uses an uncertainty and a representativeness ranking strategy to select the samples to be annotated next. They demonstrate state-of-the-art performance at a significantly reduced training budget, highlighting the important role the choice of annotation type plays in the cost of acquiring training data. These works each show an improved consideration of the economic burden incurred when curating training data. A valuable research direction would be to assess the effects of oracle expertise level, annotation type and image annotation cost in a unified framework, as these three factors are closely linked and may have a profound influence on each other.
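A return-on-investment style selection of the kind described above can be sketched as follows: samples are ranked by predicted information gain per unit of estimated annotation time, and a batch is filled greedily within a time budget. The entropy-based informativeness and the per-sample time estimates are illustrative placeholders for the learned models used in BIB001.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the model's class probabilities, used as an informativeness score."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

def select_batch(pool_probs, estimated_annotation_minutes, budget_minutes):
    """Greedily fill an annotation budget by return on investment."""
    gain = predictive_entropy(pool_probs)
    roi = gain / np.maximum(estimated_annotation_minutes, 1e-3)
    order = np.argsort(-roi)
    chosen, spent = [], 0.0
    for i in order:
        if spent + estimated_annotation_minutes[i] > budget_minutes:
            continue
        chosen.append(int(i))
        spent += estimated_annotation_minutes[i]
    return chosen, spent

pool_probs = np.random.dirichlet(np.ones(3), size=100)    # toy model outputs
est_minutes = np.random.uniform(1, 30, size=100)          # toy annotation-cost model
batch, minutes_used = select_batch(pool_probs, est_minutes, budget_minutes=120)
```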
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Semi-supervised Learning <s> Abstract Machine learning (ML) algorithms have made a tremendous impact in the field of medical imaging. While medical imaging datasets have been growing in size, a challenge for supervised ML algorithms that is frequently mentioned is the lack of annotated data. As a result, various methods that can learn with less/other types of supervision, have been proposed. We give an overview of semi-supervised, multiple instance, and transfer learning in medical imaging, both in diagnosis or segmentation tasks. We also discuss connections between these learning scenarios, and opportunities for future research. A dataset with the details of the surveyed papers is available via https://figshare.com/articles/Database_of_surveyed_literature_in_Not-so-supervised_a_survey_of_semi-supervised_multi-instance_and_transfer_learning_in_medical_image_analysis_/7479416 . <s> BIB001
In the presence of large datasets but the absence of labels, unsupervised and semi-supervised approaches offer a means by which information can be extracted without requiring labels for all data points. This could have a substantial impact on the medical image analysis field, where this is often the case. In a semi-supervised learning (SSL) scenario we may have some labelled data, but this is often very limited; we do, however, have a large set of un-annotated instances (much as in active learning) to draw information from, the goal being to improve a model (trained only on the labelled instances) using the un-labelled instances. From this we derive two distinct goals: a) predicting labels for future data (inductive SSL) and b) predicting labels for the available un-annotated data (transductive SSL) BIB001. A popular family of SSL approaches employs a technique called self-training, where a classifier is first trained using only the labelled data; following training, inference is performed on the unlabelled instances, and a decision is made about each of the new annotations as to whether it should be included in the training set in the next iteration. One proposed approach to making this decision is the use of an oracle to decide whether the label is accurate enough for use during training, guiding this towards an active learning approach. Self-training is popular in many segmentation tasks, but less so for detection and diagnosis applications BIB001. It has been shown that increasing the number of samples improves performance, but that the advantages of SSL methods decrease as more labelled data are acquired. SSL methods provide a powerful way of extracting useful information from unannotated image data, and we believe that progress in this area will benefit AL systems that require a more accurate initial model to guide data selection strategies.
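The self-training loop described above can be written down in a few lines. The toy classifier, data and the 0.95 confidence threshold are illustrative; in an oracle-in-the-loop variant, the thresholding step would be replaced by a human check of each proposed pseudo-label.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
labelled = np.arange(50)                   # small initial labelled pool
unlabelled = np.arange(50, 600)
targets = y.copy()                         # in practice only targets[labelled] is known

model = LogisticRegression(max_iter=1000)
for _ in range(5):                         # self-training rounds
    model.fit(X[labelled], targets[labelled])
    probs = model.predict_proba(X[unlabelled])
    confident = probs.max(axis=1) > 0.95   # an oracle could vet these labels instead
    if not confident.any():
        break
    new_idx = unlabelled[confident]
    targets[new_idx] = probs[confident].argmax(axis=1)   # accept pseudo-labels
    labelled = np.concatenate([labelled, new_idx])
    unlabelled = unlabelled[~confident]
```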
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Generative Adversarial Networks <s> Abstract Generative adversarial networks have gained a lot of attention in the computer vision community due to their capability of data generation without explicitly modelling the probability density function. The adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into training and imposing higher order consistency. This has proven to be useful in many cases, such as domain adaptation, data augmentation, and image-to-image translation. These properties have attracted researchers in the medical imaging community, and we have seen rapid adoption in many traditional and novel applications, such as image reconstruction, segmentation, detection, classification, and cross-modality synthesis. Based on our observations, this trend will continue and we therefore conducted a review of recent advances in medical imaging using the adversarial training scheme with the hope of benefiting researchers interested in this technique. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Generative Adversarial Networks <s> Training robust deep learning (DL) systems for medical image classification or segmentation is challenging due to limited images covering different disease types and severity. We propose an active learning (AL) framework to select most informative samples and add to the training data. We use conditional generative adversarial networks (cGANs) to generate realistic chest xray images with different disease characteristics by conditioning its generation on a real image sample. Informative samples to add to the training set are identified using a Bayesian neural network. Experiments show our proposed AL framework is able to achieve state of the art performance by using about \(35\%\) of the full dataset, thus saving significant time and effort over conventional methods. <s> BIB002 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Generative Adversarial Networks <s> Image segmentation is an important task in many medical applications. Methods based on convolutional neural networks attain state-of-the-art accuracy; however, they typically rely on supervised training with large labeled datasets. Labeling medical images requires significant expertise and time, and typical hand-tuned approaches for data augmentation fail to capture the complex variations in such images. ::: We present an automated data augmentation method for synthesizing labeled medical images. We demonstrate our method on the task of segmenting magnetic resonance imaging (MRI) brain scans. Our method requires only a single segmented scan, and leverages other unlabeled scans in a semi-supervised approach. We learn a model of transformations from the images, and use the model along with the labeled example to synthesize additional labeled examples. Each transformation is comprised of a spatial deformation field and an intensity change, enabling the synthesis of complex effects such as variations in anatomy and image acquisition procedures. We show that training a supervised segmenter with these new examples provides significant improvements over state-of-the-art methods for one-shot biomedical image segmentation. Our code is available at this https URL. <s> BIB003
Generative Adversarial Network (GAN) based methods have been applied to several areas of medical imaging such as denoising, modality transfer and abnormality detection, but more relevant to AL has been the use of GANs for image synthesis, which offers an alternative (or addition) to the many data augmentation techniques used to expand limited data-sets BIB001 . One proposed approach is a conditional GAN (cGAN) based method for active learning in which the discriminator D output is used as a measure of uncertainty of the proposed segmentations, and this metric is used to rank samples from the unlabelled data-set. From this ranking the most uncertain samples are presented to an oracle for segmentation, and the least uncertain images are included in the labelled data-set as pseudo ground truth labels. The authors show that the accuracy of their method increases as the percentage of interactively annotated samples increases, reaching the performance of fully supervised benchmark methods using only 80% of the labels. This work also motivates the use of GAN discriminator scores as a measure of prediction uncertainty. BIB002 also use a cGAN to generate chest X-Ray images conditioned on a real image and, using a Bayesian neural network to assess the informativeness of each generated sample, decide whether each generated sample should be used as training data. If so, it is used to fine-tune the network. They demonstrate that the approach can achieve comparable performance to training on the fully annotated data, using a dataset where only 33% of the pixels in the training set are annotated, offering a huge saving of time, effort and costs for annotators. BIB003 present an alternative method of data synthesis to GANs through the use of learned transformations. From a single manually segmented image, they leverage other un-annotated images in an SSL-like approach to learn a transformation model from the images, and use the model along with the labelled data to synthesise additional annotated samples. Transformations consist of spatial deformations and intensity changes to enable the synthesis of complex effects such as anatomical and image acquisition variations. They train a model in a supervised way for the segmentation of MRI brain images and show state-of-the-art improvements over other one-shot bio-medical image segmentation methods. The above works demonstrate the power of using synthetic data conditioned on a very small amount of annotated data to generate new training samples that can be used to train a model to a high accuracy; this is of great value to AL methods, where we usually require an initial training set to train a model on before we can employ a data selection policy. These methods also demonstrate the efficient use of labelled data and allow us to generate multiple training samples from an individually annotated image; this may allow the annotated data obtained in AL/Human-in-the-Loop methods to be used more effectively through generating multiple training samples for a single requested annotation, further reducing the annotation effort required to train state-of-the-art models.
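As an illustration of the discriminator-score idea discussed above, the following Python/PyTorch sketch ranks a pool of candidate segmentations by the discriminator output; the discriminator network and the batches of image and predicted-mask tensors are hypothetical placeholders, not the architecture or data of any cited paper.

import torch

@torch.no_grad()
def rank_by_discriminator(discriminator, images, pred_masks):
    # Score each (image, predicted mask) pair with the discriminator.
    # A low score is treated here as "the mask looks unrealistic", i.e. uncertain.
    discriminator.eval()
    inputs = torch.cat([images, pred_masks], dim=1)    # condition on the image (channel-wise)
    scores = discriminator(inputs).view(len(images))   # one realism score per sample (assumed shape)
    order = torch.argsort(scores)                      # ascending: most uncertain first
    return order, scores

# Hypothetical usage: send the most uncertain samples to the oracle,
# keep the most confident ones as pseudo ground truth.
# order, scores = rank_by_discriminator(D, images, masks)
# to_oracle     = order[:k]
# pseudo_labels = order[-k:]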
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Transfer Learning <s> Training a deep convolutional neural network (CNN) from scratch is difficult because it requires a large amount of labeled training data and a great deal of expertise to ensure proper convergence. A promising alternative is to fine-tune a CNN that has been pre-trained using, for instance, a large set of labeled natural images. However, the substantial differences between natural and medical images may advise against such knowledge transfer. In this paper, we seek to answer the following central question in the context of medical image analysis: Can the use of pre-trained deep CNNs with sufficient fine-tuning eliminate the need for training a deep CNN from scratch? To address this question, we considered four distinct medical imaging applications in three specialties (radiology, cardiology, and gastroenterology) involving classification, detection, and segmentation from three different imaging modalities, and investigated how the performance of deep CNNs trained from scratch compared with the pre-trained CNNs fine-tuned in a layer-wise manner. Our experiments consistently demonstrated that 1) the use of a pre-trained CNN with adequate fine-tuning outperformed or, in the worst case, performed as well as a CNN trained from scratch; 2) fine-tuned CNNs were more robust to the size of training sets than CNNs trained from scratch; 3) neither shallow tuning nor deep tuning was the optimal choice for a particular application; and 4) our layer-wise fine-tuning scheme could offer a practical way to reach the best performance for the application at hand based on the amount of available data. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Transfer Learning <s> Cardiovascular disease (CVD) is the number one killer in the USA, yet it is largely preventable (World Health Organization 2011). To prevent CVD, carotid intima-media thickness (CIMT) imaging, a noninvasive ultrasonography method, has proven to be clinically valuable in identifying at-risk persons before adverse events. Researchers are developing systems to automate CIMT video interpretation based on deep learning, but such efforts are impeded by the lack of large annotated CIMT video datasets. CIMT video annotation is not only tedious, laborious, and time consuming, but also demanding of costly, specialty-oriented knowledge and skills, which are not easily accessible. To dramatically reduce the cost of CIMT video annotation, this paper makes three main contributions. Our first contribution is a new concept, called Annotation Unit (AU), which simplifies the entire CIMT video annotation process down to six simple mouse clicks. Our second contribution is a new algorithm, called AFT (active fine-tuning), which naturally integrates active learning and transfer learning (fine-tuning) into a single framework. AFT starts directly with a pre-trained convolutional neural network (CNN), focuses on selecting the most informative and representative AU s from the unannotated pool for annotation, and then fine-tunes the CNN by incorporating newly annotated AU s in each iteration to enhance the CNN’s performance gradually. 
Our third contribution is a systematic evaluation, which shows that, in comparison with the state-of-the-art method (Tajbakhsh et al., IEEE Trans Med Imaging 35(5):1299–1312, 2016), our method can cut the annotation cost by >81% relative to their training from scratch and >50% relative to their random selection. This performance is attributed to the several advantages derived from the advanced active, continuous learning capability of our AFT method. <s> BIB002 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Transfer Learning <s> In recent years, some convolutional neural networks (CNNs) have been proposed to segment sub-cortical brain structures from magnetic resonance images (MRIs). Although these methods provide accurate segmentation, there is a reproducibility issue regarding segmenting MRI volumes from different image domains – e.g., differences in protocol, scanner, and intensity profile. Thus, the network must be retrained from scratch to perform similarly in different imaging domains, limiting the applicability of such methods in clinical settings. In this paper, we employ the transfer learning strategy to solve the domain shift problem. We reduced the number of training images by leveraging the knowledge obtained by a pretrained network, and improved the training speed by reducing the number of trainable parameters of the CNN. We tested our method on two publicly available datasets – MICCAI 2012 and IBSR – and compared them with a commonly used approach: FIRST. Our method showed similar results to those obtained by a fully trained CNN, and our method used a remarkably smaller number of images from the target domain. Moreover, training the network with only one image from MICCAI 2012 and three images from IBSR datasets was sufficient to significantly outperform FIRST with (p < 0.001) and (p < 0.05), respectively. <s> BIB003
Transfer Learning (TL) and domain adaptation are branches of DL that aim to use pre-trained networks as a starting point for new applications. Given a pre-trained network trained for a particular task, it has been shown that this network can be 'fine-tuned' towards a target task from limited training data. BIB001 demonstrated the applicability of TL to a variety of medical image analysis tasks and showed that, despite the large differences between natural images and medical images, CNNs pre-trained on natural images and fine-tuned on medical images can perform better than medical CNNs trained from scratch. This performance boost was greater where fewer target-task training examples were available. Many of the methods discussed so far start with a network pre-trained on natural image data. One proposed platform, AFT*, combines AL and TL to reduce annotation efforts and aims at solving several problems within AL. AFT* starts with a completely empty labelled data-set, requiring no seed samples. A pre-trained CNN is used to seek 'worthy' samples for annotation and to gradually enhance the CNN via continuous fine-tuning. A number of steps are taken to minimise the risk of catastrophic forgetting. Earlier work by the same authors applies a similar but less fully featured approach to several medical image analysis tasks to demonstrate that equivalent performance can be reached with a heavily reduced training data-set. They then use these tasks to evaluate several patterns of prediction that the network exhibits and how these relate to the choice of AL selection criteria. BIB002 have gone on to use their AFT framework for the annotation of CIMT videos; CIMT is a clinical technique for the characterisation of cardiovascular disease. Their extension into the video domain presents its own unique challenges, and thus they propose the new concept of an Annotation Unit, reducing the annotation of a CIMT video to just 6 user mouse clicks; by combining this with their AFT framework they reduce annotation cost by 80% relative to training from scratch and by 50% relative to random selection of new samples to be annotated (and used for fine-tuning). BIB003 use TL for supervised domain adaptation for sub-cortical brain structure segmentation with minimal user interaction. They significantly reduce the number of training images needed from different MRI imaging domains by leveraging a pre-trained network, and improve training speed by reducing the number of trainable parameters in the CNN. They show that their method achieves similar results to their baseline while using a remarkably small number of images from the target domain, and that even one image from the target domain was enough to outperform their baseline. The above methods, and others discussed in this review, demonstrate the applicability of TL to reducing the number of annotated samples required to train a model on a new task from limited training data. By using pre-trained networks trained on annotated natural image data (of which there is an abundance) we can boost model performance and further reduce the annotation effort required to achieve state-of-the-art performance.
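The fine-tuning strategy described above can be sketched in a few lines of PyTorch; the choice of ResNet-18, the ImageNet weights, the single-layer head replacement and the assumption of a recent torchvision version are illustrative simplifications rather than the exact recipe of any cited work.

import torch
import torch.nn as nn
from torchvision import models

def build_finetune_model(num_classes, freeze_backbone=True):
    # Start from a CNN pre-trained on natural images (ImageNet).
    model = models.resnet18(weights="IMAGENET1K_V1")
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False            # keep the pre-trained features fixed
    # Replace the classification head for the medical target task.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_finetune_model(num_classes=2)
# Only the trainable parameters (here, the new head) are passed to the optimiser;
# "deeper" fine-tuning would simply unfreeze more of the backbone.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)

Layer-wise fine-tuning, as investigated in BIB001 , then corresponds to progressively unfreezing deeper blocks of the backbone rather than training only the new head.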
A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Continual Lifelong Learning and Catastrophic Forgetting <s> This work investigates continual learning of two segmentation tasks in brain MRI with neural networks. To explore in this context the capabilities of current methods for countering catastrophic forgetting of the first task when a new one is learned, we investigate elastic weight consolidation, a recently proposed method based on Fisher information, originally evaluated on reinforcement learning of Atari games. We use it to sequentially learn segmentation of normal brain structures and then segmentation of white matter lesions. Our findings show this recent method reduces catastrophic forgetting, while large room for improvement exists in these challenging settings for continual learning. <s> BIB001 </s> A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis <s> Continual Lifelong Learning and Catastrophic Forgetting <s> Abstract Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for computational learning systems and autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. Although significant advances have been made in domain-specific learning with neural networks, extensive research efforts are required for the development of robust lifelong learning on autonomous agents and robots. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration. <s> BIB002
In many of the scenarios described in this review, models continuously receive new annotations to be used for training, and in theory we could continue to retrain or fine-tune a model indefinitely, but is this practical and cost-effective? It is important to quantify the long-term effects of training a model with new data to assess how the model changes over time and whether performance has improved or, worse, declined. Learning from continuous streams of data has proven more difficult than anticipated, often resulting in 'catastrophic forgetting' or 'interference' BIB002 . We face the stability-plasticity dilemma. Approaches to avoiding catastrophic forgetting in neural networks when learning from continuous streams of data can be broadly divided among three conceptual strategies: a) retraining the whole network while regularising (to prevent forgetting of previously learned tasks); b) selectively training the network and expanding it if needed to represent new tasks; and c) retaining previous experience to use memory replay to learn in the absence of new input. We refer the reader to BIB002 for a more detailed overview of these approaches. BIB001 investigate continual learning of two MRI segmentation tasks with neural networks, countering catastrophic forgetting of the first task when a new one is learned. They investigate elastic weight consolidation, a method based on Fisher information, to sequentially learn segmentation of normal brain structures and then segmentation of white matter lesions, and demonstrate that this method reduces catastrophic forgetting, but acknowledge there is considerable room for improvement in the challenging setting of continual learning. It is important to quantify the performance and robustness of a model at every stage of its lifespan. One stopping criterion could be the point at which the cost of continued training outweighs the cost of errors made by the current model. An existing measure that attempts to quantify the economic value of medical intervention is the Quality-Adjusted Life Year (QALY), where one QALY equates to one year of healthy life NICE (2013) . Could this metric be incorporated into models? At present we cannot quantify the cost of errors made by DL medical imaging applications, but doing so could lead to a deeper understanding of how accurate a DL model really ought to be. As models are trained on more of the end users' own data, will this cause the network to perform better on data from that user's system despite performing worse on data the model was initially trained on? Catastrophic forgetting suggests this will be the case, but is this a bad thing? It may be beneficial for models to gradually bias themselves towards high performance on the end users' own data, even if this results in the model becoming less transferable to other data.
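For concreteness, the elastic weight consolidation (EWC) idea mentioned above amounts to adding a quadratic penalty to the loss of the new task, L(θ) = L_new(θ) + (λ/2) Σ_i F_i (θ_i − θ_i*)^2, where F is a diagonal Fisher information estimate and θ* are the parameters after the old task. The PyTorch sketch below is a generic illustration of that penalty; the Fisher estimates and the λ value are assumed inputs, not values from the cited study.

import torch

def ewc_penalty(model, fisher, old_params, lam):
    # fisher[name]     : diagonal Fisher information estimated after the old task
    # old_params[name] : parameter values frozen after the old task
    # The penalty discourages parameters that were important for the old task
    # from drifting while the model is trained on the new task.
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (p - old_params[name]).pow(2)).sum()
    return 0.5 * lam * penalty

# During training on the new task (hypothetical training loop):
# loss = task_loss(output, target) + ewc_penalty(model, fisher, old_params, lam=100.0)
# loss.backward()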
Novel Testing Tools for a Cloud Computing Environment- A Review <s> INTRODUCTION <s> This paper provides a state-of-the-art review of cloud testing. Cloud computing, a new paradigm for developing and delivering computing applications and services, has gained considerable attention in recent years. Cloud computing can impact all software life cycle stages, including the area of software testing. TaaS (Testing as a Service) or cloud testing, which includes testing the cloud and testing using the cloud, is a fast developing area of research in software engineering. The paper addresses the following three areas: (1) general research in cloud testing, (2) specific cloud testing research, i.e., tools, methods, and systems under test, and (3) commercial TaaS tools and solutions.. <s> BIB001 </s> Novel Testing Tools for a Cloud Computing Environment- A Review <s> INTRODUCTION <s> Cloud computing is discipline which use everything as service that provide economic, convenient and on-demand services to requested end users and cloud service consumer. Building a cloud computing network is not an easy task. It requires lots of efforts and time. For this, there arises a concept called Cloud Engineering. Cloud engineering is a discipline that uses set of processes which help to engineer a cloud network. The structure and principles of cloud engineering plays an important role in the engineering of usable, economic and vibrant cloud. The cloud engineering use a cloud development life cycle (CDLC) which systematic developed cloud. Quality assurance and verification is an important and mandatory part of development cycle. Quality assurance ensures the quality and web service of cloud network. Cloud Verification is an irrespirable step in a development of an economic cloud computing solution of a network. Verify the performance, reliability, availability, elasticity and security of cloud network against the service level agreement with respect to specification, agreement and requirement. The work in this paper focuses on the Quality Assurance factors and parameters that influence quality. It also discuses quality of data used in a cloud. This paper proposes and explores the structure and its component used in verification process of a cloud. <s> BIB002
Cloud computing is a model for convenient and on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort [Prince Jain, 2012; Prince BIB002]. The aim of cloud computing is to provide scalable and inexpensive on-demand computing infrastructures with good quality-of-service levels [Prince ]. Cloud testing is a form of testing in which web applications use cloud computing environments and infrastructure to simulate real-world user traffic by using cloud technologies and solutions. Cloud testing aligns closely with the concepts of cloud and SaaS. It provides the ability to test the cloud by using cloud infrastructure, such as hardware and bandwidth, that more closely simulates real-world conditions and parameters. In simple words, testing a cloud refers to the verification and validation of applications, environments and infrastructure that are available on demand, by conforming these to the expectations of the cloud computing business model [Prince Jain, 2012; Prince BIB002]. Cloud testing is also described as Testing as a Service (TaaS). TaaS is considered a new business and service model in which a provider undertakes the software testing activities of a given application in a cloud infrastructure for customers. TaaS can be used for the validation of various products owned by organizations that deal with testing products and services and that make use of a cloud-based licensing model for their clients. To build an economic, efficient and scalable cloud computing network, a good testing tool is needed. The number of test cases for a large-scale cloud computing system can range from several hundred to many thousands, requiring significant computing resources, infrastructure and lengthy execution times [Vinaya Kumar Mylavarapu, 2011; Sergiy BIB001]. Software testing tools that are designed for testing conventional applications are of little use when applied to cloud computing. A traditional software testing approach to testing the cloud incurs high costs to simulate user activity from different locations. Moreover, testing firewalls and load balancers involves expenditure on hardware, software and their maintenance [Sergiy BIB001]. Traditional approaches reduce the execution time by excluding selected tests from the suite [Neha ]. To test cloud-based software systems, techniques and tools are necessary that address quality aspects of the cloud infrastructure such as massive scalability and dynamic configuration. The tools can also be built on the cloud platform to benefit from the virtualized platform and services, massive resources, and parallelized execution. The major tools are discussed and explored in the following sections. Issues and characteristics of traditional testing tools used for cloud computing are discussed in Section 2.
Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Introduction <s> In this review paper, we make a detailed study of a class of directed graphs, known as tournaments. The reason they are called tournaments is that they represent the structure of round robin tournaments, in which players or teams engage in a game that cannot end in a tie and in which every player plays each other exactly once. Although tournaments are quite restricted structurally, they are realized by a great many empirical phenomena in addition to round robin competitions. For example, it is known that many species of birds and mammals develop dominance relations so that for every pair of individuals, one dominates the other. Thus, the digraph of the "pecking structure" of a flock of hens is asymmetric and complete, and hence a tournament. Still another realization of tournaments arises in the method of scaling, known as "paired comparisons." Suppose, for example, that one wants to know the structure of a person's preferences among a collection of competing brands of a product. He can be asked to indicate for each pair of brands which one he prefers. If he is not allowed to indicate indifference, the structure of his stated preferences can be represented by a tournament. Tournaments appear similarly in the theory of committees and elections. Suppose that a committee is considering four alternative policies. It has been argued that the best decision will be reached by a series of votes in which each policy is paired against each other. The outcome of these votes can be represented by a digraph whose points are policies and whose lines indicate that one policy defeated the other. Such a digraph is clearly a tournament. After giving some essential definitions, we develop properties that all tournaments display. We then turn our attention to transitive tournaments, namely those that are complete orders. It is well known that not all preference structures are transitive. There is considerable interest, therefore, in knowing how transitive any given tournament is. Such an index is presented toward the end of the second section. In the final section, we consider some properties of strongly connected tournaments. <s> BIB001 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Introduction <s> The main subjects of this survey paper are Hamitonian cycles, cycles of prescirbed lengths, cycles in tournaments, and partitions, packings, and coverings by cycles. Several unsolved problems and a bibiligraphy are included. <s> BIB002 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Introduction <s> Abstract We describe a polynomial algorithm, which either finds a Hamiltonian path with prescribed initial and terminal vertices in a tournament (in fact, in any semicomplete digraph), or decides that no such path exists. <s> BIB003 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Introduction <s> This paper presents polynomially bounded algorithms for finding a cycle through any two prescribed arcs in a semicomplete digraph and for finding a cycle through any two prescribed vertices in a complete k-partite oriented graph. It is also shown that the problem of finding a maximum transitive subtournament of a tournament and the problem of finding a cycle through a prescribed arc set in a tournament are both NP-complete. 
<s> BIB004 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Introduction <s> A digraph obtained by replacing each edge of a complete m-partite graph with an arc or a pair of mutually opposite arcs with the same end vertices is called a complete m-partite digraph. An $O ( n^3 )$ algorithm for finding a longest path in a complete m-partite $( m \geq 2 )$ digraph with n vertices is described in this paper. The algorithm requires time $O( n^{2.5} )$ in case of testing only the existence of a Hamiltonian path and finding it if one exists. It is simpler than the algorithm of Manoussakis and Tuza [SIAM J. Discrete Math., 3 (1990), pp. 537–543], which works only for $m = 2$. The algorithm implies a simple characterization of complete m-partite digraphs having Hamiltonian paths that was obtained for the first time in Gutin [Kibernetica (Kiev), 4 (1985), pp. 124–125] for $m = 2$ and in Gutin [Kibernetica (Kiev), 1(1988), pp. 107–108] for $ m \geq 2 $. <s> BIB005 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Introduction <s> Abstract A directed graph is called ( m , k )-transitive if for every directed path x 0 x 1 … x m there is a directed path y 0 y 1 … y k such that x 0 = y 0 , x m = y k , and { y i |0⩽ i ⩽ k } ⊂{ x i |0⩽ i ⩽ m }. We describe the structure of those ( m , 1)-transitive and (3,2)-transitive directed graphs in which each pair of vertices is adjacent by an arc in at least one direction, and present an algorithm with running time O( n 2 ) that tests ( m, k )-transitivity in such graphs on n vertices for every m and k =1, and for m =3 and k =2. <s> BIB006 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Introduction <s> Abstract A digraph obtained by replacing each edge of a complete multipartite graph by an arc or a pair of mutually opposite arcs with the same end vertices is called a complete multipartite graph. Such a digraph D is called ordinary if for any pair X , Y of its partite sets the set of arcs with both end vertices in X ∪ Y coincides with X × Y = {( x , y ): xϵX , yϵY } or Y × X or X × Y ∪ Y × X . We characterize all the pancyclic and vertex pancyclic ordinary complete multipartite graphs. Our charcterizations admit polynomial time algorithms. <s> BIB007
A digraph obtained by replacing each edge of a (simple) graph G by an arc (by an arc or a pair of mutually opposite arcs) with the same end vertices is called an orientation (biorientation, respectively) of G. Therefore, orientations of graphs have no opposite arcs, while biorientations may have them. The investigation of paths and cycles in tournaments, orientations of complete graphs, was initiated by Redei's Theorem [63] derived in 1934: each tournament contains a Hamiltonian path. In 1959, P. Camion obtained necessary and sufficient conditions for the existence of a Hamiltonian cycle in a tournament. He proved that every strongly connected tournament has a Hamiltonian cycle. There are several survey articles BIB002 BIB001 (the second one contains results on general digraphs too) and a book by J. Moon where the properties of tournaments are considered. J. Moon and J. A. Bondy were the first to consider cycles in the entire class of multipartite tournaments (orientations of complete multipartite graphs). Since the 1980s, mathematicians have studied cycles and paths in bipartite tournaments extensively. The first results were described in the survey by L. W. Beineke . In this period a number of results on the cycle and path structure of m-partite tournaments for m ≥ 3 were obtained. A survey describing these results as well as recent results on cycles and paths in bipartite tournaments is absent and seems to be needed. The aim of the present article is to fill in this gap and also to describe some theorems and algorithms on paths and cycles in tournaments which have been obtained recently. Note that some of the results given in the paper are formulated not for orientations of complete multipartite graphs (usually called multipartite tournaments) but for biorientations of them (called semicomplete multipartite digraphs). In particular, we give some theorems for semicomplete digraphs (biorientations of complete graphs) instead of the more restricted class of tournaments. The motivation for considering semicomplete multipartite digraphs rather than multipartite tournaments is the following. From a theoretical point of view there is no good reason to restrict investigation to digraphs having no opposite arcs when more general results may be available. Digraphs with opposite arcs are sometimes used in order to obtain results for digraphs without opposite arcs (see ). Moreover, the total exclusion of opposite arcs from consideration does not allow an adequate study of some practical digraph models (models in social choice theory, interconnection networks, etc. BIB006 ). That is why there are numerous papers where properties of semicomplete digraphs and semicomplete bipartite and m-partite (m ≥ 3) digraphs have been investigated (see, for example, BIB003 BIB004 BIB005 BIB007 BIB006 ). We hope that this survey will be successful in stimulating further research on the subject.
Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> Abstract We characterize weakly Hamiltonian-connected tournaments and weakly panconnected tournaments completely and we apply these results to cycles and bypasses in tournaments with given irregularity, in particular, in regular and almost regular tournaments. We give a sufficient condition in terms of local and global connectivity for a Hamiltonian path with prescribed initial and terminal vertex. From this result we deduce that every 4-connected tournament is strongly Hamiltonian-connected and that every edge of a 3-connected tournament is contained in a Hamiltonian cycle of the tournament and we describe infinite families of tournaments demonstrating that these results are best possible. <s> BIB001 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> This clearly written , mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NPcomplete problems, more. All chapters are supplemented by thoughtprovoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering. Mathematicians wishing a self-contained introduction need look no further.—American Mathematical Monthly. 1982 ed. <s> BIB002 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> Abstract Let G be a directed graph whose edges are coloured with two colours. Call a set S of vertices of G independent if no two vertices of S are connected by a monochromatic directed path. We prove that if G contains no monochromatic infinite outward path, then there is an independent set S of vertices of G such that, for every vertex x not in S , there is a monochromatic directed path from x to a vertex of S . In the event that G is infinite, the proof uses Zorn's lemma. The last part of the paper is concerned with the case when G is a tournament. <s> BIB003 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> A graph is constructed to provide a negative answer to the following question of Bondy: Does every diconnected orientation of a complete k-partite (k ≥ 5) graph with each part of size at least 2 yield a directed (k + 1)-cycle? <s> BIB004 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> In this paper, the following results will be shown: 1 There is a Hamilton path and a cycle of length at least p —1 in any regular multipartite tournament of order p; (i) There is a longest path U O ,…, u t in any oriented graph such that d − (u O ) + d + (u t ) ≤ t. <s> BIB005 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> We give necessary and sufficient conditions in terms of connectivity and factors for the existence of hamiltonian cycles and hamiltonian paths and also give sufficient conditions in terms of connectivity for the existence of cycles through any two vertices in bipartite tournaments. 
<s> BIB006 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> Efficient algorithms for finding Hamiltonian cycles, Hamiltonian paths, and cycles through two given vertices in bipartite tournaments are given. <s> BIB007 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> Abstract We prove that every k -partite tournament with at most one vertex of in-degree zero contains a vertex from which each other vertex can be reached in at most four steps. <s> BIB008 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> Abstract An n -partite tournament, n ≥2, or multipartite tournament is an oriented graph obtained by orienting each edge of a complete n -partite graph. The cycle structure of multipartite tournaments is investigated and properties of vertices with maximum score are studied. <s> BIB009 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> This paper presents polynomially bounded algorithms for finding a cycle through any two prescribed arcs in a semicomplete digraph and for finding a cycle through any two prescribed vertices in a complete k-partite oriented graph. It is also shown that the problem of finding a maximum transitive subtournament of a tournament and the problem of finding a cycle through a prescribed arc set in a tournament are both NP-complete. <s> BIB010 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> A digraph obtained by replacing each edge of a complete m-partite graph with an arc or a pair of mutually opposite arcs with the same end vertices is called a complete m-partite digraph. An $O ( n^3 )$ algorithm for finding a longest path in a complete m-partite $( m \geq 2 )$ digraph with n vertices is described in this paper. The algorithm requires time $O( n^{2.5} )$ in case of testing only the existence of a Hamiltonian path and finding it if one exists. It is simpler than the algorithm of Manoussakis and Tuza [SIAM J. Discrete Math., 3 (1990), pp. 537–543], which works only for $m = 2$. The algorithm implies a simple characterization of complete m-partite digraphs having Hamiltonian paths that was obtained for the first time in Gutin [Kibernetica (Kiev), 4 (1985), pp. 124–125] for $m = 2$ and in Gutin [Kibernetica (Kiev), 1(1988), pp. 107–108] for $ m \geq 2 $. <s> BIB011 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> For a graph G, letG?(G?) denote an orientation ofG having maximum (minimum respectively) finite diameter. We show that the length of the longest path in any 2-edge connected (undirected) graph G is precisely diam(G?). LetK(m l ,m 2,...,m n) be the completen-partite graph with parts of cardinalitiesm 1 m2, ?,m n . We prove that ifm 1 = m2 = ? =m n = m,n ? 3, then diam(K?(m1,m2,...,mn)) = 2, unless m=1 andn = 4. <s> BIB012 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Paths in SMDs <s> An n-partite tournament is an orientation of a complete n-partite graph. We show that if D is a strongly connected n-partite (n ≥ 3) tournament, then every partite set of D has at least one vertex, which lies on an m-cycle for all m in {3, 4,..., n}. This result extends those of Bondy (J. London Math. 
Soc.14 (1976), 277-282) and Gutin (J. Combin. Theory Ser. B58 (1993), 319-321). <s> BIB013
The following result is proved in BIB011 . Theorem 3.1 Let D be a SMD. Then 1) for any almost 1-diregular subgraph F of D there is a path P of D satisfying V(P) = V(F) (if F is a maximum almost 1-diregular subgraph, each such path is a longest path of D); 2) there exists an O(n^3) algorithm for finding a longest path in D. The first half of Theorem 3.1 follows from the following: Lemma 3.2 Let D be a SMD and P, C a path and a cycle having no common vertices; then the subgraph of D induced by V(P) ∪ V(C) contains a Hamiltonian path. In order to describe the algorithm mentioned in Theorem 3.1 we first consider a construction by N. Alon (cf. BIB011 ) which allows one to efficiently find a 1-diregular subgraph of maximum order in a given digraph D. Let B = B(D) be a bipartite weighted graph such that (X, X') is the partition of B, where X = V(D), X' = {x' : x ∈ X}; xy' ∈ E(B) if and only if either (x, y) ∈ A(D) or x = y. The weight of an edge xy' of B equals 1 if x = y and equals 2 otherwise. It is easy to see that solving the assignment problem for B (in time O(n^3), cf. BIB002 ) and then removing all the edges with weight 2 from the solution, we obtain a set of edges of B corresponding to some 1-diregular subgraph F of D of maximum order. For a cycle C and a vertex x on it, denote by C_x the path obtained from C by deleting the arc ending at x. Now we are ready to describe the algorithm. Step 1. Construct the digraph D with ...; let C_0, C_1, ..., C_t be the cycles of F, and suppose x ∈ V(C_0) (it is easy to see that x ∈ F). Find P = C_0 − x, and put ... . Note that F is an almost 1-diregular subgraph of D of maximum order. We shall construct a path on all the vertices of F; this will clearly be a longest path. Step 2. If t = 0, then H := P, and we have finished. Otherwise put C := C_t, t := t − 1. Let P = (x_1, x_2, ..., x_m), C = (y_1, y_2, ..., y_k, y_1). Step 3. ..., where z^+ is the vertex following z in C, and go back to Step 2. Analogously, if there exists y ∈ Γ^+(x_m) ∩ V(C), put P := (P, C_y) and go back to Step 2. Step 4. For i = 1, 2, ..., m − 1; j = 1, 2, ..., k: if (y_j, x_{i+1}), (x_i, y_{j+1}) ∈ A(D), then let P be the path containing the fragment of P from x_1 to x_i, the path C_{y_{j+1}}, and the fragment of P from x_{i+1} to x_m. Go to Step 2. If none of Steps 2, 3, 4 can be applied, we go to Step 5 below. Step 5. ..., then let P be the path containing the fragment of P from x_1 to x_{i−1}, the vertices y_{j+1}, x_i, the fragment of C from y_{j+2} to y_j, and the fragment of P from x_{i+1} to x_m in the given order (the direct proof of Lemma 3.2 BIB011 consists of showing the existence of the above mentioned i, j = j(i) as well as the arcs (x_{i−1}, y_{j+1}), (x_i, y_{j+2})). Go to Step 2. Lemma 3.2 can also be proved as a rather simple consequence of a sufficient condition for a SMD to be Hamiltonian, shown in (see Theorem 4.8 below). This proof of Lemma 3.2 provides a more complicated algorithm than Algorithm 3.3. Step 1 of Algorithm 3.3 can be executed in time O(n^3). All the other steps can be performed in time O(n^2). Using the maximum matching algorithm for bipartite graphs , one can test whether a digraph D contains a 1-diregular spanning subgraph F and find some F in case one exists in time O(n^{2.5}/√(log n)). This implies Corollary 3.4, which was derived in as a generalization of the same theorem obtained for semicomplete bipartite digraphs in . Using a different approach, R. Häggkvist and Y.
Manoussakis gave in BIB006 an analogous characterization of bipartite tournaments having a Hamiltonian path. Y. Manoussakis and Z. Tuza constructed in BIB007 an O(n^{2.5}/√(log n)) algorithm for finding a Hamiltonian path in a bipartite tournament B (if B has a Hamiltonian path). Corollary 3.4 implies that any almost diregular or diregular multipartite tournament has a Hamiltonian path. This last result was also proved in BIB005 as a corollary of Theorem 4.13 (see below). Recently J. Bang-Jensen proved that Corollary 3.4 is also valid for arc-local tournament digraphs. The problem of deciding whether a tournament with two given vertices x and y contains a Hamiltonian path with endvertices x, y (the order not specified) was solved by C. Thomassen BIB001 . It follows from his characterization that the existence of such a path for specified vertices can be decided in polynomial time. An analogous characterization of all bipartite tournaments that have a Hamiltonian path between two prescribed vertices x, y was derived by J. Bang-Jensen and Y. Manoussakis in . The only difference between these two characterizations is in Condition 4: in Bang-Jensen's and Manoussakis' theorem the set of forbidden digraphs is absolutely different from that of Theorem 3.5 and, moreover, infinite (see ). Both characterizations imply polynomial algorithms to decide the existence of a Hamiltonian path connecting two given vertices and find one (if it exists). In BIB001 C. Thomassen considered not only the problem of deciding if for a pair x, y of vertices there is a Hamiltonian path either from x to y or from y to x, but also the stronger problem of deciding if there exists a Hamiltonian (x, y)-path. He proved that for every pair x, y of vertices of a 4-strongly connected tournament there is a Hamiltonian path starting at x and ending at y. In , the following conjecture was formulated. Conjecture 3.6 Let D be a 4-strongly connected ordinary MT (or bipartite tournament). The digraph D has a Hamiltonian path from x to y for any pair of vertices x, y of D if and only if D contains an (x, y)-path P such that D − P has a factor. The radius and diameter are important invariants of a digraph. H. Landau observed that the radius of any tournament is at most two and each vertex of maximum outdegree in it is a center. Obviously, any MT containing at least two vertices of indegree zero has an infinite radius. However, in case there are no such two vertices, the radius can be bounded, as shown in the following statement, proved in and, independently, in BIB008 . Theorem 3.7 Any MT with at most one vertex of indegree zero has radius r ≤ 4. B. Sands, N. Sauer and R. Woodrow BIB003 studied monochromatic paths in arc-coloured digraphs. In particular, they proved that every tournament whose arcs are coloured with two colours contains a vertex v such that for every other vertex w there exists a monochromatic (v, w)-path. They also showed the following: Theorem 3.8 Let T be a tournament whose arcs are coloured with three colours, and whose vertices can be partitioned into disjoint blocks such that (i) two vertices in different blocks are always connected by a red arc; (ii) two vertices in the same block are always connected by a blue or a green arc. Then there is a vertex v of T such that for every other vertex x of T there is a monochromatic path from v to x. It is easy to see that the last theorem follows from Theorem 3.7 and the first mentioned result of B. Sands, N. Sauer and R. Woodrow. It is easy to check that Theorem 3.7 holds for the entire class of SMD. V.
Petrovic and C. Thomassen BIB008 pointed out that Theorem 3.7 can be extended to a larger class of oriented graphs (at the cost of modifying the constant 4). Theorem 3.9 Let G be a graph whose complement is a disjoint union of complete graphs, cycles and paths. Then every orientation of G with at most one vertex of indegree zero has radius at most 6. Unlike tournaments, a vertex of maximum outdegree in an MT is not necessarily a center, as proved in BIB009 : Theorem 3.10 Let T be a strongly connected 3-partite tournament of order n ≥ 8. If v is a vertex of maximum outdegree in T, then ecc(v) is at most [n/2] and this bound is best possible. In the case of bipartite tournaments, it is possible to obtain more detailed results. In , characterizations of vertices with eccentricity 1, 2, 3 or 4 were derived. Using these characterizations, all bipartite tournaments with radius 1, 2, 3 or 4 were characterized. It is easy to see BIB012 that if a graph G has an orientation with a finite diameter (i.e., if G has no bridges), then the maximum diameter of such an orientation is equal to the length of the longest path in G. The problem of finding the minimum possible diameter of such an orientation is significantly more complicated. Denote by f(m_1, m_2, ..., m_k) the minimum possible diameter of a k-partite tournament with partite sets of sizes m_1, m_2, ..., m_k. L. Soltes obtained the following result. A shorter proof of this result using the well-known theorem of Sperner (cf. [1] ) is given in . In BIB012 , the following result dealing with k ≥ 3 was proved. 4 Cycles in semicomplete multipartite digraphs. J. A. Bondy extended the above mentioned Moser's Theorem on k-partite (k ≥ 3) tournaments in the following form (this result was obtained independently in as well). In connection with the last statement he asked if the inequality m > k may be replaced by the equality m = k + 1. A negative answer to this question was obtained in (for details see ). The same counter-example (as in ) was found independently by R. Balakrishnan and P. Paulraja BIB004 . Consider the k-partite (k ≥ 3) tournament D_k with the partite sets {x ...}. It is easy to see that D_k (k ≥ 3) has no (k + 1)-cycle. In it was also proved that the inequality m > k (in Theorem 4.2 ) may be replaced by the inequality k + 1 ≤ m ≤ k + 2. In connection with Theorem 4.1 J. A. Bondy raised the question of whether some form of the corresponding generalization of Moon's Theorem is also true. He further gave an example showing that the last generalization is not true in general. In , and BIB013 the following three restricted generalizations of Moon's theorem were obtained. Note that the last theorem implies the previous one. W. Goddard, G. Kubicki, O. Oellermann and S. Tian BIB009 proved that every vertex of a strongly connected k-partite tournament T (k ≥ 3) belongs to a 3-cycle or a 4-cycle of T. Moreover, they obtained the following: In BIB007 and BIB010 , the problem of the existence of a cycle containing prescribed vertices in MTs is studied. In BIB010 , the following result is shown. J. Bang-Jensen, G. Gutin and J. Huang study the Hamiltonian cycle problem for SMDs. To describe the main result of , we need the following definitions. Let C and Z be two disjoint cycles in a digraph D. A vertex x ∈ C is called out-singular (in-singular) with respect to ... such that C_i has singular vertices with respect to C_j and they are all out-singular, and C_j contains singular vertices with respect to C_i and they are all in-singular.
The main result of is the following sufficient condition for a SMD to be Hamiltonian. The following lemma is used in the proof of Theorem 4.8 in . It is useful in other proofs as well (see Theorems 5.5, 5.6). Lemma 4.9 Let F = C_1 ∪ C_2 ∪ ... ∪ C_t be a 1-diregular subgraph of maximum cardinality of a strongly connected SMD D, where C_i is a cycle in D (1 ≤ i ≤ t). Let also F satisfy the following condition: for every pair 1 ≤ i < j ≤ t, all arcs between C_i and C_j are oriented either from C_i to C_j or from C_j to C_i. Then D has a (longest) cycle of length |V(F)|, and one can find such a cycle in time O(n^2) for a given subgraph F. In view of Theorem 4.8 the following statement seems to be true.
Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Conjecture 4.10 <s> We obtain several sufficient conditions on the degrees of an oriented graph for the existence of long paths and cycles. As corollaries of our results we deduce that a regular tournament contains an edge-disjoint Hamilton cycle and path, and that a regular bipartite tournament is hamiltonian. <s> BIB001 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Conjecture 4.10 <s> In this paper, the following results will be shown: 1 There is a Hamilton path and a cycle of length at least p —1 in any regular multipartite tournament of order p; (i) There is a longest path U O ,…, u t in any oriented graph such that d − (u O ) + d + (u t ) ≤ t. <s> BIB002
There is a polynomial algorithm for the Hamiltonian cycle problem in SMDs. Theorem 4.8 provides a short proof of Lemma 3.2 and hence of Theorem 3.1, Theorems 5.3, 5.7 as well as of the following result originally obtained in . One of the interesting classes of MT is the set of diregular k-partite tournaments (k ≥ 2). Theorem 5.3 implies that every diregular bipartite tournament is Hamiltonian. This result was first obtained in BIB001 , . Moreover, C.-Q. Zhang BIB002 proved the following: Theorem 4.13 There is a cycle of length at least n − 1 in any diregular MT of order n. C.-Q. Zhang BIB002 formulated the following conjecture.
Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles in SBDs and ordinary SMDs <s> The value of depth-first search or “backtracking” as a technique for solving problems is illustrated by two examples. An improved version of an algorithm for finding the strongly connected components of a directed graph and at algorithm for finding the biconnected components of an undirect graph are presented. The space and time requirements of both algorithms are bounded by $k_1 V + k_2 E + k_3 $ for some constants $k_1 ,k_2 $, and $k_3 $, where V is the number of vertices and E is the number of edges of the graph being examined. <s> BIB001 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles in SBDs and ordinary SMDs <s> Abstract A tournament is a directed graph which contains exactly one arc joining each pair of vertices. We show that the number of tournaments on n ⩾ 4 vertices which contain exactly one Hamiltonian circuit equals F 2 n −6 , the (2 n − 6)-th Fibonacci number. <s> BIB002 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles in SBDs and ordinary SMDs <s> We obtain several sufficient conditions on the degrees of an oriented graph for the existence of long paths and cycles. As corollaries of our results we deduce that a regular tournament contains an edge-disjoint Hamilton cycle and path, and that a regular bipartite tournament is hamiltonian. <s> BIB003 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles in SBDs and ordinary SMDs <s> We give necessary and sufficient conditions in terms of connectivity and factors for the existence of hamiltonian cycles and hamiltonian paths and also give sufficient conditions in terms of connectivity for the existence of cycles through any two vertices in bipartite tournaments. <s> BIB004 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles in SBDs and ordinary SMDs <s> Efficient algorithms for finding Hamiltonian cycles, Hamiltonian paths, and cycles through two given vertices in bipartite tournaments are given. <s> BIB005 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles in SBDs and ordinary SMDs <s> Abstract We give a simple algorithm to transform a Hamiltonian path in a Hamiltonian cycle, if one exists, in a tournament T of order n. Our algorithm is linear in the number of arcs, i.e., of complexity O(m)=O(n2) and when combined with the O(n log n) algorithm of [2] to find a Hamiltonian path in T, it yields an O(n2) algorithm for searching a Hamiltonian cycle in a tournament. Up to now, algorithms for searching Hamiltonian cycles in tournaments were of order O(n3) [3], or O(n2 log n) [5]. <s> BIB006 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles in SBDs and ordinary SMDs <s> Abstract A digraph D is said to satisfy the condition O( n ) if d T + ( u ) + d T − ( v ) ⩾ n whenever uv is not an arc of D . In this paper we prove the following results: If a p × q bipartite tournament T is strong and satisfies O( n ), then T contains a cycle of length at least min(2 n + 2, 2 p , 2 q , unless T is isomorphic to a specified family of graphs. 
As an immediate consequence of this result we conclude that each arc of a n × n bipartite tournament satisfying O( n ) is contained in cycles of lengths 4, 6, …, 2 n , except in a described case. <s> BIB007 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles in SBDs and ordinary SMDs <s> We give anO(log4n)-timeO(n2)-processor CRCW PRAM algorithm to find a hamiltonian cycle in a strong semicomplete bipartite digraph,B, provided that a factor ofB (i.e., a collection of vertex disjoint cycles covering the vertex set ofB) is computed in a preprocessing step. The factor is found (if it exists) using a bipartite matching algorithm, hence placing the whole algorithm in the class Random-NC. <s> BIB008
In the survey by L. W. Beineke , the following sufficient conditions for a bipartite tournament to have a cycle of length at least 2r, due to B. Jackson BIB003 , are given: Theorem 5.1 Let T be a strongly connected BT with the property that for all vertices v and w, either (v, w) ∈ A(T) or d ... . Then T has a cycle of length at least 2r. This supplies a sufficient condition for a BT to be Hamiltonian by taking r = n/2. J. Z. Wang BIB007 showed the following result improving Theorem 5.1 in the Hamiltonian case. The following necessary and sufficient conditions for the existence of a Hamiltonian cycle in a semicomplete bipartite digraph have appeared in BIB004 BIB005 . Lemma 5.4 can be proved in a rather simple way using Theorem 4.8 . An algorithm for checking whether a SBD D contains a Hamiltonian cycle and finding one if D is Hamiltonian consists of the following steps. 1) Check whether D is strongly connected, applying any O(n^2)-time algorithm (e.g. the one in BIB001 ). If D is not strongly connected, then D is not Hamiltonian. 2) Find in D a maximum 1-diregular subgraph F (apply the construction described before Algorithm 3.3). If F is not a 1-difactor of D, then D is not Hamiltonian. 3) Construct a semicomplete digraph T = T(F) as follows. The vertices of T are the cycles of F. A cycle C_1 of F dominates another cycle C_2 in T if and only if there is an arc in D from C_1 to C_2. Find a Hamiltonian cycle H in T(F) using the algorithm from BIB006 . 4) Transform H into a Hamiltonian cycle of D using Lemma 5.4. J. Bang-Jensen proved that Theorem 5.3 remains true for arc-local tournament digraphs. Recently, J. Bang-Jensen, M. El Haddad, Y. Manoussakis and T. Przytycka BIB008 obtained a random parallel algorithm for checking whether a SBD D has a Hamiltonian cycle and finding one (if there is one) in time O(log^4 n) using a CRCW PRAM with O(n^2) processors (see, e.g., for the definition of a CRCW PRAM). It follows from Theorem 4.11 that the first part of Theorem 5.3 cannot be extended to the entire set of semicomplete t-partite digraphs (t ≥ 3). M. Manoussakis and Y. Manoussakis determined the number of non-isomorphic BTs with 2m vertices containing a unique Hamiltonian cycle. Let h_m be the number of such BTs. It is shown in that h_2 = h_3 = 1 and h_m = 4h_{m−1} + h_{m−2} for m ≥ 4. R. J. Douglas gave a structural characterization of tournaments having a unique Hamiltonian cycle. This result implies a formula for the number s_n of non-isomorphic tournaments of order n with a unique Hamiltonian cycle. This characterization as well as the formula are rather complicated. M. R. Garey BIB002 later showed that s_n could be expressed as a Fibonacci number (s_n = f_{2n−6}); his derivation was based on Douglas's characterization. J. W. Moon obtained a direct proof of Garey's formula that is essentially independent of Douglas's characterization. We make the following trivial but useful observation. The length of a longest cycle in any digraph is equal to the maximum length of a longest cycle in its strongly connected components. Hence, when solving the longest cycle problem, one may consider only strongly connected digraphs. In , the following result, which gives a complete solution of the longest cycle problem in the case of SBDs, was obtained. It is easy to see that Theorem 5.5 follows from Lemmas 4.9, 5.4 and the construction described before Algorithm 3.3. The algorithm mentioned in Theorem 5.5 is just a modification of that described after Lemma 5.4.
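As an illustration of the first three steps of the algorithm described above, the following Python sketch (using the networkx library, an assumption of convenience) checks strong connectivity, tests for a 1-difactor via an unweighted bipartite matching (a simplification of the weighted construction described before Algorithm 3.3), and builds the condensed semicomplete digraph T(F); the cited algorithms for completing steps 3 and 4 are only indicated in comments.

import networkx as nx
from networkx.algorithms import bipartite

def sbd_hamiltonicity_sketch(D):
    # D: nx.DiGraph assumed to be a semicomplete bipartite digraph (SBD) with hashable node labels.
    # Step 1: a digraph that is not strongly connected cannot be Hamiltonian.
    if not nx.is_strongly_connected(D):
        return None
    # Step 2: look for a 1-difactor (spanning collection of vertex-disjoint cycles).
    # A perfect matching of the bipartite graph that joins u to the copy (v, "in")
    # whenever (u, v) is an arc corresponds exactly to such a cycle factor.
    left = list(D.nodes)
    right = [(v, "in") for v in D.nodes]
    B = nx.Graph()
    B.add_nodes_from(left, bipartite=0)
    B.add_nodes_from(right, bipartite=1)
    B.add_edges_from((u, (v, "in")) for u, v in D.edges)
    matching = bipartite.hopcroft_karp_matching(B, top_nodes=left)
    succ = {u: matching[u][0] for u in left if u in matching}
    if len(succ) < len(left):
        return None                    # no 1-difactor, hence D is not Hamiltonian
    # Recover the cycles C_1, ..., C_t of the factor F by following successors.
    cycles, seen = [], set()
    for u in left:
        if u not in seen:
            cyc, v = [], u
            while v not in seen:
                seen.add(v)
                cyc.append(v)
                v = succ[v]
            cycles.append(cyc)
    # Step 3: condense F into a semicomplete digraph T(F) on the cycles.
    T = nx.DiGraph()
    T.add_nodes_from(range(len(cycles)))
    for i, Ci in enumerate(cycles):
        for j, Cj in enumerate(cycles):
            if i != j and any(D.has_edge(x, y) for x in Ci for y in Cj):
                T.add_edge(i, j)
    # Steps 3-4 (not implemented here): find a Hamiltonian cycle in T(F) with the
    # algorithm of BIB006 and expand it into a Hamiltonian cycle of D via Lemma 5.4.
    return cycles, T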
Theorem 5.3, as well as Theorem 5.5, can be proved for the class of ordinary semicomplete t-partite digraphs (t ≥ 3) with a little alteration. Indeed, the following two claims hold. Let T = T(x_1, ..., x_n) be a semicomplete digraph with V(T) = {x_1, ..., x_n}, and let k_i be non-negative integers (1 ≤ i ≤ n). A closed (k_1, k_2, ..., k_n)-walk of T is a closed directed walk of T visiting each vertex x_j no more than k_j times (the first and the last vertices of a closed walk coincide and are considered as a single vertex).
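To make the definition concrete, the small sketch below (our own illustrative Python; the toy digraph and names are ours) checks whether a given closed directed walk is a closed (k_1, ..., k_n)-walk, counting the repeated endpoint once, as in the definition.

```python
from collections import Counter

def is_closed_k_walk(walk, arcs, k):
    """Check that `walk` (first and last entries coincide) is a closed directed
    walk of the digraph with arc set `arcs` visiting each vertex j at most k[j]
    times; the repeated endpoint is counted once."""
    if len(walk) < 2 or walk[0] != walk[-1]:
        return False
    if any((walk[i], walk[i + 1]) not in arcs for i in range(len(walk) - 1)):
        return False
    visits = Counter(walk[:-1])                 # drop the repeated final vertex
    return all(visits[v] <= k[v] for v in visits)

# Example: in the 2-cycle {(0, 1), (1, 0)}, the walk 0 1 0 1 0 is a closed
# (2, 2)-walk but not a closed (1, 2)-walk.
arcs = {(0, 1), (1, 0)}
print(is_closed_k_walk([0, 1, 0, 1, 0], arcs, {0: 2, 1: 2}))  # True
print(is_closed_k_walk([0, 1, 0, 1, 0], arcs, {0: 1, 1: 2}))  # False
```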
Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 5.6 implies <s> In this review paper, we make a detailed study of a class of directed graphs, known as tournaments. The reason they are called tournaments is that they represent the structure of round robin tournaments, in which players or teams engage in a game that cannot end in a tie and in which every player plays each other exactly once. Although tournaments are quite restricted structurally, they are realized by a great many empirical phenomena in addition to round robin competitions. For example, it is known that many species of birds and mammals develop dominance relations so that for every pair of individuals, one dominates the other. Thus, the digraph of the "pecking structure" of a flock of hens is asymmetric and complete, and hence a tournament. Still another realization of tournaments arises in the method of scaling, known as "paired comparisons." Suppose, for example, that one wants to know the structure of a person's preferences among a collection of competing brands of a product. He can be asked to indicate for each pair of brands which one he prefers. If he is not allowed to indicate indifference, the structure of his stated preferences can be represented by a tournament. Tournaments appear similarly in the theory of committees and elections. Suppose that a committee is considering four alternative policies. It has been argued that the best decision will be reached by a series of votes in which each policy is paired against each other. The outcome of these votes can be represented by a digraph whose points are policies and whose lines indicate that one policy defeated the other. Such a digraph is clearly a tournament. After giving some essential definitions, we develop properties that all tournaments display. We then turn our attention to transitive tournaments, namely those that are complete orders. It is well known that not all preference structures are transitive. There is considerable interest, therefore, in knowing how transitive any given tournament is. Such an index is presented toward the end of the second section. In the final section, we consider some properties of strongly connected tournaments. <s> BIB001 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 5.6 implies <s> We obtain several sufficient conditions on the degrees of an oriented graph for the existence of long paths and cycles. As corollaries of our results we deduce that a regular tournament contains an edge-disjoint Hamilton cycle and path, and that a regular bipartite tournament is hamiltonian. <s> BIB002 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 5.6 implies <s> Abstract It is shown that if an oriented complete bipartite graph has a directed cycle of length 2 n , then it has directed cycles of all smaller even lengths unless n is even and the 2 n -cycle induces one special digraph. 
<s> BIB003 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 5.6 implies <s> An activated gas reaction apparatus which comprises an activation chamber; feeding means for conducting feed gas into the activation chamber; microwave power-generating means for activating raw gas received in the activation chamber; and a reaction chamber provided apart from the activation chamber for reaction of activated gas, the activated gas reaction apparatus so constructed as to satisfy the following formula: <s> BIB004 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 5.6 implies <s> We give necessary and sufficient conditions in terms of connectivity and factors for the existence of hamiltonian cycles and hamiltonian paths and also give sufficient conditions in terms of connectivity for the existence of cycles through any two vertices in bipartite tournaments. <s> BIB005 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 5.6 implies <s> Abstract We give several sufficient conditions on the half-degrees of a bipartite digraph for the existence of cycles and paths of various lengths. Some analogous results are obtained for bipartite oriented graphs and for bipartite tournaments. <s> BIB006 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 5.6 implies <s> Efficient algorithms for finding Hamiltonian cycles, Hamiltonian paths, and cycles through two given vertices in bipartite tournaments are given. <s> BIB007 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 5.6 implies <s> Abstract A digraph obtained by replacing each edge of a complete multipartite graph by an arc or a pair of mutually opposite arcs with the same end vertices is called a complete multipartite graph. Such a digraph D is called ordinary if for any pair X , Y of its partite sets the set of arcs with both end vertices in X ∪ Y coincides with X × Y = {( x , y ): xϵX , yϵY } or Y × X or X × Y ∪ Y × X . We characterize all the pancyclic and vertex pancyclic ordinary complete multipartite graphs. Our charcterizations admit polynomial time algorithms. <s> BIB008
Corollary 5.8 The maximum length of a closed (k_1, k_2, ..., k_n)-walk of a strongly connected semicomplete digraph T(x_1, x_2, ..., x_n) is equal to the number of vertices of a maximum 1-diregular subgraph of the associated digraph D_T(x_1, ..., x_n). L. Moser BIB001 and J. Moon strengthened the theorem of Camion mentioned above and proved, respectively, that a strongly connected tournament is pancyclic and, even, vertex pancyclic. The following characterizations of even pancyclic and vertex even pancyclic bipartite tournaments were derived in BIB003 and BIB004 , respectively. Note that the last characterization was obtained independently in BIB005 as well. Theorem 5.9 A bipartite tournament is even pancyclic as well as vertex even pancyclic if and only if it is Hamiltonian and is not isomorphic to the bipartite tournament B(r, r, r, r) (r = 2, 3, ...). Considering diregular bipartite tournaments, D. Amar and Y. Manoussakis BIB006 and, independently, J. Z. Wang showed the following: Theorem 5.10 An r-diregular BT is arc even pancyclic unless it is isomorphic to B(r, r, r, r). The analogous result for diregular tournaments was obtained by B. Alspach (every diregular tournament is arc pancyclic). Z. M. Song studied complementary cycles in BTs which are nearly diregular. He proved the following: Theorem 5.11 Let R be a BT with 2k + 1 vertices in each partite set (k ≥ 4). If every vertex of R has outdegree and indegree at least k, then for any vertex x in R, R contains a pair of disjoint cycles C, Q such that C includes x and the length of C is at most 6, unless R is isomorphic to B(k, ...). Observe that a characterization of even pancyclic (and vertex even pancyclic) semicomplete bipartite digraphs coincides with the above-mentioned one. Indeed, the result follows from the fact that any bipartite tournament obtained by the reorientation of an arc of B(r, r, r, r) is Hamiltonian, and so, vertex even pancyclic. Combining these results with the above described necessary and sufficient conditions for the existence of a Hamiltonian cycle in a semicomplete bipartite digraph (Theorem 5.3), we obtain a polynomial characterization of the above properties. A characterization of pancyclic (and vertex pancyclic) ordinary m-partite (m ≥ 3) tournaments was also established. In contrast to the characterization of even pancyclic semicomplete bipartite digraphs, the latter does not immediately imply a characterization of pancyclic (or vertex pancyclic) ordinary semicomplete m-partite digraphs. Indeed, there exist vertex pancyclic ordinary SMDs which contain no Hamiltonian ordinary multipartite tournaments as spanning subgraphs. Such examples are the semicomplete m-partite digraphs S_{m,r} with r vertices in each partite set but one and (m − 1)r vertices in the last one (r ≥ 1, m ≥ 3). A semicomplete m-partite digraph is called a complete m-partite digraph if it has the arcs (u, v), (v, u) for any pair u, v in distinct partite sets. S_{m,r} is vertex pancyclic by Theorem 5.12 (see below), and it has no Hamiltonian ordinary m-partite tournament as a spanning subgraph, since any Hamiltonian cycle of S_{m,r} must alternate between the largest partite set and the other partite sets and hence cannot be a subgraph of an ordinary multipartite tournament. An ordinary SMD D with partite sets V_1, ..., V_k is called a zigzag digraph if it has more than four vertices and its partite sets satisfy a certain structural condition. Observe that any cycle in such a digraph has the same number, say s, of vertices from V_1 and V_2 and at least s vertices from V_3 ∪ · · · ∪ V_k. Therefore, a zigzag digraph H has no prehamiltonian cycle, i.e.
a cycle containing all vertices of H but one. Observe also that an ordinary 4-partite tournament with more than four vertices is not a pancyclic digraph. Indeed, the single (up to isomorphism) strongly connected tournament with four vertices has no closed directed walk of length five. The following characterization of pancyclic and vertex pancyclic ordinary SMDs was obtained in BIB008 (Theorem 5.12). 1) An ordinary semicomplete k-partite (k ≥ 3) digraph is pancyclic if and only if i) ...; ii) it has a 1-diregular spanning subgraph; iii) it is neither a zigzag digraph nor a 4-partite tournament with at least five vertices. 2) A pancyclic ordinary semicomplete k-partite digraph D is vertex pancyclic if and only if either i) k > 3 or ii) k = 3 and D contains two 2-cycles Z_1, Z_2 such that Z_1 ∪ Z_2 has vertices in three partite sets. 3) There exists an O(n^{2.5}/√log n) algorithm for determining whether an ordinary semicomplete k-partite (k ≥ 3) digraph D is pancyclic (vertex pancyclic). The following result (Theorem 5.13), conjectured in BIB005 , was proved in [15]: 1) ...; 2) there exists an O(n^3) algorithm to find a cycle through any set of k vertices in a k-strongly connected bipartite tournament. An example considered in BIB005 shows that Theorem 5.13 is best possible in terms of connectivity (the bipartite tournament B there is not k-vertex cyclic). J. Bang-Jensen and Y. Manoussakis [15] raised the following conjecture. Conjecture 5.14 For every fixed k there exists a polynomial algorithm to decide the existence of a cycle through a given set of k vertices in a BT and to find one if it exists. Y. Manoussakis and Z. Tuza BIB007 have already proved this conjecture for k = 2. The situation with k-cyclic ordinary SMDs is better. J. Bang-Jensen, G. Gutin and J. Huang derived a complete characterization of k-cyclic ordinary SMDs. B. Jackson BIB002 suggests that Kelly's conjecture remains valid for diregular BTs, i.e., he raises the following
Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> Fundamental concepts Connectedness Path problems Trees Leaves and lobes The axiom of choice Matching theorems Directed graphs Acyclic graphs Partial order Binary relations and Galois correspondences Connecting paths Dominating sets, covering sets and independent sets Chromatic graphs Groups and graphs Bibliography List of concepts Index of names. <s> BIB001 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> Abstract Given a tournament with n vertices, we consider the number of comparisons needed, in the worst case, to find a permutation υ 1 υ 2 … υ n of the vertices, such that the results of the games υ 1 υ 2 , υ 2 υ 3 ,…, υ n −1 υ n match a prescribed pattern. If the pattern requires all arcs to go forwrd, i.e., υ 1 → υ 2 , υ 2 → υ 3 ,…, υ n −1 → υ n , and the tournament is transitive, then this is essentially the problem of sorting a linearly ordered set. It is well known that the number of comparisons required in this case is at least cn lg n , and we make the observation that O ( n lg n ) comparisons suffice to find such a path in any (not necessarily transitive) tournament. On the other hand, the pattern requiring the arcs to alternate backward-forward-backward, etc., admits an algorithm for which O ( n ) comparisons always suffice. Our main result is the somewhat surprising fact that for various other patterns the complexity (number of comparisons) of finding paths matching the pattern can be cn lg α n for any α between 0 and 1. Thus there is a veritable spectrum of complexities, depending on the prescribed pattern of the desired path. Similar problems on complexities of algorithms for finding Hamiltonian cycles in graphs and directed graphs have been considered by various authors, [2, pp. 142, 148, 149; 4]. <s> BIB002 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> An extensible interlocking structure suitable for tower cranes, scaffolding towers and the like is of multilateral cross-section and has two sets of main members which engage one another in end-to-end relation when the structure is extended. Tie members are pivoted to the main members, with the free ends of the tie members interlocking automatically with the main members during extension to provide diagonal bracing. On retraction the members may be stored on drums or in a rack. <s> BIB003 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> A tournament is a digraph T=(V,E) in which, for every pair of vertices, u & v, exactly one of (u,v), (v,u) is in E. Two classical theorems about tournaments are that every tournament has a Hamiltonian path, and every strongly connected tournament has a Hamiltonian cycle. Furthermore, it is known how to find these in polynomial time. In this paper we discuss the parallel complexity of these problems. Our main result is that constructing a Hamiltonian path in a general tournament and a Hamiltonian cycle in a strongly connected tournament are both in NC. In addition, we give an NC algorithm for finding a Hamiltonian path one fixed endpoint. In finding fast parallel algorithms, we also obtain new proofs for the theorems. 
<s> BIB004 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> A general method is presented for translating sorting by comparisons algorithms to algorithms that compute a Hamilton path in a tournament. The translation is based on the relation between minimal feedback sets and Hamilton paths in tournaments. It is proven that there is a one to one correspondence between the set of minimal feedback sets and the set of Hamilton paths. In the comparison model, all the tradeoffs for sorting between the number of processors and the number of rounds hold as well for computing Hamilton paths. For the CRCW model, with $O( n )$ processors, we show the following: (i) Two paths in a tournament can be merged in $O(\log \log n)$ time (Valiant’s algorithm [SIAM J. Comput., 4 (1975), pp. 348–355], (ii) a Hamilton path can be computed in $O(\log n)$ time (Cole’s algorithm). This improves a previous algorithm for computing a Hamilton path whose running time was $O(\log^2 n)$ using $O(n^2 )$ processors. <s> BIB005 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> Abstract We give a simple algorithm to transform a Hamiltonian path in a Hamiltonian cycle, if one exists, in a tournament T of order n. Our algorithm is linear in the number of arcs, i.e., of complexity O(m)=O(n2) and when combined with the O(n log n) algorithm of [2] to find a Hamiltonian path in T, it yields an O(n2) algorithm for searching a Hamiltonian cycle in a tournament. Up to now, algorithms for searching Hamiltonian cycles in tournaments were of order O(n3) [3], or O(n2 log n) [5]. <s> BIB006 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> Abstract We describe a polynomial algorithm, which either finds a Hamiltonian path with prescribed initial and terminal vertices in a tournament (in fact, in any semicomplete digraph), or decides that no such path exists. <s> BIB007 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> This paper presents polynomially bounded algorithms for finding a cycle through any two prescribed arcs in a semicomplete digraph and for finding a cycle through any two prescribed vertices in a complete k-partite oriented graph. It is also shown that the problem of finding a maximum transitive subtournament of a tournament and the problem of finding a cycle through a prescribed arc set in a tournament are both NP-complete. <s> BIB008 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Cycles and paths in semicomplete digraphs <s> We propose a parallel algorithm which reduces the problem of computing Hamiltonian cycles in tournaments to the problem of computing Hamiltonian paths. The running time of our algorithm is O(log n) using O(n2/log n) processors on a CRCW PRAM, and O(log n log log n) on an EREW PRAM using O(n2/log n log log n) processors. As a corollary, we obtain a new parallel algorithm for computing Hamiltonian cycles in tournaments. This algorithm can be implemented in time O(log n) using O(n2/log n) processors in the CRCW model, and in time O(log2n) with O(n2/log n log log n) processors in the EREW model. <s> BIB009
What is the complexity of the Hamiltonian path and cycle problems in tournaments? The inductive classical proof of Redei's theorem gives at once a simple O(n^2) algorithm for the first problem. Since sorting corresponds to finding a Hamiltonian path in a transitive tournament, we have an O(n log n)-time algorithm in this case. P. Hell and M. Rosenfeld BIB002 obtained an algorithm with the same complexity solving the Hamiltonian path problem for any tournament. The well known proof of Moon's theorem provides an O(n^3)-time algorithm for the Hamiltonian cycle problem. Y. Manoussakis BIB006 constructed an O(n^2)-time algorithm for this problem. A parallel algorithm A for a problem of size n is called an NC-algorithm if there are constants k, l such that A can be performed in time O(log^k n) on an O(n^l)-processor PRAM. We refer the reader to for a discussion of NC-algorithms. D. Soroker BIB004 studied the parallel complexity of the above mentioned problems. He proved the following: Theorem 6.1 There are NC-algorithms for the Hamiltonian path and Hamiltonian cycle problems in tournaments. Another NC-algorithm for the Hamiltonian path problem in tournaments has been obtained by J. Naor BIB003 . As to the Hamiltonian path problem for tournaments, the most effective parallel algorithm is due to A. Bar-Noy and J. Naor BIB005 . They constructed an algorithm that runs in time O(log n) on an O(n)-processor CRCW PRAM for a tournament containing n vertices. The most effective parallel algorithm for the Hamiltonian cycle problem for tournaments is due to E. Bampis, M. El Haddad, Y. Manoussakis and M. Santha BIB009 . They found a fast parallel procedure which transforms the Hamiltonian cycle problem into the Hamiltonian path one in the following sense: given a Hamiltonian path in a tournament as input, the procedure constructs a Hamiltonian cycle. The parallel running time of the procedure is O(log n) using O(n^2/log n) processors in the CRCW model. J. Bang-Jensen, Y. Manoussakis and C. Thomassen BIB007 obtained a polynomial algorithm solving the problem (which appears in BIB001 BIB004 ) of deciding the existence of a Hamiltonian path with prescribed initial and terminal vertices in a tournament. Obviously, the last problem is equivalent to the problem of the existence of a Hamiltonian cycle containing a given arc in a tournament. They raised the following: Conjecture 6.2 For each fixed k, there exists a polynomial algorithm for deciding if there exists a Hamiltonian cycle through k prescribed arcs in a tournament. The k-arc cyclic problem is the following: given k distinct arcs in a digraph D, decide whether D has a cycle through all the arcs. Bang-Jensen and Thomassen BIB008 considered this problem for semicomplete digraphs. They proved: Theorem 6.3 There exists a polynomial algorithm for deciding if two independent arcs lie on a common cycle in a semicomplete digraph. They also showed that if k is part of the input then the above problem is NP-complete.
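As an illustration of the simple insertion argument behind Redei's theorem, the sketch below (our own illustrative Python, not code from the cited papers) builds a Hamiltonian path in any tournament by inserting vertices one at a time; replacing the linear scan with binary search over the current path yields the O(n log n)-comparison variant mentioned above.

```python
def hamiltonian_path(n, beats):
    """Return a Hamiltonian path (list of vertices) in a tournament on 0..n-1.
    beats(u, v) is True iff the arc is directed from u to v."""
    path = [0]
    for v in range(1, n):
        # Insert v just before the first vertex it beats; the predecessor
        # (if any) necessarily beats v, so all arcs still point forward.
        for i, u in enumerate(path):
            if beats(v, u):
                path.insert(i, v)
                break
        else:
            path.append(v)
    return path

# Example: the transitive tournament in which u beats v whenever u < v.
print(hamiltonian_path(5, lambda u, v: u < v))  # [0, 1, 2, 3, 4]
```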
Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 6.4 <s> We consider the problem of finding two disjoint directed paths with prescribed ends in a semicomplete digraph. The problem is NP - complete for general digraphs as proved in [4]. We obtain best possible sufficient conditions in terms of connectivity for a semicomplete digraph to be 2-linked (i.e., it contains two disjoint paths with prescribed ends for any four given endvertices). We also consider the algorithmically equivalent problem of finding a cycle through two given disjoint edges in a semicomplete digraph. For this problem it is shown that if D is a 5–connected semicomplete digraph, then D has a cycle through any two disjoint edges, and this result is best possible in terms of the connectivity. In contrast to this we prove that if T is a 3–connected tournament, then T has a cycle through any two disjoint edges. This is best possible, too. Finally we give best possible sufficient conditions in terms of local connectivities for a tournament to contain a cycle through af given pair of disjoint edges. <s> BIB001 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> Theorem 6.4 <s> Abstract A directed graph is called ( m , k )-transitive if for every directed path x 0 x 1 … x m there is a directed path y 0 y 1 … y k such that x 0 = y 0 , x m = y k , and { y i |0⩽ i ⩽ k } ⊂{ x i |0⩽ i ⩽ m }. We describe the structure of those ( m , 1)-transitive and (3,2)-transitive directed graphs in which each pair of vertices is adjacent by an arc in at least one direction, and present an algorithm with running time O( n 2 ) that tests ( m, k )-transitivity in such graphs on n vertices for every m and k =1, and for m =3 and k =2. <s> BIB002
The k-arc cyclic problem is NP-complete even for semicomplete and semicomplete bipartite digraphs. Sufficient conditions for semicomplete digraphs and tournaments to be 2-arc cyclic are studied in BIB001 , where the following theorem is proved. Theorem 6.5 Every 5-strongly connected semicomplete digraph is 2-arc cyclic; every 3-connected tournament is 2-arc cyclic. In BIB001 it is noted that both results are best possible in terms of the required connectivity. A digraph D is said to be transitive if (x, y), (y, z) ∈ A(D) implies (x, z) ∈ A(D). This notion has been generalized by F. Harary (cf. BIB002 ) as follows: D is (m, k)-transitive (m > k ≥ 1) if for every path P of length m there exists a path Q of length k with the same endvertices, such that V(Q) ⊂ V(P). A. Gyárfás, M.S. Jacobson and L.F. Kinch studied (m, k)-transitivity and obtained a characterization of (m, 1)-transitive tournaments for m ≥ 2. Using another approach, Z. Tuza BIB002 characterized (m, 1)-transitive semicomplete digraphs for every m ≥ 2 (and k = 1). Obviously, this characterization provides an O(n^2)-time algorithm for finding the minimum m such that a given semicomplete digraph D is (m, 1)-transitive. Z. Tuza BIB002 also obtained a characterization of (3, 2)-transitive semicomplete digraphs.
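The definition of (m, k)-transitivity can be checked directly, if inefficiently, by enumerating directed paths. The sketch below (our own illustrative Python, using brute-force enumeration rather than Tuza's O(n^2) characterization) tests (m, 1)-transitivity, i.e., whether every directed path of length m can be shortcut by a single arc between its endvertices.

```python
from itertools import permutations

def is_m1_transitive(vertices, arcs, m):
    """Brute-force test of (m, 1)-transitivity: for every directed path
    x_0 x_1 ... x_m, the arc (x_0, x_m) must be present."""
    arc_set = set(arcs)
    for walk in permutations(vertices, m + 1):   # candidate paths on m+1 distinct vertices
        if all((walk[i], walk[i + 1]) in arc_set for i in range(m)):
            if (walk[0], walk[m]) not in arc_set:
                return False
    return True

tt3 = [(0, 1), (0, 2), (1, 2)]      # transitive tournament on 3 vertices
c3  = [(0, 1), (1, 2), (2, 0)]      # directed 3-cycle
print(is_m1_transitive(range(3), tt3, 2))  # True
print(is_m1_transitive(range(3), c3, 2))   # False
```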
Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> The number of paths and cycles in MTs <s> This invention relates to a printing device of the type having a carriage which supports a printing roll. In order to perform the printing operation, the carriage is pivotally and slidably mounted on a slide rod. An arm of the carriage is mounted in a housing and is provided with a guide roller that is located in the same vertical plane as the printing roll. The guide roller cooperates with a guide means that extends parallel to the slide rod to keep the printing roll in engagement with a printing anvil and to enable the printing roll to swing upwardly into a raised position when arriving at one of the ends of the slide rod. <s> BIB001 </s> Cycles and paths in semicomplete multipartite digraphs, theorems and algorithms: a survey <s> The number of paths and cycles in MTs <s> Solving an old conjecture of Szele we show that the maximum number of directed Hamiltonian paths in a tournament onn vertices is at mostc · n3/2· n!/2n−1, wherec is a positive constant independent ofn. <s> BIB002
The main problems in the topic of this section are the following: 1) Find the maximum possible number of s-cycles (s-paths) in a MT with a given number of vertices in each partite set. 2) Find the minimum possible number of s-cycles in a strongly connected MT with a given number of vertices in each partite set. These problems were completely solved only in some special cases. The first problem (on cycles) was solved for tournaments when s = 3, 4 and for BTs when m = 4 (cf. ). Solving an old conjecture of T. Szele , N. Alon BIB002 showed the following: Theorem 7.1 The maximum number, P(n), of Hamiltonian paths in a tournament on n vertices satisfies P(n) ≤ c · n^{3/2} · n!/2^{n-1}, where c is a positive constant independent of n. The short proof of Theorem 7.1 is based on Minc's Conjecture BIB001 on permanents of (0,1)-matrices, proved by Bregman . Szele proved that P(n) ≥ n!/2^{n-1}, and hence the gap between the upper and lower bounds for P(n) is only a factor of O(n^{3/2}). It would be interesting to close this gap and determine P(n) up to a constant factor. The second problem was completely solved for tournaments for all s and for BTs when s = 4 (cf. ). For MTs the following two results were obtained. Theorem 7.2 Let T be a strongly connected m-partite tournament, m ≥ 3. Then T contains at least m − 2 3-cycles. Theorem 7.3 Let G be a complete m-partite graph, m ≥ 3, which is not isomorphic to K_{2,2,...,2} for odd m. Then there exists a strong orientation of G with exactly m − 2 3-cycles.
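The sketch below (our own illustrative Python) makes Szele's lower bound concrete: it counts Hamiltonian paths in random tournaments by brute force and compares the average count with n!/2^{n-1}, which is exactly the expected number of Hamiltonian paths over uniformly random tournaments, so some tournament attains at least this many.

```python
import math
import random
from itertools import permutations

def random_tournament(n, rng):
    """Orient each pair independently at random; beats[u][v] is True iff u -> v."""
    beats = [[False] * n for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < 0.5:
                beats[u][v] = True
            else:
                beats[v][u] = True
    return beats

def count_hamiltonian_paths(beats):
    n = len(beats)
    return sum(
        all(beats[p[i]][p[i + 1]] for i in range(n - 1))
        for p in permutations(range(n))
    )

n, rng = 7, random.Random(1)
counts = [count_hamiltonian_paths(random_tournament(n, rng)) for _ in range(20)]
print("average over 20 random tournaments:", sum(counts) / len(counts))
print("Szele's bound n!/2^(n-1):", math.factorial(n) / 2 ** (n - 1))  # 78.75 for n = 7
```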
Quantum games: a review of the history, current state, and interpretation <s> Introduction <s> Abstract A conceptual analysis of the classical information theory of Shannon (1948) shows that this theory cannot be directly generalized to the usual quantum case. The reason is that in the usual quantum mechanics of closed systems there is no general concept of joint and conditional probability. Using, however, the generalized quantum mechanics of open systems (A. Kossakowski 1972) and the generalized concept of observable (“semiobservable”, E.B. Davies and J.T. Lewis 1970) it is possible to construct a quantum information theory being then a straightforward generalization of Shannon's theory. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Introduction <s> On the program it says this is a keynote speech--and I don't know what a keynote speech is. I do not intend in any way to suggest what should be in this meeting as a keynote of the subjects or anything like that. I have my own things to say and to talk about and there's no implication that anybody needs to talk about the same thing or anything like it. So what I want to talk about is what Mike Dertouzos suggested that nobody would talk about. I want to talk about the problem of simulating physics with computers and I mean that in a specific way which I am going to explain. The reason for doing this is something that I learned about from Ed Fredkin, and my entire interest in the subject has been inspired by him. It has to do with learning something about the possibilities of computers, and also something about possibilities in physics. If we suppose that we know all the physical laws perfectly, of course we don't have to pay any attention to computers. It's interesting anyway to entertain oneself with the idea that we've got something to learn about physical laws; and if I take a relaxed view here (after all I 'm here and not at home) I'll admit that we don't understand everything. The first question is, What kind of computer are we going to use to simulate physics? Computer theory has been developed to a point where it realizes that it doesn't make any difference; when you get to a universal computer, it doesn't matter how it's manufactured, how it's actually made. Therefore my question is, Can physics be simulated by a universal computer? I would like to have the elements of this computer locally interconnected, and therefore sort of think about cellular automata as an example (but I don't want to force it). But I do want something involved with the <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Introduction <s> We show that a set of gates that consists of all one-bit quantum gates (U(2)) and the two-bit exclusive-or gate (that maps Boolean values (x,y) to (x,x ⊕y)) is universal in the sense that all unitary operations on arbitrarily many bits n (U(2 n )) can be expressed as compositions of these gates. We investigate the number of the above gates required to implement other gates, such as generalized Deutsch-Toffoli gates, that apply a specific U(2) transformation to one input bit if and only if the logical AND of all remaining input bits is satisfied. These gates play a central role in many proposed constructions of quantum computational networks. 
We derive upper and lower bounds on the exact number of elementary gates required to build up a variety of two- and three-bit quantum gates, the asymptotic number required for n-bit Deutsch-Toffoli gates, and make some observations about the number required for arbitrary n-bit unitary operations. <s> BIB003 </s> Quantum games: a review of the history, current state, and interpretation <s> Introduction <s> We explore quantum search from the geometric viewpoint of a complex projective space $CP$, a space of rays. First, we show that the optimal quantum search can be geometrically identified with the shortest path along the geodesic joining a target state, an element of the computational basis, and such an initial state as overlaps equally, up to phases, with all the elements of the computational basis. Second, we calculate the entanglement through the algorithm for any number of qubits $n$ as the minimum Fubini-Study distance to the submanifold formed by separable states in Segre embedding, and find that entanglement is used almost maximally for large $n$. The computational time seems to be optimized by the dynamics as the geodesic, running across entangled states away from the submanifold of separable states, rather than the amount of entanglement itself. <s> BIB004 </s> Quantum games: a review of the history, current state, and interpretation <s> Introduction <s> Scalability of a quantum computation requires that the information be processed on multiple subsystems. However, it is unclear how the complexity of a quantum algorithm, quantified by the number of entangling gates, depends on the subsystem size. We examine the quantum circuit complexity for exactly universal computation on many d-level systems (qudits). Both a lower bound and a constructive upper bound on the number of two-qudit gates result, proving a sharp asymptotic of theta(d(2n)) gates. This closes the complexity question for all d-level systems (d finite). The optimal asymptotic applies to systems with locality constraints, e.g., nearest neighbor interactions. <s> BIB005 </s> Quantum games: a review of the history, current state, and interpretation <s> Introduction <s> Finally, here is a modern, self-contained text on quantum information theory suitable for graduate-level courses. Developing the subject 'from the ground up' it covers classical results as well as major advances of the past decade. Beginning with an extensive overview of classical information theory suitable for the non-expert, the author then turns his attention to quantum mechanics for quantum information theory, and the important protocols of teleportation, super-dense coding and entanglement distribution. He develops all of the tools necessary for understanding important results in quantum information theory, including capacity theorems for classical, entanglement-assisted, private and quantum communication. The book also covers important recent developments such as superadditivity of private, coherent and Holevo information, and the superactivation of quantum capacity. This book will be warmly welcomed by the upcoming generation of quantum information theorists and the already established community of classical information theorists. 
<s> BIB006 </s> Quantum games: a review of the history, current state, and interpretation <s> Introduction <s> When elementary quantum systems, such as polarized photons, are used to transmit digital information, the uncertainty principle gives rise to novel cryptographic phenomena unachievable with traditional transmission media, e.g. a communications channel on which it is impossible in principle to eavesdrop without a high probability of disturbing the transmission in such a way as to be detected. Such a quantum channel can be used in conjunction with ordinary insecure classical channels to distribute random key information between two users with the assurance that it remains unknown to anyone else, even when the users share no secret information initially. We also present a protocol for coin-tossing by exchange of quantum messages, which is secure against traditional kinds of cheating, even by an opponent with unlimited computing power, but ironically can be subverted by use of a still subtler quantum phenomenon, the Einstein-Podolsky-Rosen paradox. <s> BIB007
Roots of the theory of quantum information lie in the ideas of Wiesner and Ingarden BIB001 . These authors proposed ways to incorporate the uncertainty or entropy of information within the uncertainty inherent in quantum mechanical processes. Uncertainty is a fundamental concept in both physics and the theory of information, hence serving as the natural link between the two disciplines. Uncertainty itself is defined in terms of probability distributions. Every quantum physical object produces a probability distribution when its state is measured. More precisely, a general state of an m-state quantum object is an element of the m-dimensional complex projective Hilbert space CP^m, say v = (v_1, ..., v_m). Upon measurement with respect to the observable states of the quantum object (which are the elements of an orthogonal basis of CP^m), v will produce the probability distribution (|v_1|^2, ..., |v_m|^2)/∑_j |v_j|^2 over the observable states. Hence, a notion of quantum entropy or uncertainty may be defined that coincides with the corresponding notion in classical information theory. Quantum information theory was further developed as the theory of quantum computation by Feynman and Deutsch BIB002 . Feynman outlines a primitive version of what is now known as an n-qubit quantum computer, that is, a physical system that emulates any unitary function Q : CP^1 ⊗ · · · ⊗ CP^1 → CP^1 ⊗ · · · ⊗ CP^1 (n factors), where CP^1 is the two-dimensional complex projective Hilbert space modeling the two-state quantum system or the qubit, in a way that Q can be expressed as a composition of unitary functions Q_j (also known as quantum logic gates) that act only on one or two qubits BIB003 BIB005 . Feynman's argument showed that it is possible to simulate two-state computations by Bosonic two-state quantum systems. Quantum computers crossed the engineering and commercialization thresholds in this decade, with the Canadian technology company D-Wave producing and selling a quantum-annealing-based quantum computer, and established technology industry giants like IBM, Intel, Google, and Microsoft devoting financial resources to initiate their own quantum computing efforts. More generally, quantum information theory has made great strides starting in the 1980s in the form of quantum communication protocols where, roughly speaking, one studies the transmission of quantum information over channels, and its applications. A milestone of quantum information theory is provably secure public key distribution, which uses the uncertainty inherent in quantum mechanics to guarantee security. This idea was first proposed by Charles Bennett and Gilles Brassard in 1984 at a conference in Bengaluru, India, and recently appeared in a journal BIB007 . Several companies, including Toshiba and IDquantique, offer commercial devices that can be used to set up quantum cryptography protocols. While the literature on quantum information theory is vast, we refer the reader to the books BIB006 for a further survey of quantum information theory. In the emerging field of quantum information technology, optimal implementation of quantum information processes will be of fundamental importance. To this end, the classic problem of optimizing a function's value will play a crucial role, with one looking to optimize the functional description of the quantum processes, as in BIB004 , for example. Generalizing further, solutions to the problem of simultaneous optimization of two or more functions will be even more crucial given the uncertainty inherent in quantum systems.
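As a concrete illustration of the measurement statistics described above, the short sketch below (our own illustrative Python) applies the Born rule to a state vector to obtain the probability distribution over the observable (basis) states.

```python
import numpy as np

def measurement_distribution(v):
    """Born rule: probability of observing basis state j is |v_j|^2 / ||v||^2."""
    v = np.asarray(v, dtype=complex)
    probs = np.abs(v) ** 2
    return probs / probs.sum()

# A qubit state (1/sqrt(2))(|0> + |1>) gives the uniform distribution (0.5, 0.5).
print(measurement_distribution([1 / np.sqrt(2), 1 / np.sqrt(2)]))
```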
The problem of simultaneously optimizing two or more such objective functions forms the essence of non-cooperative game theory, where notional players are introduced as having the said functions as payoff or objective functions. The original single-objective optimization problem can also be studied as a single player game or what is also known as a dynamic game. We will give a mathematically formal discussion of non-cooperative games and their quantum mechanical counterparts in sections 2 and 3, followed by a discussion in section 4 on the history of how such quantum games have been viewed and criticized in the literature in the context of the optimal implementation of quantum technologies as well as the quantum mechanical implementation of non-cooperative games. In section 4.1 we contrast cooperative and non-cooperative games, and in section 4.2, we introduce a new perspective on quantum entanglement as a mechanism for establishing social equilibrium. Section 5 gives an overview of quantum games and quantum algorithms and communication protocols, section 6 concerns Bell's inequalities and their role in quantum Bayesian games, and sections 7 through 10 concern classical and quantum versions of stochastic and dynamic games. We give the current state of affairs in the experimental realization of quantum games in section 11, followed by section 12, which discusses potential future applications of quantum games.
Quantum games: a review of the history, current state, and interpretation <s> Non-cooperative games <s> One may define a concept of an n -person game in which each player has a finite set of pure strategies and in which a definite set of payments to the n players corresponds to each n -tuple of pure strategies, one strategy being taken for each player. For mixed strategies, which are probability distributions over the pure strategies, the pay-off functions are the expectations of the players, thus becoming polylinear forms … <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Non-cooperative games <s> Abstract : Kakutani's Fixed Point Theorem states that in Euclidean n-space a closed point to (non-void) convex set map of a convex compact set into itself has a fixed point. Kakutani showed that this implied the minimax theorem for finite games. The object of this note is to point out that Kakutani's theorem may be extended to convex linear topological spaces, and implies the minimax theorem for continuous games with continuous payoff as well as the existence of Nash equilibrium points. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Non-cooperative games <s> This article is a complement to an earlier one,1 in which at least two questions have been left in the shadows. Here we shall focus our attention on them. <s> BIB003 </s> Quantum games: a review of the history, current state, and interpretation <s> Non-cooperative games <s> Preface 1. Decision-Theoretic Foundations 1.1 Game Theory, Rationality, and Intelligence 1.2 Basic Concepts of Decision Theory 1.3 Axioms 1.4 The Expected-Utility Maximization Theorem 1.5 Equivalent Representations 1.6 Bayesian Conditional-Probability Systems 1.7 Limitations of the Bayesian Model 1.8 Domination 1.9 Proofs of the Domination Theorems Exercises 2. Basic Models 2.1 Games in Extensive Form 2.2 Strategic Form and the Normal Representation 2.3 Equivalence of Strategic-Form Games 2.4 Reduced Normal Representations 2.5 Elimination of Dominated Strategies 2.6 Multiagent Representations 2.7 Common Knowledge 2.8 Bayesian Games 2.9 Modeling Games with Incomplete Information Exercises 3. Equilibria of Strategic-Form Games 3.1 Domination and Ratonalizability 3.2 Nash Equilibrium 3.3 Computing Nash Equilibria 3.4 Significance of Nash Equilibria 3.5 The Focal-Point Effect 3.6 The Decision-Analytic Approach to Games 3.7 Evolution. Resistance. and Risk Dominance 3.8 Two-Person Zero-Sum Games 3.9 Bayesian Equilibria 3.10 Purification of Randomized Strategies in Equilibria 3.11 Auctions 3.12 Proof of Existence of Equilibrium 3.13 Infinite Strategy Sets Exercises 4. Sequential Equilibria of Extensive-Form Games 4.1 Mixed Strategies and Behavioral Strategies 4.2 Equilibria in Behavioral Strategies 4.3 Sequential Rationality at Information States with Positive Probability 4.4 Consistent Beliefs and Sequential Rationality at All Information States 4.5 Computing Sequential Equilibria 4.6 Subgame-Perfect Equilibria 4.7 Games with Perfect Information 4.8 Adding Chance Events with Small Probability 4.9 Forward Induction 4.10 Voting and Binary Agendas 4.11 Technical Proofs Exercises 5. Refinements of Equilibrium in Strategic Form 5.1 Introduction 5.2 Perfect Equilibria 5.3 Existence of Perfect and Sequential Equilibria 5.4 Proper Equilibria 5.5 Persistent Equilibria 5.6 Stable Sets 01 Equilibria 5.7 Generic Properties 5.8 Conclusions Exercises 6. 
Games with Communication 6.1 Contracts and Correlated Strategies 6.2 Correlated Equilibria 6.3 Bayesian Games with Communication 6.4 Bayesian Collective-Choice Problems and Bayesian Bargaining Problems 6.5 Trading Problems with Linear Utility 6.6 General Participation Constraints for Bayesian Games with Contracts 6.7 Sender-Receiver Games 6.8 Acceptable and Predominant Correlated Equilibria 6.9 Communication in Extensive-Form and Multistage Games Exercises Bibliographic Note 7. Repeated Games 7.1 The Repeated Prisoners Dilemma 7.2 A General Model of Repeated Garnet 7.3 Stationary Equilibria of Repeated Games with Complete State Information and Discounting 7.4 Repeated Games with Standard Information: Examples 7.5 General Feasibility Theorems for Standard Repeated Games 7.6 Finitely Repeated Games and the Role of Initial Doubt 7.7 Imperfect Observability of Moves 7.8 Repeated Wines in Large Decentralized Groups 7.9 Repeated Games with Incomplete Information 7.10 Continuous Time 7.11 Evolutionary Simulation of Repeated Games Exercises 8. Bargaining and Cooperation in Two-Person Games 8.1 Noncooperative Foundations of Cooperative Game Theory 8.2 Two-Person Bargaining Problems and the Nash Bargaining Solution 8.3 Interpersonal Comparisons of Weighted Utility 8.4 Transferable Utility 8.5 Rational Threats 8.6 Other Bargaining Solutions 8.7 An Alternating-Offer Bargaining Game 8.8 An Alternating-Offer Game with Incomplete Information 8.9 A Discrete Alternating-Offer Game 8.10 Renegotiation Exercises 9. Coalitions in Cooperative Games 9.1 Introduction to Coalitional Analysis 9.2 Characteristic Functions with Transferable Utility 9.3 The Core 9.4 The Shapkey Value 9.5 Values with Cooperation Structures 9.6 Other Solution Concepts 9.7 Colational Games with Nontransferable Utility 9.8 Cores without Transferable Utility 9.9 Values without Transferable Utility Exercises Bibliographic Note 10. Cooperation under Uncertainty 10.1 Introduction 10.2 Concepts of Efficiency 10.3 An Example 10.4 Ex Post Inefficiency and Subsequent Oilers 10.5 Computing Incentive-Efficient Mechanisms 10.6 Inscrutability and Durability 10.7 Mechanism Selection by an Informed Principal 10.8 Neutral Bargaining Solutions 10.9 Dynamic Matching Processes with Incomplete Information Exercises Bibliography Index <s> BIB004 </s> Quantum games: a review of the history, current state, and interpretation <s> Non-cooperative games <s> Ken Binmore's previous game theory textbook, Fun and Games (D.C. Heath, 1991), carved out a significant niche in the advanced undergraduate market; it was intellectually serious and more up-to-date than its competitors, but also accessibly written. Its central thesis was that game theory allows us to understand many kinds of interactions between people, a point that Binmore amply demonstrated through a rich range of examples and applications. This replacement for the now out-of-date 1991 textbook retains the entertaining examples, but changes the organization to match how game theory courses are actually taught, making Playing for Real a more versatile text that almost all possible course designs will find easier to use, with less jumping about than before. In addition, the problem sections, already used as a reference by many teachers, have become even more clever and varied, without becoming too technical. Playing for Real will sell into advanced undergraduate courses in game theory, primarily those in economics, but also courses in the social sciences, and serve as a reference for economists. <s> BIB005
Non-cooperative game theory is the mathematical foundation of making optimal decisions in competitive situations based on available information. The written philosophical foundations of Game Theory trace back to at least the great works of Sun Tsu (The Art of War), circa 500 BCE in China, and Chanakya (Arthashastra), circa 250 BCE in India. Sun Tsu captures the essence of game-theoretic thinking in the following (translated ) lines from The Art of War: Knowing the other and knowing oneself, In one hundred battles no danger, Not knowing the other and knowing oneself, One victory for one loss, Not knowing the other and not knowing oneself, In every battle certain defeat (Denma translation). In short, each competitor or player, in the competitive situation or game, should know the preferences of each player over the outcomes of the game, and knowing this information is sufficient for each player to make optimal decisions or strategic choices. The word "optimal" requires further elaboration. In noncooperative game theory, there are two ways to utilize it. The first is via the notion of Nash equilibrium, proposed by Nobel Laureate John Nash BIB001 , where each player's strategic choice, given the strategic choices of all the other players, produces an outcome of the game that maximizes the player's preferences over the outcomes. In other words, unilateral deviation by the player to another strategic choice will produce an outcome which is less preferable to the player. Further yet, one can say that each player's strategic choice is a best response to every other. The second way the word "optimal" is used in game theory is via the notion of Pareto-optimality, where the strategic choices made by the players produce an outcome of the game that maximizes the preferences of every player. In other words, a unilateral deviation by any one player to some other strategic choice will produce an outcome which is less preferred by some player. If the adversely affected player is also the one who unilaterally deviated, then the Pareto-optimal outcome is also a Nash equilibrium. Note that a Nash equilibrium is a more likely outcome in a non-cooperative game than a Pareto-optimal one, in the sense that on average, or in repeated games, players' strategy choices will tend toward the Nash equilibrium. Formalizing, we say that an N-player, non-cooperative game in normal form is a function Γ : S_1 × S_2 × · · · × S_N → O, together with the notion of non-identical preferences over the elements of the set of outcomes O, one for every "player" of the game. The preferences are a pre-ordering of the elements of O, that is, for l, m, n ∈ O: m ⪯ m, and (l ⪯ m and m ⪯ n) =⇒ l ⪯ n, where the symbol ⪯ denotes "of less or equal preference". Preferences are typically quantified numerically for the ease of calculation of the payoffs. To this end, functions Γ_i are introduced which act as the payoff function for each player i and typically map elements of O into the real numbers in a way that preserves the preferences of the players. That is, ⪯ is replaced with ≤ when analyzing the payoffs. The factor S_i in the domain of Γ is said to be the strategy set of player i, and a play of Γ is an n-tuple of strategies, one per player, producing a payoff to each player in terms of his preferences over the elements of O in the image of Γ. A Nash equilibrium is a play of Γ in which every player employs a strategy that is a best reply, with respect to his preferences over the outcomes, to the strategic choice of every other player.
In other words, unilateral deviation from a Nash equilibrium by any one player, in the form of a different choice of strategy, will produce an outcome which is less preferred by that player than before. Following Nash, we say that a play p′ of Γ counters another play p if Γ_i(p′) ≥ Γ_i(p) for all players i, and that a self-countering play is a (Nash) equilibrium. Let C_p denote the set of all the plays of Γ that counter p. Denote ∏_{i=1}^n S_i by S for notational convenience, and note that C_p ⊂ S and therefore C_p ∈ 2^S. Further note that the game Γ can be factored as Γ = E ∘ Γ_C, where to any play p the map Γ_C associates its countering set C_p via the payoff functions Γ_i. The set-valued map Γ_C may be viewed as a pre-processing stage where players seek out a self-countering play, and if one is found, it is mapped to its corresponding outcome in O by the function E. The condition for the existence of a self-countering play, and therefore of a Nash equilibrium, is that Γ_C have a fixed point, that is, an element p* ∈ S such that p* ∈ C_{p*}. In a general set-theoretic setting for non-cooperative games, the map Γ_C may not have a fixed point. Hence, not all non-cooperative games will have a Nash equilibrium. However, according to Nash's theorem, when the S_i are finite and the game is extended to its mixed version, that is, the version in which randomization via probability distributions is allowed over the elements of all the S_i, as well as over the elements of O, then Γ_C has at least one fixed point and therefore at least one Nash equilibrium. Formally, given a game Γ with finite S_i for all i, its mixed version is the product function Λ : ∆(S_1) × · · · × ∆(S_n) → ∆(O), where ∆(S_i) is the set of probability distributions over the i-th player's strategy set S_i, and the set ∆(O) is the set of probability distributions over the outcomes O. Payoffs are now calculated as expected payoffs, that is, weighted averages of the values of Γ_i, for each player i, with respect to probability distributions in ∆(O) that arise as the product of the plays of Λ. Denote the expected payoff to player i by the function Λ_i. Also, note that Λ restricts to Γ. In such n-player games, at least one Nash equilibrium play is guaranteed to exist as a fixed point of Λ, via Kakutani's fixed-point theorem. Kakutani's fixed-point theorem: Let S ⊂ R^n be nonempty, bounded, closed, and convex, and let F : S → 2^S be an upper semi-continuous set-valued mapping such that F(s) is non-empty, closed, and convex for all s ∈ S. Then there exists some s* ∈ S such that s* ∈ F(s*). To see how Nash's theorem follows, let S = ∏_{i=1}^n ∆(S_i). Then S ⊂ R^n, and S is non-empty, bounded, and closed because it is a finite product of the simplices ∆(S_i), each of which is non-empty, bounded, and closed. The set S is also convex because it is the convex hull of a finite set of points. Next, let C_p be the set of all plays of Λ that counter the play p. Then C_p is nonempty, closed, and convex. Further, C_p ⊂ S and therefore C_p ∈ 2^S. Since Λ is a game, it factors according to (7), where the map Λ_C associates a play to its countering set via the payoff functions Λ_i. Since the Λ_i are all continuous, Λ_C is upper semi-continuous. Further, Λ_C(s) is non-empty, closed, and convex for all s ∈ S (we will establish the convexity of Λ_C(s) below; the remaining conditions are also straightforward to establish). Hence, Kakutani's theorem applies and there exists an s* ∈ S that counters itself, that is, s* ∈ Λ_C(s*), and is therefore a Nash equilibrium.
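As a toy illustration of these definitions, the sketch below (our own illustrative Python; the strategy labels, payoff numbers, and function names are ours, using the conventional Prisoner's Dilemma table rather than anything defined in this section) encodes a two-player normal-form game via its payoff functions and finds the self-countering plays, i.e., the pure-strategy Nash equilibria, by checking that no player gains from a unilateral deviation.

```python
from itertools import product

# Strategy sets and payoff functions Gamma_i for a two-player game; the outcome
# of a play is identified with the play itself, and the numbers are the
# conventional Prisoner's Dilemma payoffs.
S1 = S2 = ("C", "D")
payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def pure_nash_equilibria(S1, S2, payoff):
    """Return the plays from which no player gains by a unilateral deviation
    (the self-countering plays in the terminology above)."""
    equilibria = []
    for p in product(S1, S2):
        u1, u2 = payoff[p]
        stable_1 = all(payoff[(d, p[1])][0] <= u1 for d in S1)  # player 1 cannot improve
        stable_2 = all(payoff[(p[0], d)][1] <= u2 for d in S2)  # player 2 cannot improve
        if stable_1 and stable_2:
            equilibria.append(p)
    return equilibria

print(pure_nash_equilibria(S1, S2, payoff))  # [('D', 'D')]
```

The unique self-countering play ('D', 'D') is the familiar Nash equilibrium of the Prisoner's Dilemma; it is not Pareto-optimal, since both players prefer ('C', 'C').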
The function E_Π simply maps s* to ∆(O) as the product probability distribution from which the Nash equilibrium expected payoff is computed for each player. The convexity of Λ_C(s) = C_p is straightforward to show. Let q, r ∈ C_p. Then Λ_i(q) ≥ Λ_i(p) and Λ_i(r) ≥ Λ_i(p) for all i (10). Now let 0 ≤ µ ≤ 1 and consider the convex combination µq + (1 − µ)r, which we will show to be in C_p. First note that µq + (1 − µ)r ∈ S, because S is the product of the convex sets ∆(S_i). Next, since the Λ_i are all linear, the inequalities in (10) and the restriction on the values of µ give Λ_i(µq + (1 − µ)r) = µΛ_i(q) + (1 − µ)Λ_i(r) ≥ µΛ_i(p) + (1 − µ)Λ_i(p) = Λ_i(p) for all i, whereby µq + (1 − µ)r ∈ C_p and C_p is convex. Going back to the game Γ in (5) defined in the general set-theoretic setting, certainly Kakutani's theorem would apply to Γ if the conditions are right, such as when the image set of Γ is pre-ordered and Γ is linear. Kakutani's fixed-point theorem can be generalized to include subsets S of convex topological vector spaces, as was done by Glicksberg in BIB002 . The following is a paraphrased but equivalent statement of Glicksberg's fixed-point theorem (the term "linear space" in the original statement of Glicksberg's theorem is equivalent to the term vector space): Glicksberg's fixed-point theorem: Let H be a nonempty, compact, convex subset of a convex Hausdorff topological vector space, and let Φ : H → 2^H be an upper semi-continuous set-valued mapping such that Φ(h) is non-empty and convex for all h ∈ H. Then there exists some h* ∈ H such that h* ∈ Φ(h*). Using Glicksberg's fixed-point theorem, one can show that a Nash equilibrium exists in games where the strategy sets are infinite, or possibly even uncountably infinite. Non-cooperative game theory has been an immensely successful mathematical model for studying scientific and social phenomena. In particular, it has offered key insights into equilibrium and optimal behavior in economics, evolutionary biology, and politics. As with any established subject, game theory has a vast literature available; we refer the reader to BIB005 BIB004 . Given the successful interface of non-cooperative game theory with several other subjects, it is no wonder that physicists have explored the possibility of using Game Theory to model physical processes as games and study their equilibrium behaviors. The first paper that the authors are aware of in which aspects of quantum physics, wave mechanics in particular, were viewed as games is BIB003 .
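The linearity of the expected payoffs Λ_i used in the convexity argument above can be seen concretely in the two-player case, where Λ_1(x, y) = xᵀAy and Λ_2(x, y) = xᵀBy for payoff matrices A and B. The sketch below (our own illustrative Python/NumPy; Matching Pennies is our stock example, not a game discussed in this section) computes these expected payoffs and checks the best-reply condition for the uniform mixed play, which is that game's Nash equilibrium; by linearity it suffices to check deviations to pure strategies.

```python
import numpy as np

# Matching Pennies: A is player 1's payoff matrix, B = -A is player 2's.
A = np.array([[1, -1], [-1, 1]])
B = -A

def expected_payoffs(x, y):
    """Expected payoffs Lambda_i of the mixed play (x, y); linear in x and in y."""
    return float(x @ A @ y), float(x @ B @ y)

def is_mixed_equilibrium(x, y, tol=1e-9):
    """Check that neither player can improve by deviating to a pure strategy."""
    u1, u2 = expected_payoffs(x, y)
    best1 = max(float(np.eye(2)[i] @ A @ y) for i in range(2))
    best2 = max(float(x @ B @ np.eye(2)[j]) for j in range(2))
    return best1 <= u1 + tol and best2 <= u2 + tol

x = y = np.array([0.5, 0.5])
print(expected_payoffs(x, y))        # (0.0, 0.0)
print(is_mixed_equilibrium(x, y))    # True
```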
Quantum games: a review of the history, current state, and interpretation <s> Non-cooperative quantum games <s> We investigate the quantization of non-zero sum games. For the particular case of the Prisoners' Dilemma we show that this game ceases to pose a dilemma if quantum strategies are allowed for. We also construct a particular quantum strategy which always gives reward if played against any classical strategy. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Non-cooperative quantum games <s> Poker has become a popular pastime all over the world. At any given moment one can find tens, if not hundreds, of thousands of players playing poker via their computers on the major on-line gaming sites. Indeed, according to the Vancouver, B.C. based pokerpulse.com estimates, more than 190 million US dollars daily is bet in on-line poker rooms. But communication and computation are changing as the relentless application of Moore's Law brings computation and information into the quantum realm. The quantum theory of games concerns the behavior of classical games when played in the coming quantum computing environment or when played with quantum information. In almost all cases, the"quantized"versions of these games afford many new strategic options to the players. The study of so-called quantum games is quite new, arising from a seminal paper of D. Meyer \cite{Meyer} published in Physics Review Letters in 1999. The ensuing near decade has seen an explosion of contributions and controversy over what exactly a quantized game really is and if there is indeed anything new for game theory. With the settling of some of these controversies \cite{Bleiler}, it is now possible to fully analyze some of the basic endgame models from the game theory of Poker and predict with confidence just how the optimal play of Poker will change when played in the coming quantum computation environment. The analysis here shows that for certain players,"entangled"poker will allow results that outperform those available to players"in real life". <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Non-cooperative quantum games <s> Two qubit quantum computations are viewed as two player, strictly competitive games and a game-theoretic measure of optimality of these computations is developed. To this end, the geometry of Hilbert space of quantum computations is used to establish the equivalence of game-theoretic solution concepts of Nash equilibrium and mini-max outcomes in games of this type, and quantum mechanisms are designed for realizing these mini-max outcomes. <s> BIB003
Figure 1: A) The Penny Flip game in normal form, with N and F representing the players' strategic choices of No-Flip and Flip, respectively. B) The quantum circuit for this game, with P1 and P2 being quantum strategies of Player 1 and Q being the quantum strategy of Player 2, which, when chosen to be the Hadamard gate H, allows Player 2 to always win the game when the input qubit state is either |0⟩ or |1⟩.
A formal merger of non-cooperative game theory and quantum computing was initiated by D. Meyer in , who was motivated to study efficient quantum algorithms and, to this end, proposed a game-theoretic model for quantum algorithms. His focus of study was the situation where, in a particular two-player game, one of the players has access to quantum strategies. Meyer in fact did not introduce the term "quantum game" in his work; rather, this was done by another group of authors whose work will be discussed shortly. Meyer defined a quantum strategy to be a single-qubit logic gate in the quantum computation for which the game model was constructed. The particular game he considers is the Penny Flip game of Figure 1A), which he then realizes as the single-qubit quantum circuit of Figure 1B), in which the first player employs P1 and P2 from the restricted set of quantum operations, whereas Player 2 is allowed to employ, in particular, the Hadamard operation on either of the qubit states |0⟩ or |1⟩. When the quantum circuit is played out with respect to the computational basis of the Hilbert space, one sees that Player 2 always wins the game. A similar two-player game model was applied to quantum algorithms such as Simon's and Grover's algorithms. Meyer showed that in this setting, the player with access to a proper quantum strategy (and not simply a classical one residing inside a quantum one) would always win this game. He further showed that if both players had access to proper quantum strategies, then in a strictly competitive or zero-sum game (where the preferences of the players over the outcomes are diametrically opposite), a Nash equilibrium need not exist. However, in the case where players are allowed to choose their quantum strategies with respect to a probability distribution, that is, employ mixed quantum strategies, Meyer used Glicksberg's fixed-point theorem to show that a Nash equilibrium would always exist. Meyer's work also provides a way to study equilibrium behavior of quantum computational mechanisms. The term quantum game appears to have been first used by Eisert, Wilkens, and Lewenstein in their paper BIB001, which was published soon after Meyer's work. These authors were interested in, as they put it, "... the quantization of non-zero sum games". At face value, this expression can create controversy (and it has), since quantization is a physical process whereas a game is primarily an abstract concept. However, Chess, Poker, and Football are examples of games that can be implemented physically. It becomes clear upon reading the paper that the authors' goal is to investigate the consequences of a non-cooperative game implemented quantum physically. More accurately, Eisert et al. give a quantum computational implementation of Prisoner's Dilemma. This implementation is reproduced in Figure 2. Eisert et al. show that in their quantum computational implementation of the non-strictly competitive game of Prisoner's Dilemma, when followed by quantum measurement, players can achieve a Nash equilibrium that is also Pareto-optimal.
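Returning briefly to Meyer's result: the penny-flip advantage can be illustrated with a minimal numpy calculation. The sketch below follows the standard presentation of the game, in which the quantum player applies a Hadamard before and after the classical player's flip/no-flip move; this gate ordering is an assumption of the sketch rather than a transcription of Figure 1.

```python
import numpy as np

# Computational basis state and single-qubit gates.
ket0 = np.array([1, 0], dtype=complex)
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)                 # classical flip
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard

# Quantum player sandwiches the classical move with Hadamards: H . P . H |0>.
for name, P in [("no-flip (I)", I), ("flip (X)", X)]:
    final = H @ P @ H @ ket0
    prob_heads = abs(final[0]) ** 2   # probability of measuring |0> ("heads")
    print(f"classical move = {name}: P(heads) = {prob_heads:.1f}")

# Both branches print P(heads) = 1.0: the classical flip has no effect on the
# superposition, so the quantum player wins regardless of the classical choice.
```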
One should view the "EWL quantization protocol" for Prisoner's Dilemma as an extension of the original game to include higher order randomization via quantum superposition and entanglement followed by measurement BIB002, similar to the way game theorists have traditionally extended (or physically implemented) a game to include randomization via probability distributions. And indeed, Eisert et al. ensure that their quantum game restricts to the original Prisoner's Dilemma. Inspired by the EWL quantization protocol, Marinatto and Weber proposed the "MW quantization protocol" in . As Figure 3 shows, the MW protocol differs from the EWL protocol only in the absence of the dis-entangling gate. Whereas Meyer's seminal work laid down the mathematical foundation of quantum games via its Nash equilibrium result using a fixed-point theorem, the works of Eisert et al. and Marinatto et al. have provided the dominant protocols for the quantization of games. But before discussing the impact of these works on the subject of quantum game theory, it is pertinent to introduce a mathematically formal definition of a non-cooperative quantum game in normal form that is consistent with these authors' perspectives. An n-player quantum game in normal form arises from (5) when one introduces quantum physically relevant restrictions. We define a pure strategy quantum game (in normal form) to be any unitary function Q from the product CP^{d_1} × · · · × CP^{d_n} of the players' pure quantum strategy spaces into the projective Hilbert space of the joint system, where CP^{d_i} is the d_i-dimensional complex projective Hilbert space of pure quantum states that constitutes player i's pure quantum strategies, and where the joint state space serves as the set of outcomes, with a notion of non-identical preferences defined over its elements, one per player BIB003. Figure 4 captures this definition as a quantum circuit diagram.
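For concreteness, the EWL construction can be sketched numerically. The following numpy fragment uses the textbook choices rather than a transcription of Figure 2: maximal entanglement with J = (I⊗I + i D⊗D)/√2, the strategies Ĉ = I, D̂ = [[0,1],[−1,0]], Q̂ = iσ_z, and the usual Prisoner's Dilemma payoffs (r, p, t, s) = (3, 1, 5, 0); these particular matrices and numbers are assumptions of the sketch. It reproduces the familiar results: classical strategy pairs recover the classical outcomes, while (Q̂, Q̂) yields the Pareto-optimal payoff (3, 3).

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
C = I2                                            # "cooperate" = identity
D = np.array([[0, 1], [-1, 0]], dtype=complex)    # "defect"
Q = np.array([[1j, 0], [0, -1j]], dtype=complex)  # Eisert et al.'s quantum move

# Maximally entangling gate J (gamma = pi/2) and its inverse.
J = (np.kron(I2, I2) + 1j * np.kron(D, D)) / np.sqrt(2)
Jdag = J.conj().T

# Prisoner's Dilemma payoffs indexed by the basis outcomes |CC>,|CD>,|DC>,|DD>.
payoff_A = np.array([3, 0, 5, 1])
payoff_B = np.array([3, 5, 0, 1])

def ewl_payoffs(UA, UB):
    """Expected payoffs in the EWL protocol for the strategy pair (UA, UB)."""
    psi0 = np.zeros(4, dtype=complex); psi0[0] = 1.0        # initial state |CC>
    psi = Jdag @ np.kron(UA, UB) @ J @ psi0                 # J^dag (UA x UB) J |CC>
    probs = np.abs(psi) ** 2                                # measurement statistics
    return float(probs @ payoff_A), float(probs @ payoff_B)

for label, (UA, UB) in {"(C,C)": (C, C), "(D,D)": (D, D),
                        "(Q,D)": (Q, D), "(Q,Q)": (Q, Q)}.items():
    print(label, ewl_payoffs(UA, UB))
# (C,C) -> (3,3) and (D,D) -> (1,1): the classical outcomes are preserved.
# (Q,D) -> (5,0) and (Q,Q) -> (3,3): the quantum move breaks the dilemma.
```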
Figure 3: Quantum circuit for the MW quantization scheme. Although similar to the EWL scheme, this scheme does not restrict to the original classical game due to the lack of the dis-entangling operation before quantum measurement. This is because the classical game is encoded into the Hilbert space as the elements of the computational basis.
A mixed quantum game would then be any function R : ∆(CP^{d_1}) × · · · × ∆(CP^{d_n}) → ∆(O), where ∆ represents the set of probability distributions over the argument. Our definition of a quantum game, in both its pure form above and its mixed form (13), is consistent with Meyer's perspective in the sense that it allows one to optimize a quantum mechanism under constraints by defining payoff functions before measurement, and it is consistent with the EWL perspective if one defines the payoff functions after measurement as expected values. As mentioned earlier, Meyer used Glicksberg's fixed-point theorem to establish the guarantee of Nash equilibrium in the mixed quantum game R. This is not surprising given that probabilistic mixtures form a convex structure, which is an essential ingredient for fixed-point theorems to hold on "flat" manifolds such as R^n. However, it was only very recently that two of the current authors showed that Nash equilibrium via a fixed-point theorem can also be guaranteed in the quantum game Q BIB002. These authors used the Riemannian manifold structure of CP^n to invoke John Nash's other, mathematically more popular theorem, known as the Nash embedding theorem: Every compact Riemannian manifold can be (isometrically) embedded into R^m for a sufficiently large m. The Nash embedding theorem tells us that CP^n is diffeomorphic to its image under a length-preserving map into R^m. With suitable considerations in place, it follows that Kakutani's theorem applies to the image of CP^n in R^m. Now, tracing the diffeomorphism back to CP^n guarantees the existence of Nash equilibrium in the pure quantum game Q. Another key insight established in BIB002 is that, just as in classical games, linearity of the payoff functions is a fundamental requirement for guaranteeing the existence of Nash equilibrium in pure quantum games. Hence, quantizations of games such as the EWL protocol, in which the payoff is the expected value computed after quantum measurement, cannot guarantee Nash equilibrium. On the other hand, the problem of pure state preparation, when viewed as a quantum game with the overlap (measured by the inner product) of quantum states as the payoff function, guarantees Nash equilibrium.
Figure 4: A) An n-player quantum game Q as an n-qubit quantum logic gate, with the provision that preferences are defined over the elements of the computational basis of the Hilbert space, one per player. B) An example of playing the quantum game Q (equivalently, implementing the quantum logic gate) as a quantum circuit comprised of only one-qubit logic gates (strategies) and two-qubit logic gates (quantum mediated communication), using matrix decomposition techniques such as the cosine-sine decomposition BIB001.
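Returning to the MW scheme of Figure 3, its non-classicality can be made concrete with a small numpy sketch. The sketch assumes the commonly used form of the scheme: a maximally entangled initial state, players applying only I or σ_x, and a computational-basis measurement with no dis-entangling gate; the Prisoner's Dilemma payoffs are reused from the previous sketch purely for illustration. Even when both players choose the "classical" identity move, the outcome distribution is an even mixture of |CC⟩ and |DD⟩ rather than the classical outcome (C, C), which is precisely why the scheme does not restrict to the original game.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

# Maximally entangled initial state (|CC> + |DD>)/sqrt(2); no dis-entangling gate.
psi_in = np.zeros(4, dtype=complex)
psi_in[0] = psi_in[3] = 1 / np.sqrt(2)

payoff_A = np.array([3, 0, 5, 1])   # PD payoffs over |CC>,|CD>,|DC>,|DD>
payoff_B = np.array([3, 5, 0, 1])

def mw_payoffs(UA, UB):
    """Expected payoffs in the MW scheme: local moves, then measurement."""
    probs = np.abs(np.kron(UA, UB) @ psi_in) ** 2
    return float(probs @ payoff_A), float(probs @ payoff_B)

print(mw_payoffs(I2, I2))  # (2.0, 2.0): a 50/50 mix of (3,3) and (1,1), not the classical (3,3)
print(mw_payoffs(X, X))    # (2.0, 2.0) as well -- the classical game is not recovered
```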
Criticism of quantum games - a discussion
Criticism of quantum games has historically been focused on the EWL quantization protocol, and we will discuss this criticism in detail below. On the other hand, we conjecture that the reason Meyer's work has not seen much criticism is its mathematically formal foundation, and we also note that the MW quantization protocol has not been subjected to the same level of scrutiny as the EWL protocol vis-a-vis quantum physical implementation of games. This is remarkable because the MW protocol does not restrict to the original classical two-player game (Prisoner's Dilemma, for example) coded into the Hilbert space via identification with the elements of a fixed basis. Therefore, the MW protocol holds little game-theoretic meaning! Frackiewicz has produced an adaptation of the MW protocol in BIB004 which attempts to rectify this protocol's deficiencies. Nonetheless, the original, game-theoretically questionable version of MW still appears in quantum game theory papers, for example BIB005. Although the MW quantization protocol lacks game-theoretic substance in its original form, it is amenable to interpretation as an example of applying non-cooperative game theory to quantum mechanics, or "gaming the quantum" BIB003. In this interpretation, the MW protocol represents a game-theoretic approach to designing quantum computational mechanisms which exhibit optimal behavior under multiple constraints. This makes the MW protocol more akin to Meyer's approach of using game theory to gain insights into quantum algorithms for quantum computation and communication. We will discuss this idea further in section 5. The remainder of this section is devoted to a discussion of the EWL quantization protocol and its criticism. In BIB001, van Enk et al. state that the output of the EWL protocol for a specific and finite set of quantum strategies, after measurement, produces a function that is an extension of Prisoner's Dilemma but is entirely non-quantum mechanical. These authors argue that since this post-measurement function emulates the results of the EWL quantization protocol, the quantum nature of the latter is redundant. However, if this criticism is taken seriously, then extensions of Prisoner's Dilemma via probability distributions can also be restricted to specific, finite mixed strategy sets to produce a larger game that is entirely non-probabilistic and which has a different structure than the original game! The source of this criticism appears to be a confusion between descriptive and prescriptive interpretations of game theory. The mixed game should not be understood as a description of a game that utilizes piece-wise larger, non-probabilistic games. Rather, the reasoning behind extending to a mixed game is prescriptive, allowing one to design a mechanism that identifies probability distributions over the players' strategies (mixed strategies), which, when mapped to probability distributions over the outcomes of the game via the product function, produce an expected outcome of the game as Nash equilibrium. From this point of view, the EWL protocol is a perfectly valid higher order mechanism for extending Prisoner's Dilemma. Another criticism by van Enk et al. of the EWL quantization protocol is that it does not preserve the non-cooperative nature of Prisoner's Dilemma due to the presence of quantum entanglement-generated correlations. Eisert et al. have argued that entanglement can be viewed as an honest referee who communicates to the players on their behalf. But van Enk et al.
insist that introducing communication between the players "...blurs the contrast between cooperative and non-cooperative games". This is true, but classical game theory also has a long and successful history of blurring this distinction through the use of mediated communication. Bringing an honest referee into a game is just another form of game extension known as mediated communication which, to be fair, can easily be mistaken for a form of cooperation. In fact, however, such games are non-cooperative, and Nash equilibrium still holds as the suitable solution concept. It is only when one tries to relate the Nash equilibrium in the extended game (with mediated communication) to a notion of equilibrium in the original game that the broader notion of correlated equilibrium arises. The motivation for introducing mediated communication in games comes from the desire to realize probability distributions over the outcomes of a game which are not in the image of the mixed game. From this perspective, the EWL protocol could be interpreted as a higher order extension of Prisoner's Dilemma to include quantum mediated communication. An excellent, mathematically formal explanation of the latter interpretation can be found in BIB002. Finally, in BIB006, Benjamin et al. argue that Nash equilibrium in the EWL protocol, while game-theoretically correct, is of limited quantum physical interest. These authors proceed to show that when a naturally more general and quantum physically more meaningful implementation of EWL is considered, the quantum Prisoner's Dilemma has no Nash equilibrium! However, once randomization via probability distributions is introduced into their quantum Prisoner's Dilemma, a Nash equilibrium that is near-optimal, but still better paying than the one available in the classical game, materializes again, in line with the Glicksberg-Meyer theorem. This criticism was in fact addressed by Eisert and Wilkens in a follow-up publication. Benjamin et al. give a discrete set of strategies that could be employed in a classical set-up of the game that gives the same solution to the dilemma. Eisert et al.'s strategy set is then just the continuous analogue of this discrete set. One may contextualize the criticism of quantum computational implementation of games using the more formal language of computational complexity theory as follows. The class BQP is that of problems that can be efficiently solved on a quantum computer, and P is the class that can be solved efficiently on a classical computer. It is known that P is contained in BQP, and it is widely believed, though not proven, that the containment is strict; that is, that there exist problems which quantum computers can solve efficiently but classical computers cannot. Candidate examples include quantum simulation, quantum search (in the oracle setting, where a provable query-complexity separation exists), and solving systems of linear equations. However, it is currently unknown whether BQP is strictly larger than BPP, the class of problems that can be solved efficiently on a probabilistic classical computer. The latter ambiguity calls into question whether efficient classical methods may exist for some quantum algorithms, such as Shor's famous factoring algorithm. In particular, criticism of the EWL protocol may now be succinctly phrased as follows: it has been shown previously by Eisert et al. that some quantum games, such as Prisoner's Dilemma, may provide better-paying Nash equilibria when entanglement alone is added to the players' strategies.
However, van Enk and Pike have noted that permitting classical advice within those same games can recover similar Nash equilibria. This raises the question as to whether the power of quantum computational protocols for games using entanglement is captured completely by classical protocols using advice. This open question is akin to the question in quantum computing as to whether BQP is contained in BPP. We end this section by noting that the EWL protocol is just one of many possible quantization protocols that may be constructed via quantum circuit synthesis methods such as the one envisioned in Figure 4. Consequences of these other quantum computational implementations of non-cooperative games used in economics, evolutionary biology, and any other subject where game theory is applicable appear to be largely unexplored.
Quantum games: cooperative versus non-cooperative
In the almost two decades since the inception of the theory of quantum games, the EWL quantization protocol has taken on the role of a working definition for non-cooperative quantum games for physicists. Several notable results on the quantum physical implementation of games followed Eisert et al.'s work, such as BIB003 BIB005 BIB006 BIB007 BIB008 BIB012 BIB010. This may seem odd, as one would think that the physics community would be more interested in the equilibrium or optimal behavior of quantum systems than the quantum physical implementation of games. On the other hand, this scenario makes sense from a practical point of view because, with the advent of technological realizations of quantum computers and quantum communication systems, the playability of games quantum computationally would be of fundamental importance for financial and economic decision making. There is a considerable body of work in which the authors claim to cooperatively game the quantum. Several authors in the early 2000s, such as BIB013 BIB009 BIB015 BIB004 BIB014, gamed quantum communication protocols or studied complexity classes for quantum processes by considering the protocols as cooperative games of incomplete information. While most authors of such work have mainly focused on identifying quantum advantages similar to the one Meyer identified in his paper, the source of motivation for their work is different. For example, in , the authors state: "Formally, a quantum coin flipping protocol with bias is a two-party communication game in the style of BIB001 ..." where the citation provided is to Chi-Chih Yao's paper titled Quantum circuit complexity BIB002. Despite the fact that cooperative game theory is the motivation for the latter and other similar work, a formal discussion of cooperative games together with a formal mapping of the relevant physics to the requisite cooperative game is almost always missing. In fact, it would be accurate to say that the word "game" is thrown around in this body of literature as frivolously as the headless carcass of a goat is thrown around in the Afghan game of Buzkashi; but the beef is nowhere to be found. This is not surprising since beef comes from cows! The point of this somewhat macabre analogy is that one should be just as disturbed when hearing the word "game" used for an object that isn't one, as one surely is when hearing the word "beef" used for a goat carcass. Cooperative games are sophisticated conceptual and mathematical constructs. Quoting Aumann [50] (the quote is taken from ) Cooperative (game) theory starts with a formalization of games that abstracts away altogether from procedures and . . . concentrates, instead, on the possibilities for agreement . . . There are several reasons that explain why cooperative games came to be treated separately. One is that when one does build negotiation and enforcement procedures explicitly into the model, then the results of a non-cooperative analysis depend very strongly on the precise form of the procedures, on the order of making offers and counter-offers and so on. This may be appropriate in voting situations in which precise rules of parliamentary order prevail, where a good strategist can indeed carry the day. But problems of negotiation are usually more amorphous; it is difficult to pin down just what the procedures are.
More fundamentally, there is a feeling that procedures are not really all that relevant; that it is the possibilities for coalition forming, promising and threatening that are decisive, rather than whose turn it is to speak. . . . Detail distracts attention from essentials. Some things are seen better from a distance; the Roman camps around Metzada are indiscernible when one is in them, but easily visible from the top of the mountain. More formally, a cooperative game allows players to benefit by forming coalitions, and binding agreements are possible. This means that a formal definition of a cooperative game is different than that of a non-cooperative one, and instead of Nash equilibrium, cooperative games entertain solution concepts such as a coalition structure consisting of various coalitions of players, together with a payoff vector for the various coalitions. Optimality features of the solution concepts are different than those in non-cooperative games as well. For instance, there is the notion of an imputation which is a coalition structure in which every player in a coalition prefers to stay in the coalition over "going at it alone". While aspects of cooperative games are certainly reminiscent of those of non-cooperative games, the two types of games are very different objects in very different categories. Because the body of literature that purports to utilize cooperative games to identify some form of efficient or optimal solutions in quantum information processes does so via unclear, indirect, and informal analogies, one could argue that it remains unclear as to what the merit, game-theoretic or quantum physical, of such work is. What is needed in this context is a formal study of quantum games rooted in the formalism of cooperative game theory. Another interesting situation can be observed during the early years of quantum game theory (2002 to be exact) when Piotrowski proposed a quantum physical model for markets and economics which are viewed as games. His ideas appear to be inspired by Meyer's work. In fact, Eisert et al.'s paper does not even appear as a reference in this paper. In a later paper BIB011 however, Piotrowski states "We are interested in quantum description of a game which consists of buying or selling of market good" (the emphasis is our addition). Note that from the terminology used in both of Piotrowski's papers, it seems that the author wants to implement games quantum physically, even though his initial motivation comes from gaming the quantum! This goes to show that in the early years of quantum game theory, the motivation for merging aspects of quantum physics and game theory was certainly not clear cut. Finally, there are offenders in the quantum physics community who use the word "game" colloquially and not in a game-theoretically meaningful way. An example can be found in BIB017 . Such vacuous usage of the word "game" further confuses and obfuscates serious studies in quantized games and gaming the quantum, cooperative or not. Literature on quantum games is considerable. A good source of reference is the Google Scholar page on the subject which contains a wealth of information on past and recent publications in the area. The survey paper by Guo et al. BIB016 is an excellent precursor to our efforts here.
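Before moving on, the notion of an imputation invoked above can be made concrete in the standard transferable-utility (TU) setting. The following sketch uses a made-up three-player characteristic function and checks the usual efficiency and individual-rationality conditions for the grand coalition; both the TU formulation and the numbers are assumptions for illustration, since the text describes imputations only informally.

```python
# A 3-player transferable-utility (TU) cooperative game given by its
# characteristic function v: coalition -> worth.
v = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 1, frozenset({3}): 1,
     frozenset({1, 2}): 4, frozenset({1, 3}): 4, frozenset({2, 3}): 4,
     frozenset({1, 2, 3}): 9}
players = {1, 2, 3}

def is_imputation(x):
    """x maps player -> payoff. Check efficiency and individual rationality."""
    efficient = abs(sum(x.values()) - v[frozenset(players)]) < 1e-9
    rational = all(x[i] >= v[frozenset({i})] for i in players)
    return efficient and rational

print(is_imputation({1: 3, 2: 3, 3: 3}))    # True: splits v(N)=9, each player gets >= 1
print(is_imputation({1: 7, 2: 1, 3: 0.5}))  # False: total is 8.5 != 9 and player 3 gets less than v({3})=1
```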
Quantum entanglement: Nash versus social equilibrium
Entanglement in a quantum physical system implies non-classical correlations between the observables of the system. While Eisert et al. showed that their quantum computational implementation of Prisoner's Dilemma produced non-classical correlations and resolved the dilemma (the Nash equilibrium is also optimal), in BIB003 Shimamura et al. establish a stronger result: entanglement-enabled correlations always resolve dilemmas in non-zero-sum games, whereas classical correlations do not necessarily do the same. Quantum entanglement is clearly a resource for quantum games. In this section, we offer a new perspective on the role of quantum entanglement in quantum games.

We consider quantum entanglement in the context of Debreu's BIB001 "social equilibrium". Whereas Nash equilibrium is the solution of a non-cooperative game in which each player's strategy set is independent of all other players' strategy sets, social equilibrium occurs in a generalization (not extension) of non-cooperative games where the players' strategy sets are not independent of each other. These generalized games are also known as abstract economies. Take for instance the example of a supermarket shopper (this example is paraphrased from ) interested in buying the basket of goods that is best for her family. While in theory she can choose any basket she pleases, realistically she must stay within her budget, which is not independent of the actions of other players in the economy. For instance, her budget will depend on what her employer pays her in wages. Furthermore, given her budget, which baskets are affordable will depend on the price of the various commodities, which, in turn, are determined by supply and demand in the economy.

In an abstract economy, a player is restricted to playing strategies from a subset of his strategy set, with this limitation being a function of the strategy choices of all other players. More formally, in an abstract economy with n players, let S_i be the i-th player's strategy set and let s_{-i} represent the (n − 1)-tuple of strategy choices of the other (n − 1) players. Then player i is restricted to play feasible strategies only from some γ_i(s_{-i}) ⊆ S_i, where γ_i is the "restriction" function. In social equilibrium, each player employs a feasible strategy, and no player can gain by unilaterally deviating to some other feasible strategy. Debreu gives a guarantee of the existence of a social equilibrium in BIB001 using an argument that also utilizes Kakutani's fixed point theorem. Note that an abstract economy may be viewed as a type of mediated communication where the communication occurs via interaction with the social environment.

Recall that the EWL quantization of Prisoner's Dilemma utilizes maximally entangled qubits. Rather than as a quantum mechanism for mediated communication, we interpret the entanglement between qubits as a restriction function, restricting the players' strategy sets to feasible strategy subsets which Eisert et al. call the two-parameter strategy sets. It is exactly in this restricted strategy set that the existence of a dilemma-breaking, optimal Nash equilibrium is established.
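To make the notion precise in the notation just introduced, the following display is a compact restatement of Debreu's solution concept (a sketch; here u_i denotes player i's payoff function, a symbol not named explicitly above):

```latex
% Social (Debreu) equilibrium of an abstract economy (S_i, gamma_i, u_i), i = 1,...,n.
% u_i is player i's payoff; gamma_i is the restriction function defined above.
\[
  s^{*} \ \text{is a social equilibrium} \iff \forall i:\quad
  s^{*}_{i} \in \gamma_{i}(s^{*}_{-i})
  \quad \text{and} \quad
  u_{i}(s^{*}_{i}, s^{*}_{-i}) \ \ge \ u_{i}(s_{i}, s^{*}_{-i})
  \ \ \text{for all } s_{i} \in \gamma_{i}(s^{*}_{-i}).
\]
% Nash equilibrium is the special case gamma_i(s_{-i}) = S_i. In the reading proposed here,
% the EWL entangling operation plays the role of gamma_i, cutting S_i down to the
% two-parameter strategy subset.
```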
The point of note here is that the resource that quantum entanglement affords the players in the EWL quantization (and possibly others) can be interpreted in two different ways: one, as an extension to mediated quantum communication that produces a near-optimal Nash equilibrium in the quantum game, and the other, as a generalization of Prisoner's Dilemma to a quantum abstract economy with social equilibrium, where entanglement serves as the environment. In the former interpretation, Nash equilibrium is realized in mixed quantum strategies; in the latter interpretation, social equilibrium is realized via pure quantum strategies. Whereas the Nash equilibrium is guaranteed by Glicksberg's fixed point theorem, the question of a guarantee of social equilibrium in pure-strategy quantum games is raised here for the first time. We conjecture that the answer would be in the affirmative, and that it will most likely be found using a Nash embedding of CP^n into R^m similar to the one appearing in BIB014 . Finally, the interpretation of quantum entanglement as a restriction function also addresses van Enk et al.'s criticism of the EWL quantization as blurring the distinction between cooperative and non-cooperative games.

Quantum games have also been studied from a decision-theoretic point of view, with examples from biology, economics, and gambling, and with connections to quantum algorithms and protocols. A more recent work gives a secure, remote, two-party game that can be played using a quantum gambling machine. This quantum communication protocol has the property that one can make the game played on it demonstrably fair to both parties by modifying the Nash equilibrium point. Some other recent efforts to game quantum computations as non-cooperative games at Nash equilibrium appear in BIB008 BIB009 and are reminiscent of work in concurrent reachability games BIB002 and control theory in general. In , the authors use quantum game models to design protocols that make classical communications more efficient, and in BIB007 the authors view the BB84 quantum key distribution protocol as a three-player static game in which the communicating parties form a cooperative coalition against an eavesdropper. In this game model, the coalition players aim to maximize the probability of detecting the intruder, while the intruder's objective is to minimize this probability. In other words, the BB84 protocol is modeled as a two-player, zero-sum game, though it is not clear whether the informal mingling of elements of cooperative and non-cooperative game theory skews the results of this paper. Further, in BIB010 , the authors associate Meyer's quantum Penny Flip game with a quantum automaton and propose a game played on the evolution of the automaton. This approach studies quantum algorithms via quantum automata, akin to how behavioral game theorists study repeated classical games using automata such as Tit-for-Tat, Grim, and Tat-for-Tit BIB005 . Finally, some interesting studies into Meyer's Penny Flip game appear in BIB011 , where the authors consider a two-player version of this game played with entangled strategies and show that a particular classical algorithm can beat the quantum algorithm, and in BIB013 , where the author formulates and analyzes generalizations of the quantum Penny Flip game to include non-commutative quantum strategies.
The main result of this work is that there is sometimes a method for restoring the quantum state disturbed by another player, a result of strong relevance to the problem of quantum error correction and fault tolerance in quantum computing and quantum algorithm design. Finally, motivated by quantum game theory, in BIB012 Bao et al. propose a quantum voting system that has an algorithmic flavor to it. Their quantized voting scheme enables a constitution to violate a quantum analog of Arrow's impossibility theorem, which states that every classical constitution endowed with three apparently innocuous properties is in fact a dictatorship. Their voting system allows for tactical-voting strategies reliant on entanglement, interference, and superpositions, and shows how an algorithmic approach to quantum games leads to strategic advantage.

Despite the excellent efforts of several authors in the preceding discussion, the approach of Meyer to search for quantum advantage as a Nash equilibrium in quantum games remains largely unexplored. In terms of quantum communication protocols, where quantum processes are assumed to be noisy and are therefore modeled as density matrices or mixed quantum states, the Meyer-Glicksberg theorem offers a guarantee of Nash equilibrium. Whereas a similar guarantee in mixed classical states spurred massive research in classical computer science, economics, and political science, the same is not true in the case of, at the least, quantum communication protocols. Likewise for pure quantum states. These states are used to model the pristine and noiseless world of quantum computations and quantum algorithms. One set of quantum computational protocols is the MW quantum game (not the MW game quantization protocol), which we interpreted above as having a Nash equilibrium in pure quantum states. Note that one could in principle say the same about the EWL quantum game, despite its questionable quantum physical reputation. However, unlike the situation with mixed quantum states, a guarantee of a Nash equilibrium has only come to light very recently BIB014 . On the other hand, some efforts have been made in bringing together ideas from network theory, in particular network creation games BIB004 BIB006 , and quantum algorithms in the context of the quantum Internet and compilation of adiabatic quantum programs [73] .
Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> In the conventional approach to quantum mechanics, indeterminism is an axiom and nonlocality is a theorem. We consider inverting the logical order, making nonlocality an axiom and indeterminism a theorem. Nonlocal “superquantum” correlations, preserving relativistic causality, can violate the CHSH inequality more strongly than any quantum correlations. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> We consider two aspects of quantum game theory: the extent to which the quantum solution solves the original classical game, and to what extent the new solution can be obtained in a classical model. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> Correlated equilibria are sometimes more efficient than the Nash equilibria of a game without signals. We investigate whether the availability of quantum signals in the context of a classical strategic game may allow the players to achieve even better efficiency than in any correlated equilibrium with classical signals, and find the answer to be positive. <s> BIB003 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> This paper investigates the powers and limitations of quantum entanglement in the context of cooperative games of incomplete information. We give several examples of such nonlocal games where strategies that make use of entanglement outperform all possible classical strategies. One implication of these examples is that entanglement can profoundly affect the soundness property of two-prover interactive proof systems. We then establish limits on the probability with which strategies making use of entanglement can win restricted types of nonlocal games. These upper bounds may be regarded as generalizations of Tsirelson-type inequalities, which place bounds on the extent to which quantum information can allow for the violation of Bell inequalities. We also investigate the amount of entanglement required by optimal and nearly optimal quantum strategies for some games. <s> BIB004 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> Games with incomplete information are formulated in a multi-sector probability matrix formalism that can cope with quantum as well as classical strategies. An analysis of classical and quantum strategy in a multi-sector extension of the game of Battle of Sexes clarifies the two distinct roles of nonlocal strategies, and establish the direct link between the true quantum gain of game's payoff and the breaking of Bell inequalities. <s> BIB005 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> We show that, for a continuous set of entangled four-partite states, the task of maximizing the payoff in the symmetric-strategy four-player quantum Minority game is equivalent to maximizing the violation of a four-particle Bell inequality. We conclude the existence of direct correspondences between (i) the payoff rule and Bell inequalities, and (ii) the strategy and the choice of measured observables in evaluating these Bell inequalities. We also show that such a correspondence is unique to minority-like games. 
<s> BIB006 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> We propose a simple yet rich model to extend the notions of Nash equilibria and correlated equilibria of strategic games to the quantum setting, in which we then study the relations between classical and quantum equilibria. Unlike the previous work that focus on qualitative questions on specific games of small sizes, we address the following fundamental and quantitative question for general games: How much"advantage"can playing quantum strategies provide, if any? Two measures of the advantage are studied, summarized as follows. 1. A natural measure is the increase of payoff. We consider natural mappings between classical and quantum states, and study how well those mappings preserve the equilibrium properties. Among other results, we exhibit correlated equilibrium $p$ whose quantum superposition counterpart $\sum_s \sqrt{p(s)}\ket{s}$ is far from being a quantum correlated equilibrium; actually a player can increase her payoff from almost 0 to almost 1 in a [0,1]-normalized game. We achieve this by a tensor product construction on carefully designed base cases. 2. For studying the hardness of generating correlated equilibria, we propose to examine \emph{correlation complexity}, a new complexity measure for correlation generation. We show that there are $n$-bit correlated equilibria which can be generated by only one EPR pair followed by local operation (without communication), but need at least $\log(n)$ classical shared random bits plus communication. The randomized lower bound can be improved to $n$, the best possible, assuming (even a much weaker version of) a recent conjecture in linear algebra. We believe that the correlation complexity, as a complexity-theoretical counterpart of the celebrated Bell's inequality, has independent interest in both physics and computational complexity theory and deserves more explorations. <s> BIB007 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> Nonlocality enables two parties to win specific games with probabilities strictly higher than allowed by any classical theory. Nevertheless, all known such examples consider games where the two parties have a common interest, since they jointly win or lose the game. The main question we ask here is whether the nonlocal feature of quantum mechanics can offer an advantage in a scenario where the two parties have conflicting interests. We answer this in the affirmative by presenting a simple conflicting interest game, where quantum strategies outperform classical ones. Moreover, we show that our game has a fair quantum equilibrium with higher payoffs for both players than in any fair classical equilibrium. Finally, we play the game using a commercial entangled photon source and demonstrate experimentally the quantum advantage. <s> BIB008 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> In a previous publication, we showed how group actions can be used to generate Bell inequalities. The group action yields a set of measurement probabilities whose sum is the basic element in the inequality. The sum has an upper bound if the probabilities are a result of a local, realistic theory, but this bound can be violated if the probabilities come from quantum mechanics. 
In our first paper, we considered the case of only two parties making the measurements and single-generator groups. Here we show that the method can be extended to three parties, and it can also be extended to non-Abelian groups. We discuss the resulting inequalities in terms of nonlocal games. <s> BIB009 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> We study team decision problems where communication is not possible, but coordination among team members can be realized via signals in a shared environment. We consider a variety of decision problems that differ in what team members know about one another's actions and knowledge. For each type of decision problem, we investigate how different assumptions on the available signals affect team performance. Specifically, we consider the cases of perfectly correlated, i.i.d., and exchangeable classical signals, as well as the case of quantum signals. We find that, whereas in perfect-recall trees (Kuhn 1950 Proc. Natl Acad. Sci. USA 36, 570-576; Kuhn 1953 In Contributions to the theory of games, vol. II (eds H Kuhn, A Tucker), pp. 193-216) no type of signal improves performance, in imperfect-recall trees quantum signals may bring an improvement. Isbell (Isbell 1957 In Contributions to the theory of games, vol. III (eds M Drescher, A Tucker, P Wolfe), pp. 79-96) proved that, in non-Kuhn trees, classical i.i.d. signals may improve performance. We show that further improvement may be possible by use of classical exchangeable or quantum signals. We include an example of the effect of quantum signals in the context of high-frequency trading. <s> BIB010 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> Drawing on ideas from game theory and quantum physics, we investigate nonlocal correlations from the point of view of equilibria in games of incomplete information. These equilibria can be classified in decreasing power as general communication equilibria, belief-invariant equilibria and correlated equilibria, all of which contain the familiar Nash equilibria. The notion of belief-invariant equilibrium has appeared in game theory before, in the 1990s. However, the class of non-signalling correlations associated to belief-invariance arose naturally already in the 1980s in the foundations of quantum mechanics. In the present work, we explain and unify these two origins of the idea and study the above classes of equilibria, and furthermore quantum correlated equilibria, using tools from quantum information but the language of game theory. We present a general framework of belief-invariant communication equilibria, which contains (quantum) correlated equilibria as special cases. Our framework also contains the theory of Bell inequalities, a question of intense interest in quantum mechanics and the original motivation for the above-mentioned studies. We then use our framework to show new results related to social welfare. Namely, we exhibit a game where belief-invariance is socially better than correlated equilibria, and one where all non-belief-invariant equilibria are socially suboptimal. Then, we show that in some cases optimal social welfare is achieved by quantum correlations, which do not need an informed mediator to be implemented. 
Furthermore, we illustrate potential practical applications: for instance, situations where competing companies can correlate without exposing their trade secrets, or where privacy-preserving advice reduces congestion in a network. Along the way, we highlight open questions on the interplay between quantum information, cryptography, and game theory. <s> BIB011 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> Game theory is a well established branch of mathematics whose formalism has a vast range of applications from the social sciences, biology, to economics. Motivated by quantum information science, there has been a leap in the formulation of novel game strategies that lead to new (quantum Nash) equilibrium points whereby players in some classical games are always outperformed if sharing and processing joint information ruled by the laws of quantum physics is allowed. We show that, for a bipartite non zero-sum game, input local quantum correlations, and separable states in particular, suffice to achieve an advantage over any strategy that uses classical resources, thus dispensing with quantum nonlocality, entanglement, or even discord between the players' input states. This highlights the remarkable key role played by pure quantum coherence at powering some protocols. Finally, we propose an experiment that uses separable states and basic photon interferometry to demonstrate the locally-correlated quantum advantage. <s> BIB012 </s> Quantum games: a review of the history, current state, and interpretation <s> Bell's inequalities and quantum games <s> Quantum entanglement has been recently demonstrated as a useful resource in conflicting interest games of incomplete information between two players, Alice and Bob [Pappa et al., Phys. Rev. Lett. 114, 020401 (2015)]. General setting for such games is that of correlated strategies where the correlation between competing players is established through a trusted common adviser; however, players need not reveal their input to the adviser. So far, quantum advantage in such games has been revealed in a restricted sense. Given a quantum correlated equilibrium strategy, one of the players can still receive a higher than quantum average payoff with some classically-correlated equilibrium strategy. In this work, by considering a class of asymmetric Bayesian games, we show the existence of games with quantum correlated equilibrium where average payoff of both the players exceed respective individual maximum for each player over all classically-correlated equilibriums. <s> BIB013
It is well known that the EWL quantization scheme is limited in its applicability to situations where the players can perform any physically possible operation. The scheme may still be applicable in instances where the hardware used to implement a quantum game only allows a limited set of operations, and there are no players with malicious intent or the technological sophistication to perform operations outside of the allowed set. However, for more sophisticated analyses that include such actors or other factors, a more general framework is desired. To this end, one can start by focusing on the role of quantum entanglement in quantum games. As discussed earlier, the most common interpretation is that the entanglement between players' quantum strategies acts as a type of mediated communication, advice, or contract between the players. A common objection is that quantum games have more strategy choices than the classical version, and that it is possible to simply reformulate a classical game to incorporate more strategy choices so that the classical game has the same equilibria as the quantum counterpart, as was shown with the quantum Prisoner's Dilemma BIB002 . However, as is discussed below, this is not always the case. The study of Bayesian quantum games addresses these objections and has elucidated the role of entanglement in quantum games, as well as the possible advantages of a quantum game, by relating them to Bell's inequalities BIB013 .

The connection between Bell's inequalities and Bayesian quantum games was first recognized in the similarities between the form of a Bell's inequality and the payoff function of a Bayesian quantum game. It was found that by casting a quantum Bayesian game in a certain way, the payoff function can resemble a form of a Bell's inequality, so that in the presence of quantum correlations, i.e. entanglement, the inequality will be broken and the quantum game will have a higher payoff than the classical version. In the analogy, a direct parallel can be drawn between the measurements and measurement outcomes in a Bell's inequality experiment and the player types and player actions in a Bayesian quantum game. In Bayesian games, the players have incomplete information about the payoff matrix of the game. This can be formulated by assigning the players different types characterized by different payoff matrices. When making their strategy choice, the players know their own type, but not that of their opponent. It has also been noted that this is related to the conditions in non-local games BIB004 , the condition that the players cannot communicate during a game, and the concept of no-signaling in physics. A correspondence can be drawn between the condition of locality, used in deriving a Bell's inequality, and the condition that the players do not know the type of the other player. This condition can be described mathematically, for a two-player, two-strategy game for example, by labeling the player types as X, Y and the strategy choices as x, y, with the following equation BIB005 : P(x, y | X, Y) = P(x | X) P(y | Y). That is, the probability of the joint actions x, y, given that the player types are X and Y, is equal to the probability that a player of type X plays x multiplied by the probability that a player of type Y plays y. The factorizability of the joint probability distribution is a statement that the players' actions cannot be influenced by the type of their opponent.
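As a minimal numerical illustration of this factorizability condition (the function and variable names here are illustrative, not taken from the cited works), one can test whether a joint distribution over the two players' actions, with the types held fixed, is a product of its marginals:

```python
# Sketch: testing the factorizability condition P(x, y | X, Y) = P(x | X) * P(y | Y)
# for a two-player, two-strategy Bayesian game with the player types held fixed.
import numpy as np

def is_factorizable(joint, tol=1e-9):
    """joint[i, j] = P(player 1 plays strategy i, player 2 plays strategy j)."""
    p1 = joint.sum(axis=1)          # marginal distribution for player 1
    p2 = joint.sum(axis=0)          # marginal distribution for player 2
    return np.allclose(joint, np.outer(p1, p2), atol=tol)

# Independent 50/50 mixing by both players: factorizable.
independent = np.full((2, 2), 0.25)
# Perfectly correlated choices (only realizable with shared advice or entanglement).
correlated = np.array([[0.0, 0.5],
                       [0.5, 0.0]])

print(is_factorizable(independent))  # True
print(is_factorizable(correlated))   # False
```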
It has been noted previously by Fine [78] that a sufficient condition for the breaking of a Bell's inequality is that the joint probability distribution is non-factorizable. For example, if there are two players (X and Y), each with two possible strategy choices (x and y), the joint probability distribution of a mixed strategy where both players choose each strategy with 50% probability is given by P(xx) = P(xy) = P(yx) = P(yy) = 1/4. With the entangled state (|xy⟩ + |yx⟩)/√2, however, it is possible to realize the probability distribution P(xy) = P(yx) = 1/2, P(xx) = P(yy) = 0. This probability distribution, when analyzed for an individual player, still gives a 50% probability of either strategy. The difference is that the strategy choices of X and Y are perfectly correlated: knowing one player's choice determines the other's. This probability distribution is not in the image of the original mixed-strategy game and is not possible without some form of communication between the players or advice. Thus it is possible to formulate a Bell's inequality from a given Bayesian quantum game and vice versa BIB006 .

The objection that the strategy choices available to a quantum player are greater than those of the classical player was addressed by Iqbal and Abbott BIB006 . They formulated a quantum Bayesian game using probability distributions rather than state manipulations. The condition of factorizability of these probability distributions produces constraints on the joint probability distributions of the players, which can in turn be formulated as Bell's inequalities. The advantage in this case is that the strategies available to the classical players are identical to those of the quantum players. The difference is that in the quantum case the players are given an entangled input, while in the classical case they are given a product state. Within this formalism the solution to the Prisoner's Dilemma is identical in the quantum and classical case, whereas in other games the violation of Bell's inequality can lead to a quantum advantage, as in the matching pennies game.

This analysis can be taken further to incorporate the players' receiving advice in a classical game. In a classical non-local game, the players are allowed to formulate strategies before the game and may be the recipients of some form of common advice, but the advice cannot depend on the state of nature. As discussed earlier, this leads to the correlated equilibria. As we also noted in section 4, correlated equilibrium allows for the possible realization of more general probability distributions over the outcomes that may not be compatible with the players' mixed strategies. More precisely, a mixed-strategy Nash equilibrium is a correlated equilibrium where the probability distribution is a product of two mixed strategies. In quantum games, these non-factorizable probability distributions are provided by entanglement or mediated quantum communication. Brunner and Linden incorporated the correlations that can be produced from classical advice into their analysis of quantum games. In this case, the joint probability distribution can be described by

P(x, y | X, Y) = ∫ dλ ρ(λ) P(x | X, λ) P(y | Y, λ),     (15)

where the variable λ represents the advice, or information, distributed to the players according to the prior ρ(λ). This type of probability model accurately describes the behavior of players under shared classical advice. This condition is precisely the condition that is used to derive a Bell's inequality, and the history of violation of Bell's inequalities shows that quantum correlations arising from entanglement can break the inequalities derived from equation (15).
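The canonical example tying these threads together is the CHSH game, a two-type Bayesian game whose payoff is a Bell expression. The following sketch (a standard textbook construction, stated as an illustration rather than as material from the cited papers) enumerates all deterministic classical strategies, whose convex mixtures represent shared classical advice, and compares the result with the entangled strategy at the usual optimal measurement angles:

```python
# Sketch of the CHSH game: players receive types x, y in {0,1} and win iff a XOR b = x AND y.
# Classical strategies (even with shared advice) cap the winning probability at 0.75;
# measurements on |Phi+> reach cos^2(pi/8) ~ 0.854.
import itertools
import numpy as np

def classical_value():
    # Shared classical advice is a convex mixture of deterministic strategy pairs,
    # so it cannot beat the best deterministic pair.
    best = 0.0
    for a in itertools.product([0, 1], repeat=2):      # Alice's answer a[x]
        for b in itertools.product([0, 1], repeat=2):  # Bob's answer b[y]
            wins = sum((a[x] ^ b[y]) == (x & y) for x in (0, 1) for y in (0, 1))
            best = max(best, wins / 4.0)
    return best

def quantum_value():
    # Shared state |Phi+> with the standard optimal measurement angles.
    phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
    alice = {0: 0.0, 1: np.pi / 4}
    bob = {0: np.pi / 8, 1: -np.pi / 8}
    def basis(theta):
        return [np.array([np.cos(theta), np.sin(theta)]),
                np.array([-np.sin(theta), np.cos(theta)])]
    total = 0.0
    for x in (0, 1):
        for y in (0, 1):
            A, B = basis(alice[x]), basis(bob[y])
            for a in (0, 1):
                for b in (0, 1):
                    amp = np.kron(A[a], B[b]) @ phi_plus
                    if (a ^ b) == (x & y):
                        total += abs(amp) ** 2
    return total / 4.0

print(classical_value())  # 0.75
print(quantum_value())    # ~0.8536 = cos^2(pi/8)
```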
Thus, entanglement produces joint probability distributions of outcomes that are not possible classically, not just because they are non-factorizable, but also because they cannot have arisen from a classical source of advice, or, in traditional quantum mechanical terminology, a hidden variable. If these joint probability distributions are realized in a Bayesian game with payoffs assigned appropriately, the players with access to quantum resources can play with probability distributions that are more favorable than what is possible classically. Correlated equilibria are possible in classical games because of the existence of probability distributions that are not factorizable; they therefore exhibit a wider class of probability distributions. And indeed, Bell's inequalities show that there are quantum correlations that go beyond the classically available correlations. Games designed around Bell's inequalities demonstrate that there are quantum games that can outperform even classical games with correlated equilibria. These games do not have the weakness of the EWL quantization schemes, where the same results can be obtained by allowing correlated equilibria and without restricting the allowed strategies, which can often make EWL games unphysical. More recently, several researchers have used these results to generate games based on Bell's inequalities that exhibit true benefit from quantum correlations without suffering from the shortcomings of earlier quantization schemes BIB003 BIB010 BIB007 BIB011 BIB008 .

Thus it has been shown that the probability distributions of outcomes are more fundamental than the presence of entanglement within a game. Indeed, these considerations shed light on other types of correlations that can exist, both within quantum mechanics and beyond it. For example, there are quantum states that exhibit quantum correlations even when the entanglement is known to be zero. These correlations are known as quantum discord, and it is possible to formulate games that have an advantage under quantum discord BIB012 . Further, there are types of correlations known to be consistent with the no-signaling condition that are not even possible with quantum mechanics, known as super-quantum correlations (Popescu and Rohrlich BIB001 ). Games formulated with these correlations can outperform even the quantum versions . The analysis of Bayesian quantum games has thus addressed several of the objections to the importance of quantum games. The correlations that exist, that is, the joint probability distributions over mixed strategies, are shown to be more powerful in analyzing the advantage of a quantum game than just the presence of entanglement. The connections between Bayesian quantum games and Bell's inequalities will likely continue to give insight into, and play a role in analyzing, new games that are formulated and new forms of Bell's inequalities that are derived BIB009 .
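For completeness, here is a sketch of the hypothetical Popescu-Rohrlich (PR) box applied to the same CHSH-style game; it respects no-signaling (each player's marginal is uniform regardless of the other's type) yet wins with certainty, illustrating how super-quantum correlations can outperform even the quantum strategies:

```python
# Sketch of a PR-box correlation: no-signaling, yet wins the CHSH-style game with probability 1,
# beating the quantum bound cos^2(pi/8). The box itself is hypothetical (not realizable in QM).
def pr_box(x, y):
    """Joint distribution P(a, b | x, y) of a hypothetical PR box."""
    return {(a, b): (0.5 if (a ^ b) == (x & y) else 0.0)
            for a in (0, 1) for b in (0, 1)}

win = sum(p for x in (0, 1) for y in (0, 1)
          for (a, b), p in pr_box(x, y).items() if (a ^ b) == (x & y)) / 4.0
print(win)  # 1.0
```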
Quantum games: a review of the history, current state, and interpretation <s> Stochastic games <s> In a stochastic game the play proceeds by steps from position to position, according to transition probabilities controlled jointly by the two players. We shall assume a finite number, N , of positions, and finite numbers m K , n K of choices at each position; nevertheless, the game may not be bounded in length. If, when at position k , the players choose their i th and j th alternatives, respectively, then with probability s i j k > 0 the game stops, while with probability p i j k l the game moves to position l . Define s = min k , i , j s i j k . Since s is positive, the game ends with probability 1 after a finite number of steps, because, for any number t , the probability that it has not stopped after t steps is not more than (1 − s ) t . ::: ::: Payments accumulate throughout the course of play: the first player takes a i j k from the second whenever the pair i , j is chosen at position k. If we define the bound M: M = max k , i , j | a i j k | , then we see that the expected total gain or loss is bounded by M + ( 1 − s ) M + ( 1 − s ) 2 M + … = M / s . (1) ::: ::: The process therefore depends on N 2 + N matrices P K l = ( p i j k l | i = 1 , 2 , … , m K ; j = 1 , 2 , … , n K ) A K = ( a i j k | … <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Stochastic games <s> "Elegantly written, with obvious appreciation for fine points of higher mathematics...most notable is [the] author's effort to weave classical probability theory into [a] quantum framework." - The American Mathematical Monthly "This is an excellent volume which will be a valuable companion both for those who are already active in the field and those who are new to it. Furthermore there are a large number of stimulating exercises scattered through the text which will be invaluable to students." - Mathematical Reviews An Introduction to Quantum Stochastic Calculus aims to deepen our understanding of the dynamics of systems subject to the laws of chance both from the classical and the quantum points of view and stimulate further research in their unification. This is probably the first systematic attempt to weave classical probability theory into the quantum framework and provides a wealth of interesting features: The origin of Ito's correction formulae for Brownian motion and the Poisson process can be traced to communication relations or, equivalently, the uncertainty principle. Quantum stochastic interpretation enables the possibility of seeing new relationships between fermion and boson fields. Quantum dynamical semigroups as well as classical Markov semigroups are realized through unitary operator evolutions. The text is almost self-contained and requires only an elementary knowledge of operator theory and probability theory at the graduate level. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Stochastic games <s> A model of stochastic games where multiple controllers jointly control the evolution of the state of a dynamic system but have access to different information about the state and action processes is considered. The asymmetry of information among the controllers makes it difficult to compute or characterize Nash equilibria. 
Using the common information among the controllers, the game with asymmetric information is used to construct another game with symmetric information such that the equilibria of the new game can be transformed to equilibria of the original game. Further, under certain conditions, a Markov state is identified for the new symmetric information game and its Markov perfect equilibria are characterized. This characterization provides a backward induction algorithm to find Nash equilibria of the original game with asymmetric information in pure or behavioral strategies. Each step of this algorithm involves finding Bayesian Nash equilibria of a one-stage Bayesian game. The class of Nash equilibria of the original game that can be characterized in this backward manner are named common information based Markov perfect equilibria. <s> BIB003
Stochastic games extend strategic games to dynamical situations where the actions of the players and the history of states affect the evolution of the game. In this section let us review a subset of classical multi-stage stochastic games that are Markovian in evolution. The hope is that this will generate interest in quantizing such multi-stage games, leveraging the advances in quantum stochastic calculus BIB002 and quantum stochastics . There is very little work on quantum Markov decision processes (qMDP) [92] , which are specialized quantum games, and so there are a lot of opportunities to explore in this class of quantum stochastic games. We start our discussion with stochastic games, specialize them to Markov decision processes (MDP), review the literature on quantized MDPs that involve partial observations, introduce quantum probability and quantum Markov semigroups, and finally outline one possible way to quantize stochastic games.

A classical stochastic game à la Shapley BIB001 is a tuple (χ, A_i(x), Q_i(x, a), P(x'|x, a), λ, x_0), where χ is the finite state space, A_i is the finite action space for individual players, Q_i is the i-th player's payoff function, P is the transition probability function, which can be thought of as a random variable because it is a conditional probability and would become a completely positive map in the quantum case, 0 ≤ λ ≤ 1 is the discount factor, that is, player i's valuation of the game diminishes with time depending on this factor, and x_0 is the initial state of the game. The discount factor is introduced in infinite-horizon games so as to have finite values, and another way to understand it is to relate λ to the player's patience. How much more does the player value a dollar today than a dollar received in the future? This can be quantified by the factor: as her discount factor increases, she values the later amount more and more nearly as much as the earlier payment. A person is more patient the less she minds waiting for something valuable rather than receiving it immediately; in this interpretation, a higher discount factor implies a higher level of patience. There is yet another reason to discount the future in multi-stage games. The players may not be sure about how long the game will continue. Even in the absence of time preference per se, a player would prefer a dollar today rather than a promise of one tomorrow because of the uncertainty of the future. Put another way, a payoff at a future time is really a conditional payoff, conditional on the game lasting that long. The formulation of Shapley has been extended in different directions, such as non-zero-sum games and infinite state spaces (countable as well as uncountable), and the existence of Nash equilibria has been established under some restricted conditions. For a recent perspective on dynamical games we refer the reader to Ref .

The dynamic game starts at x_0 and all the players simultaneously apply a strategy s_i, that is, an action from A_i depending upon the history of states. The payoffs and the next state of the game are determined by the functions Q and P. The expected discounted payoff for player i is given by U_i = E[ Σ_{t≥0} λ^t Q_i(x_t, a_t) ].

Definition 1. A strategy s_i is called a Markov strategy if it depends only on the state, and we will let s_i(x) denote the action that player i would choose in state x. A Markov perfect equilibrium (MPE) is an equilibrium of the dynamic game where each player selects a strategy that depends only upon the current state of the game.
MPEs are a subset of Nash equilibria for stochastic games. Let us start with the observation that if all players other than i are playing Markov strategies s_{-i}, then player i has a best response that is a Markov strategy. This is easy to see: if there exists a best response where player i plays a_i after a history h leading to state x, and plays a'_i after another history h' that also leads to state x, then both a_i and a'_i must yield the same expected payoff to player i. Let us define a quantity V_i(x; s_{-i}) for each state x that is the highest possible payoff player i can achieve starting from state x, given that all other players play the Markov strategies s_{-i}. A Markov best response is then given by

s_i(x) ∈ argmax_{a_i ∈ A_i} { Q_i(x, a_i, s_{-i}(x)) + λ Σ_{x'} P(x'|x, a_i, s_{-i}(x)) V_i(x'; s_{-i}) }.

Existence of MPE for finite games. When the state space, number of players, and action spaces are all finite, a stochastic game will have an MPE. To see this, let us construct a new game with N·S players, where N and S are the number of players and states, respectively, of the original game. The action set of player (i, x) is A_i(x), and its payoff is the expected discounted payoff to player i in the original game starting from state x when every player (j, x') plays its chosen stationary action. This is a finite game that is guaranteed to have a Nash equilibrium. It is also an MPE, as each player's strategy depends only on the current state. By construction, the strategy of player i maximizes his payoff among all Markov strategies given s_{-i}; as shown above, each player i has a best response that is a Markov strategy when all opponents play Markov strategies.

Definition 2. Two-player zero-sum stochastic game: A two-player zero-sum game is defined by an m × n matrix P, where P_ij is the payoff to player 1 when the two players apply strategies i ∈ A_1, i = 1, . . . , m and j ∈ A_2, j = 1, . . . , n respectively; correspondingly, the payoff to the second player is −P_ij. When the players use mixed strategies from ∆(S_1) and ∆(S_2) respectively, the game, being finite, is guaranteed to have a Nash equilibrium, as follows:

max_{p ∈ ∆(S_1)} min_{q ∈ ∆(S_2)} p^T P q = min_{q ∈ ∆(S_2)} max_{p ∈ ∆(S_1)} p^T P q.

The above mini-max theorem can be extended to stochastic games, as shown by Shapley BIB001 . By a symmetric argument reversing B and C we establish the lemma. Let us now consider the stochastic version of this game played in k stages. The value of the game is defined via a function α_k : χ → R and an operator T, which is a contraction, as

α_{k+1} = T α_k, with (T α)(x) = val[ R_x(α) ] and R_x(α)_{ij} = Q(x, i, j) + λ Σ_{x'} P(x'|x, i, j) α(x'),

where val[·] denotes the mini-max value of a matrix game. To see that the operator T is a contraction with respect to the supremum norm, and thus that the game has a fixed point for any initial condition, consider the bound ||T α − T β||_∞ ≤ λ ||α − β||_∞, which holds because the values of two matrix games whose entries differ by at most λ||α − β||_∞ differ by at most that amount.

Theorem 4. Given a two-player zero-sum stochastic game, define α* as the unique solution to α* = T α*. A pair of strategies (s_1, s_2) is a subgame perfect equilibrium if and only if, after any history leading to the state x, the expected discounted payoff to player 1 is exactly α*(x). Proof. Let us suppose the game starts in state x and player 1 plays an optimal strategy for k stages, with terminal payoffs α_0(x') = 0, ∀x' ∈ χ, and plays any strategy afterwards. This will guarantee him a payoff of at least α_k(x), up to a correction term that vanishes as k → ∞; this follows from the observation that after k stages the payoff for the first player is the negative of the maximum possible for the second player. When k → ∞ the value becomes α*, and by a symmetrical argument for player two the theorem is established.

Proposition 5. Let s_1(x), s_2(x) be optimal (possibly mixed) strategies for players 1 and 2 in the zero-sum game defined by the matrix R_x(α*). Then s_1, s_2 are optimal strategies in the stochastic game for both players; in particular, (s_1, s_2) is an MPE. Proof. Let us fix a strategy ŝ_2 for player 2 that could possibly be history dependent.
Then, we first consider a k-stage game where the terminal payoffs are given by α*. In this game, it follows that player 1 can guarantee a payoff of at least α*(x) by playing the strategy s_1 given in the proposition, irrespective of the strategy of player 2. This yields a lower bound on player 1's expected discounted payoff against ŝ_2 that, in the limit k → ∞, goes to α*(x); by a symmetric argument for the second player we establish the result.

The method described above is the backward induction algorithm to solve the Bellman equation (16) and is applicable for games with perfect state observation. For games with asymmetric information, that is, when players make independent noisy observations of the state and do not share all their information, we refer the reader to reference BIB003 and the references mentioned therein. Our interest here is confined to games with symmetric information, as from the quantization point of view they would be a good starting point.

Now, let us consider a special class of stochastic games called Markov decision processes (MDP), where only one player, called MAX, plays the game against nature, which introduces randomness, with the goal of maximizing a payoff function. It is easy to see that an MDP generalizes to a stochastic game with two players, MAX and MIN, with a zero-sum objective. Clearly, we can have results similar to those for stochastic games in the case of an MDP with finitely many states and actions, discounted payoff, and infinite horizon: for every MDP (χ, A(x), Q(x, a), P(x'|x, a), λ, z) with finitely many states and actions and every discount factor λ < 1, there is a pure stationary strategy σ such that for every initial state z and every strategy τ we have v_z(σ) ≥ v_z(τ), where v_z(·) denotes the expected discounted payoff starting from z. Moreover, the stationary strategy σ obeys, for every state z, v_z(σ) = max_{a ∈ A(z)} { Q(z, a) + λ Σ_{z'} P(z'|z, a) v_{z'}(σ) }.
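To make the fixed-point iteration α_{k+1} = T α_k concrete, here is a sketch for a toy two-state, zero-sum stochastic game (the payoffs, transition probabilities, and function names are illustrative assumptions, not data from any cited work); each application of T solves the auxiliary matrix game R_x(α) with a small linear program:

```python
# Sketch of the Shapley iteration alpha_{k+1} = T alpha_k for a two-player zero-sum
# stochastic game with discount factor lam < 1.
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Value of the zero-sum matrix game M (row player maximizes)."""
    m, n = M.shape
    # Variables: row mixed strategy p (m entries) and the value v; minimize -v
    # subject to v <= (M^T p)_j for all columns j and sum(p) = 1.
    c = np.r_[np.zeros(m), -1.0]
    A_ub = np.c_[-M.T, np.ones(n)]
    b_ub = np.zeros(n)
    A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return -res.fun

def shapley_iteration(Q, P, lam, iters=200):
    """Q[x]: stage payoff matrix in state x; P[x][i][j]: distribution over next
    states given the action pair (i, j). Returns the value vector alpha*."""
    S = len(Q)
    alpha = np.zeros(S)
    for _ in range(iters):
        alpha = np.array([matrix_game_value(Q[x] + lam * P[x] @ alpha)
                          for x in range(S)])
    return alpha

# Toy example: 2 states, 2x2 actions in each state.
Q = [np.array([[1.0, -1.0], [-1.0, 1.0]]), np.array([[0.0, 2.0], [2.0, 0.0]])]
P = [np.array([[[0.9, 0.1], [0.5, 0.5]], [[0.5, 0.5], [0.1, 0.9]]]),
     np.array([[[1.0, 0.0], [0.5, 0.5]], [[0.5, 0.5], [0.0, 1.0]]])]
print(shapley_iteration(Q, P, lam=0.8))
```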
Quantum games: a review of the history, current state, and interpretation <s> Quantum probability <s> In his investigations of the mathematical foundations of quantum mechanics, Mackey1 has proposed the following problem: Determine all measures on the closed subspaces of a Hilbert space. A measure on the closed subspaces means a function μ which assigns to every closed subspace a non-negative real number such that if {Ai} is a countable collection of mutually orthogonal subspaces having closed linear span B, then ::: ::: $$ \mu (B) = \sum {\mu \left( {{A_i}} \right)} $$ ::: ::: . <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum probability <s> "Elegantly written, with obvious appreciation for fine points of higher mathematics...most notable is [the] author's effort to weave classical probability theory into [a] quantum framework." - The American Mathematical Monthly "This is an excellent volume which will be a valuable companion both for those who are already active in the field and those who are new to it. Furthermore there are a large number of stimulating exercises scattered through the text which will be invaluable to students." - Mathematical Reviews An Introduction to Quantum Stochastic Calculus aims to deepen our understanding of the dynamics of systems subject to the laws of chance both from the classical and the quantum points of view and stimulate further research in their unification. This is probably the first systematic attempt to weave classical probability theory into the quantum framework and provides a wealth of interesting features: The origin of Ito's correction formulae for Brownian motion and the Poisson process can be traced to communication relations or, equivalently, the uncertainty principle. Quantum stochastic interpretation enables the possibility of seeing new relationships between fermion and boson fields. Quantum dynamical semigroups as well as classical Markov semigroups are realized through unitary operator evolutions. The text is almost self-contained and requires only an elementary knowledge of operator theory and probability theory at the graduate level. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum probability <s> Quantum Probability and Orthogonal Polynomials.- Adjacency Matrices.- Distance-Regular Graphs.- Homogeneous Trees.- Hamming Graphs.- Johnson Graphs.- Regular Graphs.- Comb Graphs and Star Graphs.- The Symmetric Group and Young Diagrams.- The Limit Shape of Young Diagrams.- Central Limit Theorem for the Plancherel Measures of the Symmetric Groups.- Deformation of Kerov's Central Limit Theorem. <s> BIB003
Let us now review the basic concepts in quantum probability and quantum Markov semigroups that are required to define quantum stochastic games. The central ideas of classical probability, random variables and measures, have quantum analogues in self-adjoint operators and trace mappings. To get a feel for the theory, let us consider the most familiar example BIB003 of a random variable, namely the coin toss. Let X be a Bernoulli random variable taking the values ±1 with equal probability, so that its moments are

E[X^m] = (1 + (−1)^m)/2,     (39)

that is, 0 for odd m and 1 for even m. On the quantum side, take the Hilbert space C^2 with the unit vector e_0 = (1, 0)^T; then

⟨e_0, σ_x^m e_0⟩ = (1 + (−1)^m)/2 = E[X^m].     (40)

From the equations (39) and (40), it is clear that the self-adjoint Pauli operator σ_x is stochastically equivalent to the Bernoulli random variable. This moment sequence can be visualized as a walk on a graph as follows: σ_x is the adjacency matrix of the two-vertex path graph, and ⟨e_0, σ_x^m e_0⟩ counts the m-step walks that start and end at the vertex corresponding to e_0. Using this we can rewrite (40) as the number of such returning walks, which is 1 for even m and 0 for odd m.

Definition 9. A finite-dimensional quantum probability (QP) space is a tuple (H, A, P), where H is a finite-dimensional Hilbert space, A is a *-algebra of operators, and P is a trace-class operator, specifically a density matrix, denoting the quantum state. As we alluded to earlier, random variables in a classical probability (CP) space are stochastically equivalent to observables on a Hilbert space H. These are self-adjoint operators with a spectral resolution X = Σ_i x_i E_i^X, where the x_i's are the eigenvalues of X and each E_i^X is interpreted as the event of X taking the value x_i. States are positive operators with unit trace, denoted by P. In this framework, the expectation of an observable X in the state P is defined using the trace as tr(PX). The observables, when measured, are equivalent to random variables on a probability space, and a collection of such classical spaces constitutes a quantum probability space. If all the observables of interest commute with each other, then the classical spaces can be composed into a product probability space, and the equivalence CP = QP holds. The main feature of a QP space is the admission of possibly non-commuting projections and observables of the underlying Hilbert space within the same setting.

Definition 10. Canonical observables: Starting from a σ-finite measure space we can construct observables on a Hilbert space that are called canonical, as every observable can be shown to be unitarily equivalent to a direct sum of them BIB002 . Let (Ω, Γ, µ) be a σ-finite measure space with a countably additive σ-algebra. We can construct the complex Hilbert space as the space of all square-integrable functions with respect to µ and denote it by L^2(µ). Then the observable ξ_µ : Γ → P(H) can be set up as ξ_µ(E)f = I_E f for f ∈ L^2(µ), where I_E is the indicator function of the set E ∈ Γ.

Example 11. Let H = C^2 and A = M_2, the *-algebra of complex matrices of dimension 2 × 2, and let the state be P(A) = ⟨ψ, Aψ⟩ = ⟨A†ψ, ψ⟩, where ψ is any unit vector. This space models quantum spin systems in physics and qubits in quantum information processing. This example can be generalized to an n-dimensional space to build quantum probability spaces.

Definition 12. Two quantum mechanical observables are said to be compatible, that is, they can be measured simultaneously, if the operators representing them can be diagonalized concurrently. Two operators that share a common eigenvector will be characterized as co-measurable. There is a canonical way to create quantum probability spaces from their classical counterparts. The process involves creating a Hilbert space from the square-integrable functions with respect to the classical probability measure. The *-algebra of interest is usually defined in terms of creation, conservation, and annihilation operators. Classical probability measures become quantum states in a natural way through Gleason's theorem BIB001 .
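A quick numerical check of the stochastic equivalence in equations (39) and (40) can be run in a few lines (illustrative code, not from BIB003):

```python
# Check: the moments of sigma_x in the state e_0 coincide with those of a fair +/-1 coin.
import numpy as np

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
e0 = np.array([1.0, 0.0])

for m in range(1, 7):
    quantum_moment = e0 @ np.linalg.matrix_power(sigma_x, m) @ e0
    classical_moment = 0.5 * ((+1) ** m + (-1) ** m)   # E[X^m], X = +/-1 fair coin
    print(m, quantum_moment, classical_moment)          # agree: 0 for odd m, 1 for even m
```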
In this case, unitary operators are identified in the algebra to describe quantum evolutions. A sequence of operators forming a quantum stochastic process can be defined similarly to stochastic processes in a classical probability space. Conditional expectation in a quantum context is not always defined; here we give a version that will be adequate for our purposes and is consistent with its classical counterpart in being a projection and enjoying properties such as the tower property. Following that, we will define quantum Markov semigroups (QMS), one-parameter families of completely positive maps, required for defining quantum stochastic games.
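For background, and only as standard material rather than anything specific to the works cited here, a norm-continuous QMS {T_t} acting on the algebra A and its generator in the well-known GKSL (Lindblad) form can be summarized as follows, with H a Hamiltonian and L_k the noise operators:

```latex
% Background sketch: a quantum Markov semigroup (QMS) and its GKSL generator
% (Heisenberg picture, acting on observables X in the *-algebra A).
\begin{align*}
  &T_0 = \mathrm{id}, \qquad T_{t+s} = T_t \circ T_s, \qquad
   T_t \ \text{completely positive},\ \ T_t(\mathbf{1}) = \mathbf{1},\\[2pt]
  &\frac{d}{dt}\, T_t(X) = \mathcal{L}\big(T_t(X)\big), \qquad
   \mathcal{L}(X) = i[H, X]
   + \sum_k \Big( L_k^{\dagger} X L_k - \tfrac{1}{2}\big\{ L_k^{\dagger} L_k,\, X \big\} \Big).
\end{align*}
```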
Quantum games: a review of the history, current state, and interpretation <s> Quantum Markov decision processes <s> "Elegantly written, with obvious appreciation for fine points of higher mathematics...most notable is [the] author's effort to weave classical probability theory into [a] quantum framework." - The American Mathematical Monthly "This is an excellent volume which will be a valuable companion both for those who are already active in the field and those who are new to it. Furthermore there are a large number of stimulating exercises scattered through the text which will be invaluable to students." - Mathematical Reviews An Introduction to Quantum Stochastic Calculus aims to deepen our understanding of the dynamics of systems subject to the laws of chance both from the classical and the quantum points of view and stimulate further research in their unification. This is probably the first systematic attempt to weave classical probability theory into the quantum framework and provides a wealth of interesting features: The origin of Ito's correction formulae for Brownian motion and the Poisson process can be traced to communication relations or, equivalently, the uncertainty principle. Quantum stochastic interpretation enables the possibility of seeing new relationships between fermion and boson fields. Quantum dynamical semigroups as well as classical Markov semigroups are realized through unitary operator evolutions. The text is almost self-contained and requires only an elementary knowledge of operator theory and probability theory at the graduate level. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Quantum Markov decision processes <s> We show that when the speed of control is bounded, there is a widely applicable minimal-time control problem for which a coherent feedback protocol is optimal, and is faster than all measurement-based feedback protocols, where the latter are defined in a strict sense. The superiority of the coherent protocol is due to the fact that it can exploit a geodesic path in Hilbert space, a path that measurement-based protocols cannot follow. <s> BIB002
A quantum Markov decision process (qMDP) is a tuple (χ = (C^2)^{⊗n}, A(x), Q(x, a), T(x'|x, a), λ, ρ_0). Here, χ is the finite, 2^n-dimensional complex Hilbert space, A is the finite action space for the single player (unitary operators on the Hilbert space, such as the Pauli operators), Q is the player's payoff function based on partial observation of the state, T is a completely positive mapping that would induce a quantum Markov semigroup when it is time dependent, 0 ≤ λ ≤ 1 is the discount factor, and ρ_0 is the initial quantum state of the game. In terms of quantum information theory, the state of the game can be represented by n qubits, and the player applies a unitary operator, drawn from a fixed finite set, to a qubit as a strategy. Instead of a payoff based on partial observation of the state, a continuous non-demolition measurement based approach can be formulated. In , Bouten et al. derived Bellman equations for optimal feedback control of qubit states using ideas from quantum filtering theory. The qubit is coupled to an environment, for example the second-quantized electromagnetic radiation, and by continually measuring the field quadratures (non-demolition measurements) the state of the qubit can be estimated. The essential step involves rigorously deriving a quantum filtering equation, based on quantum stochastic calculus BIB001 , to estimate the state of the system coupled to the environment. By basing the payoff on this estimate, the qMDP process can evolve coherently until a desired time. Such an approach can be extended to the quantum stochastic games described next. In addition, coherent evolutions may have advantages over measurement-based dynamics BIB002 .
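To make the objects in this tuple concrete, the following single-qubit sketch (all names and numerical choices are illustrative assumptions, not drawn from the cited works) takes the action set to be Pauli unitaries, models the completely positive map T by a depolarizing channel, and scores a state by Tr(ρΠ) for a fixed projector Π, using a greedy one-step lookahead:

```python
# Toy, single-qubit sketch of one decision step in a qMDP.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
ACTIONS = {"I": I2, "X": X, "Y": Y, "Z": Z}          # finite action set A

def depolarize(rho, p=0.1):
    """Stand-in for the CP map T: depolarizing channel with noise strength p."""
    return (1 - p) * rho + p * I2 / 2

def payoff(rho, proj=np.diag([1.0, 0.0])):
    """Q(rho) = Tr(rho * Pi): probability of measuring |0>."""
    return float(np.real(np.trace(rho @ proj)))

def greedy_action(rho):
    """One-step lookahead: pick the unitary maximizing the post-transition payoff."""
    scores = {name: payoff(depolarize(U @ rho @ U.conj().T)) for name, U in ACTIONS.items()}
    return max(scores, key=scores.get), scores

rho0 = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)   # start in |1><1|
print(greedy_action(rho0))   # "X" (or equally "Y") flips toward |0>, giving the best payoff
```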
Quantum games: a review of the history, current state, and interpretation <s> Experimental realizations <s> After a brief introduction to the principles and promise of quantum information processing, the requirements for the physical implementation of quantum computation are discussed. These five requirements, plus two relating to the communication of quantum information, are extensively explored and related to the many schemes in atomic physics, quantum optics, nuclear and electron magnetic resonance spectroscopy, superconducting electronics, and quantum-dot physics, for achieving quantum computing. The advent of quantum information processing, as an abstract concept, has given birth to a great deal of new thinking, of a very concrete form, about how to create physical computing devices that operate in the hitherto unexplored quantum mechanical regime. The efforts now underway to produce working laboratory devices that perform this profoundly new form of information processing are the subject of this book. In this chapter I provide an overview of the common objectives of the investigations reported in the remainder of this special issue. The scope of the approaches, proposed and underway, to the implementation of quantum hardware is remarkable, emerging from specialties in atomic physics (1), in quantum optics (2), in nuclear (3) and electron (4) magnetic resonance spectroscopy, in superconducting device physics (5), in electron physics (6), and in mesoscopic and quantum dot research (7). This amazing variety of approaches has arisen because, as we will see, the principles of quantum computing are posed using the most fundamental ideas of quantum mechanics, ones whose embodiment can be contemplated in virtually every branch of quantum physics. The interdisciplinary spirit which has been fostered as a result is one of the most pleasant and remarkable features of this field. The excitement and freshness that has been produced bodes well for the prospect for discovery, invention, and innovation in this endeavor. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Experimental realizations <s> Realizing robust quantum information transfer between long-lived qubit registers is a key challenge for quantum information science and technology. Here we demonstrate unconditional teleportation of arbitrary quantum states between diamond spin qubits separated by 3 meters. We prepare the teleporter through photon-mediated heralded entanglement between two distant electron spins and subsequently encode the source qubit in a single nuclear spin. By realizing a fully deterministic Bell-state measurement combined with real-time feed-forward, quantum teleportation is achieved upon each attempt with an average state fidelity exceeding the classical limit. These results establish diamond spin qubits as a prime candidate for the realization of quantum networks for quantum communication and network-based quantum computing. <s> BIB002
The implementation of quantum games on hardware can be viewed as a small quantum computation; in that sense, the requirements for a good platform on which to perform a quantum game are the same as those for a quantum computer BIB001. The quantum computer may not need to be a universal computer, but it requires both single- and two-qubit gates. Quantum computers with the capabilities required for quantum games are just beginning to come online (see for example www.research.ibm.com/ibm-q), and full quantum networks are in their infancy BIB002; thus, many of the experimental demonstrations to date have been performed on hardware that is not ideal from the point of view of the criteria above. Notably, though, unlike many interesting quantum computing algorithms, quantum games are typically performed with very few qubits, making them an attractive application for early demonstrations on emerging quantum hardware. The potential applications and uses of quantum games suggest that certain characteristics are desirable for their implementation, beyond just those desirable for quantum computation. First, by definition, quantum games contain several independent agents. For realistic applications this likely requires that the agents be remotely located, which in turn requires not only a small quantum computation but also some form of network. The network would need to be able to transmit quantum resources, for example, to produce entangled pairs that are shared between two remote locations, which is typically done with photons. Second, a quantum game needs to have input from some independent agent, either a human or a computer. This may require some wait time for the interaction with a classical system to occur, perhaps while the agent makes their decision. This implies that another desirable characteristic for quantum hardware is a quantum memory, that is, the ability to store the quantum information for some variable amount of time during the computation. Typically, the capabilities of a quantum information processor are quantified by the ratio of the coherence time to the time it takes to perform gates, whereas in quantum games the actual coherence time of the qubits may be a necessary metric in itself. One may wonder what it even means to perform an experimental demonstration of a quantum game. No experiment has ever had actual independent agents (i.e., humans or computers) play a game on quantum hardware in real time; thus the implementations of games to date have been, in some sense, partial implementations. The games are typically implemented by using quantum hardware to run the circuit, or circuit equivalent, of the game with the strategy choices of the Nash equilibria, where the strategy choices that form the Nash equilibria are determined by theoretical analysis of the quantum game in question. The output states of the experiment are then weighted by the payoff matrix, and the payoff at Nash equilibria is reported and compared to that of the classical case. One can view this as a type of bench-marking experiment in which the hardware is bench-marked against a game-theoretic application rather than with random operations. The games that have been implemented are always set up to have a larger payoff in the quantum equilibrium than in the classical case, presumably because these are the interesting games to quantum physicists. Because of this, the effect of noise or decoherence is almost always to lower the payoff of the players.
It is generally seen that the equilibrium payoffs of the quantum games still outperform those of the classical game even with some amount of decoherence. It should be noted that this section is concerned with evaluating the hardware for quantum games; as such, the specific game-theoretic results of the games that were performed will not be discussed, only the relative merits of each physical implementation.
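As a concrete illustration of how such bench-marking results are processed, the short sketch below weights measured output-state populations by a payoff matrix to obtain each player's expected payoff. The population values and the Prisoner's Dilemma payoff assignment used here are hypothetical stand-ins for experimental counts, not data from any of the cited experiments.

```python
import numpy as np

# Measured populations over the computational basis |CC>, |CD>, |DC>, |DD>
# (C = cooperate, D = defect); hypothetical numbers standing in for experimental counts.
populations = np.array([0.86, 0.05, 0.05, 0.04])
populations = populations / populations.sum()   # renormalize against measurement loss

# A standard Prisoner's Dilemma payoff assignment (rows: outcomes; columns: player A, player B).
payoff_table = np.array([
    [3, 3],   # both cooperate
    [0, 5],   # A cooperates, B defects
    [5, 0],   # A defects, B cooperates
    [1, 1],   # both defect
])

expected_payoffs = populations @ payoff_table
print("Player A:", expected_payoffs[0], " Player B:", expected_payoffs[1])
```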
Quantum games: a review of the history, current state, and interpretation <s> Nuclear magnetic resonance <s> We generalize the quantum prisoner's dilemma to the case where the players share a nonmaximally entangled states. We show that the game exhibits an intriguing structure as a function of the amount of entanglement with two thresholds which separate a classical region, an intermediate region, and a fully quantum region. Furthermore this quantum game is experimentally realized on our nuclear magnetic resonance quantum computer. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Nuclear magnetic resonance <s> In a three player quantum `Dilemma' game each player takes independent decisions to maximize his/her individual gain. The optimal strategy in the quantum version of this game has a higher payoff compared to its classical counterpart. However, this advantage is lost if the initial qubits provided to the players are from a noisy source. We have experimentally implemented the three player quantum version of the `Dilemma' game as described by Johnson, [N.F. Johnson, Phys. Rev. A 63 (2001) 020302(R)] using nuclear magnetic resonance quantum information processor and have experimentally verified that the payoff of the quantum game for various levels of corruption matches the theoretical payoff. <s> BIB002
The first experimental realization of a quantum game was performed on a two-qubit NMR quantum computer BIB001. The computations are performed on the spin-spin interactions of hydrogen atoms in a deuterated cytosine molecule, whose spins interact with a strength of 7.17 Hz. They examined the payoff of the quantum game at Nash equilibrium as a function of the amount of entanglement. The experimentally determined payoffs showed good agreement with theory, with an error of 8 percent. In total, they computed 19 data points, each of which took 300 ms to compute, compared to the NMR qubit coherence time of ∼3 seconds. An NMR system has also demonstrated a three-qubit game BIB002. This game is performed on the hydrogen, fluorine, and carbon atoms in a ¹³CHFBr₂ molecule. The single-qubit resonances are in the hundreds of MHz, while the couplings between spins are tens to hundreds of Hz. For their theoretical analysis they used three possible strategy choices, resulting in 27 possible strategy choice sets, which can be classified into 10 classes. They show the populations of all 8 possible states in the three-qubit system, and thus the expected payoff, for each class of strategy choice sets. They ran the game 11 times, varying the amount of noise on the initial state, thus directly showing the decrease of the payoff as the noise increases. Their experimental populations had a discrepancy of 15 to 20 percent with the theory. Quantum computations on NMR-based systems are performed on ensembles of qubits and can have relatively large signal sizes. However, there do not appear to be promising avenues for scaling to larger numbers of qubits or for interfacing with a quantum communication scheme. Also, NMR computers are not capable of initializing in pure quantum states. Thus, methods have been developed to initialize the system in approximately pure states [106], but there is uncertainty as to whether such mixed states actually exhibit entanglement or whether they are separable [107].
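The NMR demonstrations above follow an EWL-style protocol: an entangling gate J(γ) is applied before, and its inverse after, the players' local strategy unitaries, and the payoff is read off from the final-state populations. The following is a minimal simulation sketch of that protocol under one common parameterization and a standard Prisoner's Dilemma payoff table; it illustrates how the payoff depends on the entanglement parameter γ rather than reproducing the pulse sequences of the NMR experiments.

```python
import numpy as np

# Payoff table for outcomes |00>, |01>, |10>, |11> (first bit: Alice);
# the standard Prisoner's Dilemma assignment (3,3), (0,5), (5,0), (1,1).
PAYOFF = np.array([[3, 3], [0, 5], [5, 0], [1, 1]], dtype=float)

def strategy(theta, phi):
    """Two-parameter strategy unitary of the EWL-style protocol (one common convention)."""
    return np.array([[np.exp(1j * phi) * np.cos(theta / 2), np.sin(theta / 2)],
                     [-np.sin(theta / 2), np.exp(-1j * phi) * np.cos(theta / 2)]])

def payoffs(Ua, Ub, gamma):
    """Expected payoffs (Alice, Bob) for strategies Ua, Ub and entangling parameter gamma."""
    D = strategy(np.pi, 0)                                  # the 'defect' operator
    J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * np.kron(D, D)
    psi0 = np.zeros(4, dtype=complex)
    psi0[0] = 1.0                                           # both players start in |0> ('cooperate')
    psi = J.conj().T @ np.kron(Ua, Ub) @ J @ psi0           # J, local strategies, then J-dagger
    return (np.abs(psi) ** 2) @ PAYOFF

C, D, Q = strategy(0, 0), strategy(np.pi, 0), strategy(0, np.pi / 2)
for gamma in [0.0, np.pi / 4, np.pi / 2]:                   # no, partial, and maximal entanglement
    print(f"gamma={gamma:.2f}  (Q,Q) -> {payoffs(Q, Q, gamma)}  (D,D) -> {payoffs(D, D, gamma)}")
```

At γ = π/2 this reproduces the well-known result that the (Q, Q) strategy pair yields the Pareto-optimal payoff (3, 3), while mutual defection remains at (1, 1).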
Quantum games: a review of the history, current state, and interpretation <s> Linear optics <s> We report the first demonstration of a quantum game on an all- optical one-way quantum computer. Following a recent theoretical proposal we implement a quantum version of Prisoner's Dilemma, where the quantum circuit is realized by a four-qubit box-cluster configuration and the player's local strategies by measurements performed on the physical qubits of the cluster. This demonstration underlines the strength and versatility of the one-way model and we expect that this will trigger further interest in designing quantum protocols and algorithms to be tested in state-of-the-art cluster resources. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Linear optics <s> Quantum gambling --- a secure remote two-party protocol which has no classical counterpart --- is demonstrated through optical approach. A photon is prepared by Alice in a superposition state of two potential paths. Then one path leads to Bob and is split into two parts. The security is confirmed by quantum interference between Alice's path and one part of Bob's path. It is shown that a practical quantum gambling machine can be feasible by this way. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Linear optics <s> Game theory is central to the understanding of competitive interactions arising in many fields, from the social and physical sciences to economics. Recently, as the definition of information is generalized to include entangled quantum systems, quantum game theory has emerged as a framework for understanding the competitive flow of quantum information. Up till now only two-player quantum games have been demonstrated. Here we report the first experiment that implements a four-player quantum Minority game over tunable four-partite entangled states encoded in the polarization of single photons. Experimental application of appropriate quantum player strategies give equilibrium payoff values well above those achievable in the classical game. These results are in excellent quantitative agreement with our theoretical analysis of the symmetric Pareto optimal strategies. Our result demonstrate for the first time how non-trivial equilibria can arise in a competitive situation involving quantum agents and pave the way for a range of quantum transaction applications. <s> BIB003 </s> Quantum games: a review of the history, current state, and interpretation <s> Linear optics <s> We implement a multi-player quantum public-goods game using only bipartite entanglement and two-qubit logic. Within measurement error, the expectation per player follows predicted values as the number of players is increased. <s> BIB004 </s> Quantum games: a review of the history, current state, and interpretation <s> Linear optics <s> We propose and experimentally demonstrate a zero-sum game that is in a fair Nash equilibrium for classical players, but has the property that a quantum player can always win using an appropriate strategy. The gain of the quantum player is measured experimentally for different quantum strategies and input states. It is found that the quantum gain is maximized by a maximally entangled state, but does not decrease to zero when entanglement disappears. Instead, it links with another kind of quantum correlation described by discord for the qubit case and the connection is demonstrated both theoretically and experimentally. <s> BIB005
Implementing quantum games with optical circuits has several appealing characteristics. Optical implementations do not suffer from the uncertainty in entanglement of NMR computing and can potentially have very high fidelities. Gates are implemented with standard optical elements such as beam splitters and waveplates. Also, since the games are performed on photons, they can naturally be adapted to work with remote agents. One possible implementation is to use a single photon and utilize multiple degrees of freedom. Typically, the polarization state of the photon is entangled with its spatial mode. In BIB002, a heavily attenuated He-Ne laser was used as a single-photon source. The single photon is input into a Mach-Zehnder interferometer, where the two paths through the interferometer form the first qubit and the polarization state of the photon forms the second. They are entangled by splitting the photon into the two paths depending on its polarization. Gates are performed by single-photon polarization rotations, i.e., by adding waveplates to the photon's path. They report an error of 1 to 2 percent between the experimentally determined payoff and the theoretical one. Rather than using the path of an interferometer as the spatial degree of freedom, one can also use the transverse modes of a beam. In these implementations, beams of light are incident on holographic masks to produce higher-order transverse modes in a polarization-dependent way. These implementations have typically been done at higher light levels, i.e., ∼mW, and the beams are imaged on a camera to determine the steady-state value of many photons being run in parallel. Another possible implementation using linear optics utilizes cluster states. This has been done for a two-player Prisoner's Dilemma game BIB001. The computation is performed with a four-qubit cluster state, and gates are performed by measurements of photons. Spontaneous parametric down-conversion in a non-linear BBO crystal produces entangled photon pairs, which are then interfered with beam splitters and waveplates. The creation of the four-qubit cluster state is post-selected by coincidence clicks, so that runs are only counted if four single-photon detectors registered a photon. The experimentalists can also characterize their output with full quantum tomography, and they report a fidelity of sixty-two percent. Rather than producing cluster states, one can take the entangled photon pairs output from a non-linear crystal and perform gates in much the same way as in the single-photon case BIB004 BIB003. These approaches have reported fidelities around seventy to eighty percent. A four-player quantum game has been implemented with a spontaneous parametric down-conversion process that produces four photons in two entangled pairs BIB004. Again, the information is encoded into the polarization and spatial mode of the photons. This method, with two entangled-pair inputs, can naturally be set up to input a continuous set of initial states. The initial entangled state in this implementation is a superposition of a GHZ state and products of Bell states. Here, the fidelities are reported to be near 75 percent, which results in errors in the payoff at equilibrium of about 10 percent. Another linear optical implementation sheds light on other types of correlations that can occur in quantum mechanics beyond entanglement, i.e., discord BIB005.
To create states with discord, the measurements are taken with different Bell pairs, and the data are then randomly partitioned into different sets, which produces a statistical mixture of entangled states. Such a mixture is known to have no entanglement as measured by the concurrence, but it retains the quantum correlation of discord. The entangled pairs were produced from spontaneous parametric down-conversion. This experiment reported a fidelity of 95 percent. Notably, even when there is no entanglement, the game can still exhibit a quantum advantage. The linear optical implementations are promising because of their ability to perform games with remotely located agents, and they are capable of high-fidelity quantum information processing. However, they have drawbacks as well. In order to run a different circuit, one must physically rearrange the linear optical elements, such as waveplates and beam splitters; this could be done with liquid crystal displays or other photonic devices, but it would be difficult to scale up to implementations of more complicated games. In addition, the production of larger numbers of entangled photon pairs is experimentally challenging. Together, these factors make it difficult to scale up linear optical implementations to more complicated games. Finally, purely photonic implementations have no memory, and thus may not be conducive to games that require wait time for a decision to be made or some sort of feed-forward on measurements.
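Since several of the optical experiments above quote a state fidelity obtained from tomography, the sketch below shows the quantity typically being reported: the overlap of a reconstructed density matrix with the ideal target state. The "reconstructed" matrix here is a hypothetical noisy mixture standing in for actual tomographic data.

```python
import numpy as np

def fidelity_with_pure_target(rho, psi):
    """Fidelity of a reconstructed density matrix rho with a pure target |psi>:
    F = <psi| rho |psi>, a special case of the general Uhlmann fidelity."""
    return np.real(np.conj(psi) @ rho @ psi)

# Ideal target: the Bell state (|00> + |11>)/sqrt(2).
psi_bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Hypothetical tomographic reconstruction: the target mixed with white noise.
p = 0.25
rho_exp = (1 - p) * np.outer(psi_bell, psi_bell.conj()) + p * np.eye(4) / 4

print(fidelity_with_pure_target(rho_exp, psi_bell))   # ~0.81 for p = 0.25
```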
Quantum games: a review of the history, current state, and interpretation <s> Other proposals <s> We propose a general, scalable framework for implementing two-choices-multiplayer Quantum Games in ion traps. In particular, we discuss two famous examples: the Quantum Prisoners' Dilemma and the Quantum Minority Game. An analysis of decoherence due to intensity fluctuations in the applied laser fields is also provided. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Other proposals <s> In this paper, we propose a scheme for implementing quantum game (QG) in cavity quantum electrodynamics(QED). In the scheme, the cavity is only virtually excited and thus the proposal is insensitive to the cavity fields states and cavity decay. So our proposal can be experimentally realized in the range of current cavity QED techniques. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Other proposals <s> We demonstrate a Bayesian quantum game on an ion trap quantum computer with five qubits. The players share an entangled pair of qubits and perform rotations on their qubit as the strategy choice. Two five-qubit circuits are sufficient to run all 16 possible strategy choice sets in a game with four possible strategies. The data are then parsed into player types randomly in order to combine them classically into a Bayesian framework. We exhaustively compute the possible strategies of the game so that the experimental data can be used to solve for the Nash equilibria of the game directly. Then we compare the payoff at the Nash equilibria and location of phase-change-like transitions obtained from the experimental data to the theory, and study how it changes as a function of the amount of entanglement. <s> BIB003
There are many other potential platforms for quantum information processing, and it is unclear which will become dominant in quantum computation. Trapped-ion systems and cavity QED systems stand out as having all of the characteristics desired in a quantum information processor specifically designed for implementing quantum games: they are potentially powerful quantum computers, they can have long memory times, and they can be coupled to photonic modes for long-distance communication. There have been proposals for implementations of quantum games on such systems BIB001 BIB002. Trapped-ion systems can perform quantum computations with as many as five qubits with very high fidelity. In addition, trapped-ion systems can be coupled to single photons for entangling remote ions. Cavity QED systems have a single atom, or an ensemble of atoms, strongly coupled to a photonic mode. This allows the quantum information of the atomic system, which can be used for information processing, to be mapped to the photonic system for communication purposes with very high fidelity. Recently, a Bayesian quantum game was demonstrated on a five-qubit quantum computer BIB003, where the payoff and the phase-change-like behavior of the game were analyzed as a function of the amount of entanglement.
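The five-qubit demonstration mentioned above determines equilibria by exhaustively running every strategy combination and comparing payoffs. For a small discrete strategy set this reduces to a best-response check over a payoff table, sketched below with hypothetical payoff values; in an experiment, the tables would instead be estimated from the measured populations of each strategy combination.

```python
import numpy as np

def pure_nash_equilibria(payoff_a, payoff_b):
    """Exhaustively find pure-strategy Nash equilibria of a two-player game,
    given payoff tables indexed as [strategy of A, strategy of B]."""
    equilibria = []
    n_a, n_b = payoff_a.shape
    for i in range(n_a):
        for j in range(n_b):
            a_cannot_improve = payoff_a[i, j] >= payoff_a[:, j].max()
            b_cannot_improve = payoff_b[i, j] >= payoff_b[i, :].max()
            if a_cannot_improve and b_cannot_improve:
                equilibria.append((i, j))
    return equilibria

# Hypothetical payoff tables over four discrete strategies per player (a symmetric game).
payoff_a = np.array([[3, 0, 5, 1],
                     [5, 1, 3, 0],
                     [2, 2, 2, 2],
                     [1, 3, 0, 5]], dtype=float)
payoff_b = payoff_a.T.copy()
print(pure_nash_equilibria(payoff_a, payoff_b))
```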
Quantum games: a review of the history, current state, and interpretation <s> Human implementations <s> Game theory suggests quantum information processing technologies could provide useful new economic mechanisms. For example, using shared entangled quantum states can alter incentives so as to reduce the free-rider problem inherent in economic contexts such as public goods provisioning. However, game theory assumes players understand fully the consequences of manipulating quantum states and are rational. Its predictions do not always describe human behavior accurately. To evaluate the potential practicality of quantum economic mechanisms, we experimentally tested how people play the quantum version of the prisoner's dilemma game in a laboratory setting using a simulated version of the underlying quantum physics. Even without formal training in quantum mechanics, people nearly achieve the payoffs theory predicts, but do not use mixed-strategy Nash equilibria predicted by game theory. Moreover, this correspondence with game theory for the quantum game is closer than that of the classical game. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Human implementations <s> We describe human-subject laboratory experiments on probabilistic auctions based on previously proposed auction protocols involving the simulated manipulation and communication of quantum states. These auctions are probabilistic in determining which bidder wins, or having no winner, rather than always having the highest bidder win. Comparing two quantum protocols in the context of first-price sealed bid auctions, we find the one predicted to be superior by game theory also performs better experimentally. We also compare with a conventional first price auction, which gives higher performance. Thus to provide benefits, the quantum protocol requires more complex economic scenarios such as maintaining privacy of bids over a series of related auctions or involving allocative externalities. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Human implementations <s> We demonstrate a Bayesian quantum game on an ion trap quantum computer with five qubits. The players share an entangled pair of qubits and perform rotations on their qubit as the strategy choice. Two five-qubit circuits are sufficient to run all 16 possible strategy choice sets in a game with four possible strategies. The data are then parsed into player types randomly in order to combine them classically into a Bayesian framework. We exhaustively compute the possible strategies of the game so that the experimental data can be used to solve for the Nash equilibria of the game directly. Then we compare the payoff at the Nash equilibria and location of phase-change-like transitions obtained from the experimental data to the theory, and study how it changes as a function of the amount of entanglement. <s> BIB003
In addition to the bench-marking types of demonstrations described above, there is a separate interpretation of what it means to implement a quantum game: having actual agents play a game with quantum rules. For real-world applications of quantum games, it is interesting to speculate on whether it will be possible for players to effectively learn the correct strategies in a quantum game if they have no training in quantum theory. In fact, one of the biggest problems of classical game theory is that players do not act entirely rationally, and thus the equilibria of a game are only a guide to what real players will do. This problem may be exacerbated in quantum games by the fact that the players will likely have little or no knowledge of quantum mechanics or entanglement. There have been a few experiments to research this question BIB001 BIB002. Due to the limited availability of quantum hardware, and in order to ease implementation, the quantum circuits in these experiments were simulated on a classical computer. Though a simulated quantum game will give the same outputs as a quantum computer, there are none of the benefits afforded by the absolute physical security of quantum communication protocols, which is likely a very desirable quality of quantum games. In addition, if the quantum games become sufficiently complex, it may not be possible to simulate them efficiently on a classical computer, as the number of states in a computation grows as 2^n, where n is the number of qubits, as is well known in quantum computing. In BIB003, the players were randomly paired and played the quantum Prisoner's Dilemma game, and the results were compared for classical versus quantum rules. They also performed one experiment where the players played repeatedly with the same partner. In the classical interpretation of the Prisoner's Dilemma, one can interpret the people who play the Pareto-optimal strategy choice, even though it is not a Nash equilibrium, as altruistic. In any real instantiation of the game, there will be some players who play the altruistic option, even though, strictly, it lowers their individual payoff. As such, the prediction of the Nash equilibria from game theory can be interpreted as a guide to what players may do, especially in repeated games. When players played the game with quantum rules, they tended to play the altruistic option more often than in the classical case, as predicted by the Nash equilibria of the quantum version of the game. This at least shows that players who had no formal training in quantum mechanics, though they had some instruction in the rules of the game, were capable of playing rationally, that is, of maximizing their payoff. Interestingly, the game-theoretic prediction was closer to the observed behavior in the quantum game than the classical prediction was to the behavior of players in the classical game. The players of the classical game played less rationally than those of the quantum game, and there was more variation between players in the classical version. These results may suggest that the players have more preconceptions about the strategy choices in the classical version than in the quantum version, where the interpretation is more complicated. In the classical version, they can choose to cooperate or defect independently, while in the quantum version, whether or not they ultimately cooperate also depends on the strategy choice of their opponent.
Preconceptions about the strategy choices in the classical game may provide influences beyond the desire to simply maximize one's own payoff and lead to larger deviations from the game-theoretic prediction. A full implementation of a quantum game with real players on quantum hardware has not been performed. Yet demonstrations of quantum game circuits on quantum hardware are compelling because they provide interesting results while using only small numbers of qubits. As quantum networks and quantum computers become more developed, we expect that quantum games will play a role in their adoption on a larger scale, either as applications or as a diagnostic tool for the quantum hardware.
Quantum games: a review of the history, current state, and interpretation <s> Future applications for quantum game theory <s> A binary game is introduced and analysed. N players have to choose one of the two sides independently and those on the minority side win. Players use a finite set of ad hoc strategies to make their decision, based on the past record. The analysing power is limited and can adapt when necessary. Interesting cooperation and competition patterns of the society seem to arise and to be responsive to the payoff function. <s> BIB001 </s> Quantum games: a review of the history, current state, and interpretation <s> Future applications for quantum game theory <s> Recently the concept of quantum information has been introduced into game theory. Here we present the first study of quantum games with more than two players. We discover that such games can possess an alternative form of equilibrium strategy, one which has no analog either in traditional games or even in two-player quantum games. In these ``coherent'' equilibria, entanglement shared among multiple players enables different kinds of cooperative behavior: indeed it can act as a contract, in the sense that it prevents players from successfully betraying one another. <s> BIB002 </s> Quantum games: a review of the history, current state, and interpretation <s> Future applications for quantum game theory <s> We consider two aspects of quantum game theory: the extent to which the quantum solution solves the original classical game, and to what extent the new solution can be obtained in a classical model. <s> BIB003 </s> Quantum games: a review of the history, current state, and interpretation <s> Future applications for quantum game theory <s> A quantum version of the Minority game for an arbitrary number of agents is studied. When the number of agents is odd, quantizing the game produces no advantage to the players, however, for an even number of agents new Nash equilibria appear that have no classical analogue. The new Nash equilibria provide far preferable expected payoffs to the players compared to the equivalent classical game. The effect on the Nash equilibrium payoff of reducing the degree of entanglement, or of introducing decoherence into the model, is indicated. <s> BIB004 </s> Quantum games: a review of the history, current state, and interpretation <s> Future applications for quantum game theory <s> We consider a directed network in which every edge possesses a latency function that specifies the time needed to traverse the edge given its congestion. Selfish, noncooperative agents constitute the network traffic and wish to travel from a source vertex s to a destination t as quickly as possible. Since the route chosen by one network user affects the congestion experienced by others, we model the problem as a noncooperative game. Assuming that each agent controls only a negligible portion of the overall traffic, Nash equilibria in this noncooperative game correspond to s-t flows in which all flow paths have equal latency.A natural measure for the performance of a network used by selfish agents is the common latency experienced by users in a Nash equilibrium. Braess's Paradox is the counterintuitive but well-known fact that removing edges from a network can improve its performance. Braess's Paradox motivates the following network design problem: given a network, which edges should be removed to obtain the best flow at Nash equilibrium? 
Equivalently, given a network of edges that can be built, which subnetwork will exhibit the best performance when used selfishly?We give optimal inapproximability results and approximation algorithms for this network design problem. For example, we prove that there is no approximation algorithm for this problem with approximation ratio less than n/2, where n is the number of network vertices, unless P = NP. We further show that this hardness result is the best possible, by exhibiting an (n/2)-approximation algorithm. We also prove tight inapproximability results when additional structure, such as linearity, is imposed on the network latency functions.Moreover, we prove that an optimal approximation algorithm for these problems is the trivial algorithm: given a network of candidate edges, build the entire network. As a consequence, we show that Braess's Paradox--even in its worst-possible manifestations--is impossible to detect efficiently.En route to these results, we give a fundamental generalization of Braess's Paradox: the improvement in performance that can be effected by removing edges can be arbitrarily large in large networks. Even though Braess's Paradox has enjoyed 35 years as a textbook example, our result is the first to extend its severity beyond that in Braess's original four-node network. <s> BIB005 </s> Quantum games: a review of the history, current state, and interpretation <s> Future applications for quantum game theory <s> The digital revolution of the information age and in particular the sweeping changes of scientific communication brought about by computing and novel communication technology, potentiate global, high grade scientific information for free. The arXiv for example is the leading scientific communication platform, mainly for mathematics and physics, where everyone in the world has free access on. While in some scientific disciplines the open access way is successfully realized, other disciplines (e.g. humanities and social sciences) dwell on the traditional path, even though many scientists belonging to these communities approve the open access principle. In this paper we try to explain these different publication patterns by using a game theoretical approach. Based on the assumption, that the main goal of scientists is the maximization of their reputation, we model different possible game settings, namely a zero sum game, the prisoners’ dilemma case and a version of the stag hunt game, that show the dilemma of scientists belonging to “non-open access communities”. From an individual perspective, they have no incentive to deviate from the Nash Equilibrium of traditional publishing. By extending the model using the quantum game theory approach it can be shown, that if the strength of entanglement exceeds a certain value, the scientists will overcome the dilemma and terminate to publish only traditionally in all three settings. <s> BIB006 </s> Quantum games: a review of the history, current state, and interpretation <s> Future applications for quantum game theory <s> Recent spectrum-sharing research has produced a strategy to address spectrum scarcity problems. This novel idea, named cognitive radio, considers that secondary users can opportunistically exploit spectrum holes left temporarily unused by primary users. This presents a competitive scenario among cognitive users, making it suitable for game theory treatment. 
In this work, we show that the spectrum-sharing benefits of cognitive radio can be increased by designing a medium access control based on quantum game theory. In this context, we propose a model to manage spectrum fairly and effectively, based on a multiple-users multiple-choice quantum minority game. By taking advantage of quantum entanglement and quantum interference, it is possible to reduce the probability of collision problems commonly associated with classic algorithms. Collision avoidance is an essential property for classic and quantum communications systems. In our model, two different scenarios are considered, to meet the requirements of different user strategies. The first considers sensor networks where the rational use of energy is a cornerstone; the second focuses on installations where the quality of service of the entire network is a priority. <s> BIB007 </s> Quantum games: a review of the history, current state, and interpretation <s> Future applications for quantum game theory <s> We discuss the connection between a class of distributed quantum games, with remotely located players, to the counter intuitive Braess' paradox of traffic flow that is an important design consideration in generic networks where the addition of a zero cost edge decreases the efficiency of the network. A quantization scheme applicable to non-atomic routing games is applied to the canonical example of the network used in Braess' Paradox. The quantum players are modeled by simulating repeated game play. The players are allowed to sample their local payoff function and update their strategies based on a selfish routing condition in order to minimize their own cost, leading to the Wardrop equilibrium flow. The equilibrium flow in the classical network has a higher cost than the optimal flow. If the players have access to quantum resources, we find that the cost at equilibrium can be reduced to the optimal cost, resolving the paradox. <s> BIB008
Applications of conventional game theory have played an important role in many modern strategic decision-making processes, including diplomacy, economics, national security, and business. These applications typically reduce a domain-specific problem to a game-theoretic setting such that a domain-specific solution can be developed by studying the game-theoretic solution. A well-known example is the game of "Chicken" applied to studies of international politics during the Cold War. In Chicken, two players independently select from strategies that either engage in conflict or avoid it. Schelling has cited the study of this game as influential in understanding the Cuban Missile Crisis. More broadly, studying these games enables an understanding of how rational and irrational players select strategies, an insight that has played an important role in nuclear brinkmanship. The method of recognizing formal game-theoretic solutions within domain-specific applications may also extend to quantum game-theoretic concepts. This requires a model for the game that accounts for the inclusion of unique quantum resources, including shared entangled states. For example, Zabaleta et al. BIB007 have investigated a quantum game model for the problem of spectrum sharing in wireless communication environments in which transmitters compete for access. Their application is cast as a version of the minority game put forward by Challet and Zhang BIB001 and first studied in quantized form by Benjamin and Hayden BIB002 and also by Flitney and Hollenberg BIB004. For Zabaleta et al., a base station distributes an n-partite entangled quantum state among n individual transmitters, i.e., players, who then apply local strategies to each part of the quantum state before measuring. Based on the observed, correlated outcomes, the players select whether to transmit (1) or wait (0). Zabaleta et al. showed that using the quantum resource in this game reduces the probability of transmission collision by a factor of n while retaining fairness in access management. In a related application, Solmeyer et al. investigated a quantum routing game for sending transmissions through a communication network BIB008. The conventional routing game has been extensively studied as a representation of flow strategies in real-world networks; consider, for example, Braess' paradox, in which adding more routes does not always improve flow BIB005. Solmeyer et al. developed a quantized version of the routing game, modified to include a distributed quantum state between players representing the nodes within the network. Each player is permitted to apply a local quantum strategy to their part of the state in the form of a unitary rotation before measuring. Solmeyer et al. simulated the total cost of network flow in terms of overall latency and found that the minimal cost is realized when using a partially entangled state between nodes. Notably, their results demonstrated Braess' paradox, but only in the cases of maximal and vanishing entanglement. If, and when, quantum networks become a reality, with multiple independent quantum agents operating distributed applications, quantum game theory may not only provide possible applications, but may also be necessary for their analysis. In the field of decision science, Hanauske et al. have applied quantum game theory to open-access publishing decisions in the scientific literature BIB006.
Motivated by the different publication patterns observed across scientific disciplines, they perform a comparative analysis of open-access choices using three different games: zero-sum, Prisoner's Dilemma, and stag hunt. The formal solutions of each of these classical games provide Nash equilibria that either discourage open-access publication or include this choice only as a minority component of a mixed strategy. By contrast, Hanauske et al. found that quantized versions of these games, which include distributed quantum resources, yield Nash equilibria that favor open-access publication. In this case, quantum game theory may provide a more general probability theory for forming a descriptive analysis of such socially constructed environments. In addition to decision-making applications, game theory may also serve as a model for understanding competitive processes such as those found in ecological or social systems. It is an outstanding question whether quantum game theory can provide new methods for these studies. In addition to the study of classical processes, such as evolution and natural selection, quantum game theory also shows promise for the study of strictly quantum mechanical processes. In particular, several non-cooperative processes underlie existing approaches to the development of quantum technology, including quantum control, quantum error correction, and fault-tolerant quantum operations. Each of these application areas requires a solution to the competition between the user and the environment, which may be considered a 'player' in the game-theoretic setting. The solutions to these specific applications require a model of the underlying quantum mechanical dynamics and interactions, which is better suited to quantum game theory. A fundamental concern for any practical application of game theory is the efficiency of the implementation. A particular concern for a quantum game solution is the relative cost of quantum resources, including entangled states and measurement operations. Currently, high-fidelity, addressable qubits are expensive to fabricate, store, operate, and measure, though these quantum resources are likely to fall in relative cost over time. For some users, however, the expense of not finding the best solution will always outweigh the associated implementation cost, and the cost argument need not apply to those applications where quantum game theory provides a truly unique advantage. van Enk and Pike have remarked that some quantum games can be reduced to a similar classical game, often by incorporating a classical source of advice BIB003. The effect of this advice is to introduce correlations into the player strategies in a way that is similar to how a distributed quantum state provides a means of generating correlated strategies. For example, Brunner and Linden have shown how non-local outcomes in Bell tests can be modeled by conventional Bayesian game theory. This raises the question of whether it is ever necessary to formulate a problem in a quantum game-theoretic setting. As demonstrated above, there are many situations for which distributed quantum resources offer a more natural formulation, e.g., quantum networking, and such formulations are at least realistic if not necessary. The current availability of prototype general-purpose quantum processors provides opportunities for the continued study of quantum game theory.
This will include experimental studies of how users interact with quantum games, as well as the translation of quantum strategies into real-world settings. However, quantum networks are likely to be needed for field testing quantum game applications, as most require the distribution of a quantum resource between multiple players. Alongside moderate-duration quantum memories and high-fidelity entangling operations, these quantum networks must also provide players with synchronized classical control frameworks and infrastructure. These prototype quantum gaming networks may then evolve toward more robust routing methods.
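As a simple point of reference for the spectrum-sharing application discussed above, the sketch below simulates the purely classical medium-access baseline that the quantum minority-game protocol is designed to improve upon: n transmitters decide independently whether to transmit, and a slot is wasted whenever zero or more than one of them transmit. The quantum protocol itself (a shared entangled state plus local operations and measurement) is not reproduced here, and the parameters and function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def classical_access(n_players, p_transmit, n_rounds=100_000):
    """Baseline: each transmitter independently transmits with probability p_transmit.
    A round succeeds only if exactly one player transmits (no collision, no idle slot)."""
    choices = rng.random((n_rounds, n_players)) < p_transmit
    transmitters = choices.sum(axis=1)
    return {
        "success": float(np.mean(transmitters == 1)),
        "collision": float(np.mean(transmitters > 1)),
        "idle": float(np.mean(transmitters == 0)),
    }

# With n independent players, transmitting with probability roughly 1/n is the natural choice.
n = 4
print(classical_access(n, 1.0 / n))
```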
Kernels for Vector-Valued Functions: A Review <s> Introduction <s> This paper investigates learning in a lifelong context. Lifelong learning addresses situations in which a learner faces a whole stream of learning tasks. Such scenarios provide the opportunity to transfer knowledge across multiple learning tasks, in order to generalize more accurately from less training data. In this paper, several different approaches to lifelong learning are described, and applied in an object recognition domain. It is shown that across the board, lifelong learning approaches generalize consistently more accurately from less training data, by their ability to transfer knowledge across learning tasks. <s> BIB001 </s> Kernels for Vector-Valued Functions: A Review <s> Introduction <s> In this paper, we address the problem of statistical learning for multi-topic text categorization (MTC), whose goal is to choose all relevant topics (a label) from a given set of topics. The proposed algorithm, Maximal Margin Labeling (MML), treats all possible labels as independent classes and learns a multi-class classifier on the induced multi-class categorization problem. To cope with the data sparseness caused by the huge number of possible labels, MML combines some prior knowledge about label prototypes and a maximal margin criterion in a novel way. Experiments with multi-topic Web pages show that MML outperforms existing learning algorithms including Support Vector Machines. <s> BIB002 </s> Kernels for Vector-Valued Functions: A Review <s> Introduction <s> Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems. <s> BIB003 </s> Kernels for Vector-Valued Functions: A Review <s> Introduction <s> In this paper we investigate multi-task learning in the context of Gaussian Processes (GP). We propose a model that learns a shared covariance function on input-dependent features and a "free-form" covariance matrix over tasks. This allows for good flexibility when modelling inter-task dependencies while avoiding the need for large amounts of data for training. We show that under the assumption of noise-free observations and a block design, predictions for a given task only depend on its target values and therefore a cancellation of inter-task transfer occurs. We evaluate the benefits of our model on two practical applications: a compiler performance prediction problem and an exam score prediction task. 
Additionally, we make use of GP approximations and properties of our model in order to provide scalability to large data sets. <s> BIB004 </s> Kernels for Vector-Valued Functions: A Review <s> Introduction <s> In this paper, we describe a novel, computationally efficient algorithm that facilitates the autonomous acquisition of readings from sensor networks (deciding when and which sensor to acquire readings from at any time), and which can, with minimal domain knowledge, perform a range of information processing tasks including modelling the accuracy of the sensor readings, predicting the value of missing sensor readings, and predicting how the monitored environmental variables will evolve into the future. Our motivating scenario is the need to provide situational awareness support to first responders at the scene of a large scale incident, and to this end, we describe a novel iterative formulation of a multi-output Gaussian process that can build and exploit a probabilistic model of the environmental variables being measured (including the correlations and delays that exist between them). We validate our approach using data collected from a network of weather sensors located on the south coast of England. <s> BIB005 </s> Kernels for Vector-Valued Functions: A Review <s> Introduction <s> We introduce Gaussian process dynamical models (GPDMs) for nonlinear time series analysis, with applications to learning models of human pose and motion from high-dimensional motion capture data. A GPDM is a latent variable model. It comprises a low-dimensional latent space with associated dynamics, as well as a map from the latent space to an observation space. We marginalize out the model parameters in closed form by using Gaussian process priors for both the dynamical and the observation mappings. This results in a nonparametric model for dynamical systems that accounts for uncertainty in the model. We demonstrate the approach and compare four learning algorithms on human motion capture data, in which each pose is 50-dimensional. Despite the use of small data sets, the GPDM learns an effective representation of the nonlinear dynamics in these spaces. <s> BIB006
Many modern applications of machine learning require solving several decision making or prediction problems, and exploiting dependencies between the problems is often the key to obtaining better results and coping with a lack of data (to solve one problem we can borrow strength from a distinct but related problem). In sensor networks, for example, missing signals from certain sensors may be predicted by exploiting their correlation with observed signals acquired from other sensors BIB005. In geostatistics, predicting the concentration of heavy pollutant metals, which are expensive to measure, can be done using inexpensive and oversampled variables as a proxy. In computer graphics, a common theme is the animation and simulation of physically plausible humanoid motion. Given a set of poses that delineate a particular movement (for example, walking), we are faced with the task of completing a sequence by filling in the missing frames with natural-looking poses. Human movement exhibits a high degree of correlation. Consider, for example, the way we walk. When moving the right leg forward, we unconsciously prepare the left leg, which is currently touching the ground, to start moving as soon as the right leg reaches the floor. At the same time, our hands move synchronously with our legs. We can exploit these implicit correlations for predicting new poses and for generating new natural-looking walking sequences BIB006. In text categorization, one document can be assigned to multiple topics or have multiple labels BIB002. In all the examples above, the simplest approach ignores the potential correlation among the different output components of the problem and employs models that make predictions individually for each output. However, these examples suggest a different approach through a joint prediction exploiting the interaction between the different components to improve on individual predictions. Within the machine learning community this type of modeling is often broadly referred to as multitask learning. Again, the key idea is that information shared between different tasks can lead to improved performance in comparison to learning the same tasks individually. These ideas are related to transfer learning BIB001 BIB003 BIB004, a term which refers to systems that learn by transferring knowledge between different domains, for example: "what can we learn about running through seeing walking?" More formally, the classical supervised learning problem requires estimating the output for any given input x_*; an estimator f_*(x_*) is built on the basis of a training set consisting of N input-output pairs S = (X, Y) = {(x_1, y_1), . . . , (x_N, y_N)}. The input space X is usually a space of vectors, while the output space is a space of scalars. In multiple output learning (MOL) the output space is a space of vectors; the estimator is now a vector-valued function f. Indeed, this situation can also be described as the problem of solving D distinct classical supervised problems, where each problem is described by one of the components f_1, . . . , f_D of f. As mentioned before, the key idea is to work under the assumption that the problems are in some way related; the idea is then to exploit the relations among the problems to improve upon solving each problem separately. The goal of this survey is twofold. First, we aim at discussing recent results in multi-output/multi-task learning based on kernel methods and Gaussian processes, providing an account of the state of the art in the field.
Second, we analyze systematically the connections between Bayesian and regularization (frequentist) approaches. Indeed, related techniques have been proposed from different perspectives and drawing clearer connections can boost advances in the field, while fostering collaborations between different communities. The plan of the paper follows. In chapter 2 we give a brief review of the main ideas underlying kernel methods for scalar learning, introducing the concepts of regularization in reproducing kernel Hilbert spaces and Gaussian processes. In chapter 3 we describe how similar concepts extend to the context of vector valued functions and discuss different settings that can be considered. In chapters 4 and 5 we discuss approaches to constructing multiple output kernels, drawing connections between the Bayesian and regularization frameworks. The parameter estimation problem and the computational complexity problem are both described in chapter 6. In chapter 7 we discuss some potential applications that can be seen as multi-output learning. Finally we conclude in chapter 8 with some remarks and discussion.
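To make the vector-valued setting described in this introduction concrete, here is a minimal sketch of joint prediction over D = 2 correlated outputs using a separable kernel of the form B ⊗ k, where k is a scalar kernel on the inputs and B encodes correlations between outputs (one of the constructions discussed later in the review). The toy data, kernel choices, and function names are illustrative assumptions, not an implementation of any specific method from the literature surveyed here.

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    """Scalar RBF kernel k(x, x') on the inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def fit_predict(X, Y, Xs, B, noise=1e-2, lengthscale=1.0):
    """Multi-output kernel ridge regression / GP posterior mean with a separable kernel
    K = B (outputs) kron k (inputs). X: (N, p) inputs, Y: (N, D) outputs, Xs: test inputs."""
    N, D = Y.shape
    K = np.kron(B, rbf(X, X, lengthscale)) + noise * np.eye(N * D)
    alpha = np.linalg.solve(K, Y.T.reshape(-1))          # outputs stacked as [y^(1); ...; y^(D)]
    Ks = np.kron(B, rbf(Xs, X, lengthscale))             # cross-covariance, test vs. train
    return (Ks @ alpha).reshape(D, -1).T                 # (Ns, D) joint predictions

# Toy data: two noisy, correlated outputs driven by a common latent function (illustrative).
rng = np.random.default_rng(1)
X = rng.uniform(0, 5, size=(30, 1))
f = np.sin(X[:, 0])
Y = np.column_stack([f, 0.8 * f + 0.1]) + 0.05 * rng.standard_normal((30, 2))
B = np.array([[1.0, 0.8], [0.8, 1.0]])                   # output (coregionalization-style) covariance
Xs = np.linspace(0, 5, 7)[:, None]
print(fit_predict(X, Y, Xs, B))
```

Because the two outputs share information through B, observations of one output inform predictions of the other, which is precisely the "borrowing strength" described above.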